Confessions of a QUANT

I have a confession to make. I am a QUANT.

By a QUANT, I mean that I am good at quantitative methods and, because I'm good at them, I tend to gravitate toward them. I simply prefer doing quantitative studies. I conduct mostly mixed methods studies, but I tend to leave the qualitative work to the experts. I don't think I would call myself a post-positivist, and I like to say I'm a pragmatist (I am constantly reflecting on whether this is true for me), but it is clear to me that I lean more toward quantitative methods.

I confess to this because being a QUANT used to make me feel guilty. I have always understood and respected the power of qualitative work, but because I am much better at quantitative work, I have never really worked at improving my qualitative skills. I would go so far as to say that I am a very poor qualitative evaluator. It's a lot easier to hire a qualitative expert than to try to do it myself.

I am not sure being a QUANT is entirely my fault. My graduate school requires a full year of statistics in the first year (intermediate stats, ANOVA, regression, and categorical data analysis) and has offered numerous statistics courses beyond that, which I ate up and even TA'd for (e.g., multivariate statistics, factor analysis, SEM, MLM, IRT). On the other hand, it offers only one qualitative class (which has been offered just three times during my time at the school, and I never had the opportunity to take it) and a new mixed methods class (which I did take, but which did not teach qualitative methods).

Despite being a QUANT, I am going to try to improve my qualitative skills while I am still in graduate school. My dissertation is going to be a sequential explanatory mixed methods design, with a whopping forty interviews at the second stage. I am leading a project that is heavily qualitative and has required me to learn better coding strategies and how to calculate interrater reliability. I’m doing more of the qualitative work in my evaluations rather than leaving it to the experts.
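
For anyone curious what the interrater reliability piece can look like in practice, here is a minimal sketch of one common statistic, Cohen's kappa, for two coders applying the same set of codes to the same excerpts. The coder names, codes, and data below are hypothetical illustrations, not taken from the project mentioned above.

```python
# A minimal sketch of Cohen's kappa for two coders who have each assigned
# one categorical code to the same set of items (e.g., interview excerpts).
# All names and example codes here are hypothetical.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Compute Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(coder_a) == len(coder_b), "Both coders must code the same items"
    n = len(coder_a)

    # Observed agreement: proportion of items both coders coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Chance agreement: based on each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(coder_a) | set(coder_b)) / (n * n)

    # Kappa corrects observed agreement for agreement expected by chance.
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to eight interview excerpts.
coder_1 = ["barrier", "support", "support", "barrier", "other", "support", "barrier", "other"]
coder_2 = ["barrier", "support", "barrier", "barrier", "other", "support", "support", "other"]
print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.62
```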

It has pained me to know so little about a topic I should know well, which is why I am committing to reading Michael Quinn Patton's Qualitative Research and Evaluation Methods (4th ed.) while my fellow students take the qualitative class at school. I'm two chapters in, and we're having great discussions about it. Stay tuned for a post in a month or two with my reflections on this endeavor.

Does This Logic Model Make My Program Look Good?

Over the past several years, data visualization has taken the evaluation community by storm. Today, there are dozens of blogs and online resources to help evaluators hop on the #dataviz train and communicate findings more effectively. The start of a new year is the perfect time to adopt new data visualization trends and apply them to your practice. However, before you jump on the bandwagon, it is worth testing assumptions about what works and what does not. That’s why we at the Claremont Evaluation Center decided to study the effectiveness of data visualization principles applied to logic models.

Read More

Dana presents at Eval17: Surveying children, using vignettes to train staff, and more!

I am really looking forward to meeting you all at the annual AEA conference, Eval17! I wanted to share with you the details of my various presentations and hope you can make it to any of the ones that pique your interest! Continue reading “Dana presents at Eval17: Surveying children, using vignettes to train staff, and more!”

Can evaluators be the bridge in the research-practice gap?

Researchers and practitioners agree that there is a gap between research (or theory) and practice. While the reasons for this gap are plentiful, they boil down to researchers and practitioners comprising two communities (Caplan, 1979) that have different languages, values, reward systems, and priorities. The two communities try to bridge the gap through a variety of methods, including producer-push models (e.g., knowledge transfer, knowledge translation, dissemination, applied research, interdisciplinary scholarship), user-pull models (e.g., evidence-based practice, practitioner inquiry, action research), and exchange models (e.g., research-practice partnerships and collaboratives, knowledge brokers, intermediaries). However, these methods typically focus on researchers or practitioners and do not consider other scholars who could fill this role. Continue reading “Can evaluators be the bridge in the research-practice gap?”

Evaluation is Not Applied Research

What is the difference between evaluation and research, especially applied research? For some, they are one and the same: evaluation and research use the same methods, write the same types of reports, and come to the same conclusions. Evaluation is often described as applied research. For instance, here are some recent descriptions of evaluation: “Evaluation is applied research that aims to assess the worth of a service” (Barker, Pistrang, & Elliott, 2016); “Program evaluation is applied research that asks practical questions and is performed in real-life situations” (Hackbarth & Gall, 2005); and the current editor of the American Journal of Evaluation has said that “evaluation is applied research” (Rallis, 2014). This is confusing for introductory evaluation students, particularly those coming from a research background or studying evaluation at a research institution. Continue reading “Evaluation is Not Applied Research”

Why aren’t evaluators adapting their evaluations to the developmental context?

Overall, my study found that evaluators are less likely to be participatory—both in the overall evaluation process and in data collection methods—when the program beneficiaries are children than when they are adults. Why is this the case? Continue reading “Why aren’t evaluators adapting their evaluations to the developmental context?”

How evaluators adapt their evaluations to the developmental context: Evaluation methods

Knowledge about children is best obtained directly from youth using interviews, focus groups, and surveys. This is in stark contrast to previously common methods such as observation and ethnography, which were used primarily because researchers did not believe youth could provide reliable and valid data.[1]

In my study, I examined whether evaluators collected data about beneficiaries directly (i.e., interviews, focus groups, surveys) or indirectly (i.e., case studies, observations, archival data). If evaluators indicated they would collect data directly from participants, I also asked them questions about their survey- or interview-specific practices. Continue reading “How evaluators adapt their evaluations to the developmental context: Evaluation methods”

How evaluators adapt their evaluations to the developmental context: Evaluation design

What evaluation design is best? This debate over what constitutes credible evidence[1] has raged through the field of evaluation, with some arguing that RCTs are the “gold standard” and others questioning the superiority of the RCT.

This debate is somewhat meaningless when we understand that the evaluation design is chosen based on the evaluation questions. Evaluations examining outcomes or impact are perhaps best served by an experimental (i.e., RCT) or quasi-experimental design, whereas evaluations examining program needs or fidelity of implementation are better served by a descriptive (e.g., case study, observational) or correlational (e.g., cohort study, cross-sectional study) design. Continue reading “How evaluators adapt their evaluations to the developmental context: Evaluation design”

How evaluators adapt their evaluations to the developmental context: Evaluation approach

As mentioned previously, in the context of youth programs a developmentally appropriate evaluation requires a culturally appropriate evaluation. This means including youth, or at a minimum those with knowledge of and experience working with youth, in the evaluation.

In my study, I asked practicing evaluators to describe the levels of involvement there would be across a wide range of stakeholders including school administrators, teachers, parents, program staff, program designers, district personnel, funders, developmental consultants, math consultants, and the tutors and tutees of the program. In particular, I was interested in the levels of involvement of the consultants, the tutors, and the tutees across the evaluators randomly assigned to the child, adolescent, or adult conditions. Continue reading “How evaluators adapt their evaluations to the developmental context: Evaluation approach”