Confessions of a QUANT

I have a confession to make. I am a QUANT.

By a QUANT, I mean that I am good at quantitative methods and, because I’m good at them, I tend to gravitate toward them. I simply prefer doing quantitative studies. I conduct mostly mixed methods studies, but I tend to leave the qualitative work to the experts. I wouldn’t call myself a post-positivist; I like to say I’m a pragmatist (I am constantly reflecting on whether this is true for me), but it is clear to me that I lean more toward quantitative methods.

I confess to this because being a QUANT used to make me feel guilty. I have always understood and respected the power of qualitative work, but because I am much better at quantitative work, I have never really worked at improving my qualitative skills. I would go so far as to say that I am a very poor qualitative evaluator. It’s a lot easier to hire a qualitative expert than to try to do it myself.

I am not sure being a QUANT is entirely my fault. My graduate school requires a full year of statistics in the first year (intermediate statistics, ANOVA, regression, and categorical data analysis) and has offered numerous statistics courses beyond that, which I ate up and even TA’d for (e.g., multivariate statistics, factor analysis, SEM, MLM, IRT). On the other hand, it offers only one qualitative class (which has been offered just three times during my time at the school, and I never had the opportunity to take it) and a new mixed methods class (which I did take, but which did not teach qualitative methods).

Despite being a QUANT, I am going to try to improve my qualitative skills while I am still in graduate school. My dissertation is going to be a sequential explanatory mixed methods design, with a whopping forty interviews at the second stage. I am leading a project that is heavily qualitative and has required me to learn better coding strategies and how to calculate interrater reliability. I’m doing more of the qualitative work in my evaluations rather than leaving it to the experts.
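A quick aside on the interrater reliability piece: one statistic commonly used when two coders apply nominal codes to the same set of excerpts is Cohen's kappa, which compares observed agreement to the agreement expected by chance. Below is a minimal sketch in Python; the coder labels and code names ("barrier", "support") are hypothetical illustrations, not data from my project.

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' nominal codes on the same items (hypothetical sketch)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items where the two coders assign the same code
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codes assigned by two coders to ten interview excerpts
coder_a = ["barrier", "barrier", "support", "support", "barrier",
           "support", "barrier", "support", "support", "barrier"]
coder_b = ["barrier", "support", "support", "support", "barrier",
           "support", "barrier", "barrier", "support", "barrier"]
print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")  # 0.60 for this made-up data

In practice I would lean on an established package rather than hand-rolled code, but writing the formula out this way is a useful check on what "agreement beyond chance" actually means.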

It has pained me to be so unknowledgeable about a topic I should know more about, which is why I am committing to read Michael Quinn Patton’s Qualitative Research and Evaluation Methods (4th ed.) while my fellow students take the qualitative class at school. I’m two chapters in, and we’re already having great discussions about it. Stay tuned for a post in a month or two with my reflections on this endeavor.

Does This Logic Model Make My Program Look Good?

Over the past several years, data visualization has taken the evaluation community by storm. Today, there are dozens of blogs and online resources to help evaluators hop on the #dataviz train and communicate findings more effectively. The start of a new year is the perfect time to adopt new data visualization trends and apply them to your practice. However, before you jump on the bandwagon, it is worth testing assumptions about what works and what does not. That’s why we at the Claremont Evaluation Center decided to study the effectiveness of data visualization principles applied to logic models.


Dana presents at Eval17: Surveying children, using vignettes to train staff, and more!

I am really looking forward to meeting you all at the annual AEA conference, Eval17! I wanted to share the details of my various presentations and hope you can make it to the ones that pique your interest!

Can evaluators be the bridge in the research-practice gap?

Researchers and practitioners agree that there is a gap between research (or theory) and practice. While the reasons for this gap are plentiful, they boil down to researchers and practitioners comprising two communities (Caplan, 1979) that have different languages, values, reward systems, and priorities. The two communities try to bridge the gap through a variety of methods, including producer-push models (e.g., knowledge transfer, knowledge translation, dissemination, applied research, interdisciplinary scholarship), user-pull models (e.g., evidence-based practice, practitioner inquiry, action research), and exchange models (e.g., research-practice partnerships and collaboratives, knowledge brokers, intermediaries). However, these methods typically focus on researchers or practitioners and do not consider other scholars who could fill this role.

Dealing with my first journal article rejection

It was my first journal article submission (OK, second… my first, another article, was desk rejected). This article was based on my thesis, which I’d been working on for two years. I’d originally written it up for journal publication, so once both of my readers signed off on it, I sent it off to the primary journal in the field (American Journal of Evaluation) and waited.

And waited.