I have a confession to make. I am a QUANT.
By a QUANT, I mean that I am good at quantitative methods and, because I’m good at them, I tend to gravitate toward them. I simply prefer doing quantitative studies. I conduct mostly mixed methods studies, but I tend to leave the qualitative work to the experts. I wouldn’t call myself a post-positivist; I like to say I’m a pragmatist (I am constantly reflecting on whether this is true for me), but it is clear to me that I lean more toward quantitative methods.
I confess to this because being a QUANT used to make me feel guilty. I have always understood and respected the power of qualitative work, but because I am much better at quantitative work, I have never really worked at improving my qualitative skills. I would go so far as to say that I am a very poor qualitative evaluator. It’s a lot easier to hire a qualitative expert than to try to do it myself.
I am not sure being a QUANT is entirely my fault. My graduate school requires a full year of statistics in our first year (intermediate stats, ANOVA, regression, and categorical data analysis) and has offered numerous additional statistics courses beyond that, which I ate up and even TA’d for (e.g., multivariate statistics, factor analysis, SEM, MLM, IRT). On the other hand, it offers only one qualitative class (which has been offered just three times during my time at the school, and I never had the opportunity to take it) and a new mixed methods class (which I did take, but which did not teach qualitative methods).
Despite being a QUANT, I am going to try to improve my qualitative skills while I am still in graduate school. My dissertation is going to be a sequential explanatory mixed methods design, with a whopping forty interviews at the second stage. I am leading a project that is heavily qualitative and has required me to learn better coding strategies and how to calculate interrater reliability. I’m doing more of the qualitative work in my evaluations rather than leaving it to the experts.
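Interrater reliability can be calculated several ways; as an illustration only (the statistic choice, the themes, and the ratings below are my own hypothetical example, not data from any of the projects mentioned), here is a minimal sketch of Cohen’s kappa for two coders applying the same codes to the same excerpts:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal codes to the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both coders coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap given each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[code] * freq_b[code] for code in freq_a) / (n * n)
    # Kappa: observed agreement corrected for agreement expected by chance.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to ten interview excerpts.
a = ["theme1", "theme1", "theme2", "theme3", "theme2",
     "theme1", "theme3", "theme2", "theme1", "theme2"]
b = ["theme1", "theme2", "theme2", "theme3", "theme2",
     "theme1", "theme3", "theme1", "theme1", "theme2"]
print(round(cohens_kappa(a, b), 3))  # → 0.688
```

Kappa is stricter than simple percent agreement because it discounts the agreement two coders would reach by chance alone; rules of thumb for what counts as “acceptable” vary, so report the statistic alongside the raw agreement rate.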
It has pained me to be so unknowledgeable about a topic I should know more about, which is why I am committing to read Michael Quinn Patton’s book Qualitative Research and Evaluation Methods (4th ed.) while my fellow students take the qualitative class at school. I’m two chapters in and we’re having great discussions about it. Stay tuned for a post in a month or two with my reflections on this endeavor.
Image credit: Vladimir Kudinov http://www.vladimirkudinov.com/
This year, instead of yearly goals, which are too long-term and fluffy to really mean anything or actually be accomplished, I focused on quarterly goals. These were long-term enough that I didn’t feel rushed to do something in a month but short-term enough that they were nearly all accomplished. For the first three quarters, I accomplished every goal I set out to do. In the fourth quarter I was way too ambitious, but I still managed to get a lot done over the year:
- Finished my thesis and submitted it for publication. It was rejected with an invitation to revise and resubmit per the reviewers’ recommendations. I never got around to that and instead posted a preprint on Thesis Commons.
- Submitted a second paper to a journal. It was rejected, and I have since revised and submitted to another journal. It’s awaiting peer review right now!
- Submitted a third paper to a journal. Unfortunately, after a three-month wait, the editor informed us it didn’t fit their journal. I’m a bit bitter about that one, but my collaborator and I are slowly revising it to submit to another journal. It required a complete overhaul, so it is taking us a long time, but it’s nearly there!
- Presented two sessions at AEA! One was on using vignettes to teach staff about program quality and conducting observations. The other was on how to survey children effectively. The second one especially was a big success and I’m very proud of it. I hope to turn it into a white paper someday! I also chaired my first two sessions ever and hope to continue that in subsequent conferences. You can view the slides and/or resources of these sessions (and all my other presentations, evaluations, and research publications) here.
- Presented at AERA and was accepted to present at the next AERA. I’m really excited about what I will present at AERA18: a beautiful SEM model that tests the Farrington model of non-cognitive factors and how they relate to academic performance.
- Found a consulting job working remotely as a statistics (and eventually evaluation) consultant. Funnily enough, I spent weeks figuring out a plan to email a bunch of people asking to work for them remotely for an internship requirement at our school. Right when I was about to do it, my advisor said, “Dana, one of my clients needs a stats consultant and you’d be perfect for them!” Never underestimate the power of networking!
- Was a guest on a podcast! Actually, two podcasts. I was on Eval Cafe with Carolyn Camman and Brian Hoessler, where we talked about Twitter for evaluators. Then a couple of days later I was asked by my good friend Ryan Budds to be on his Trivia with Budds podcast, where my husband and I competed head-to-head on the topics of psychology and It’s Always Sunny in Philadelphia.
- Finished a bunch of evaluation reports. I’m continually working to refine my data visualization and report formatting skills, and they’re getting better. This year I really focused on white space; in the past, I found myself sacrificing white space to get under 25 pages. I’ve also really tried to break reports out into sections as data come in, which allows me to answer evaluation questions as I go. This helps a lot, and I’m proud that on one of our projects we were able to chunk out the evaluation reports more explicitly in the proposal. I’m really excited to one day be the PI and not just the project manager!
- Figured out my dissertation topic. I previously wrote about some of the basic topics incorporated into my dissertation: the differences between evaluation and research, and why evaluators might be better equipped than researchers to work with practitioners. The topic has shifted slightly, but it essentially will operationalize partnership relationships and examine the effect those have on practitioner use of evidence. We often say how much relationships matter, but what do they look like? How can we teach budding evaluators to have high-quality relationships with their clients? And do those relationships really matter in promoting evidence use, or is it more the involvement of practitioners in the process? I’m particularly excited because we are going to apply for a research grant to study this, in which case I’ll get paid to do my dissertation!
Looking forward to 2018
Many of my goals for the first quarter of 2018 are remnants of Q4 goals that I didn’t really finish.
- Finish and submit publications. I have about four papers in the works, all with collaborators. They are taking a long time to finish (partly because everyone is equally busy and these are low in priority) but I am confident that we can get them completed. Two are remnants from last year’s goals but two others are papers that have been really fast-paced thanks to a highly motivated PI leading the projects.
- Finish my PhD portfolio and become a doctoral candidate. This means finalizing my review paper (i.e., the first chapter of my dissertation), finishing my internship hours, and taking orals. Then I’ll be ABD (all but dissertation) and considered a doctoral candidate instead of a doctoral student! My dissertation proposal will probably be signed off on relatively quickly as well, especially if I get this research grant *fingers crossed*
- Post blog posts monthly. I feel like I have really slacked on blog posts this year, particularly toward the end (although, funnily enough, looking back I count 23 blog posts from 2017…). Still, I should have taken the advice I’ve heard multiple times and had six months of content before getting started. That would have reduced the stress of feeling I needed to get another blog post out! My new goal is to publish a post on the first Monday of each month. With this post, I am 1/12 of the way toward my goal!
Most of all, I look forward to networking more on Twitter with my fellow evaluators, continuing to refine my evaluation practice, and working towards graduation!
I am really looking forward to meeting you all at the annual AEA conference, Eval17! I wanted to share with you the details of my various presentations and hope you can make it to any of the ones that pique your interest! Continue reading “Dana presents at Eval17: Surveying children, using vignettes to train staff, and more!”
Researchers and practitioners agree that there is a gap between research (or theory) and practice. While the reasons for this gap are plentiful, they boil down to researchers and practitioners comprising two communities (Caplan, 1979) that have different languages, values, reward systems, and priorities. The two communities try to bridge the gap through a variety of methods, including producer-push models (e.g., knowledge transfer, knowledge translation, dissemination, applied research, interdisciplinary scholarship), user-pull models (e.g., evidence-based practice, practitioner inquiry, action research), and exchange models (e.g., research-practice partnerships and collaboratives, knowledge brokers, intermediaries). However, these methods typically focus on researchers or practitioners and do not consider other scholars who could fill this role. Continue reading “Can evaluators be the bridge in the research-practice gap?”
What is the difference between evaluation and research, especially applied research? For some, they are one and the same: evaluation and research use the same methods, write the same types of reports, and come to the same conclusions. Evaluation is often described as applied research. For instance, here are some recent quotes describing what evaluation is: “Evaluation is applied research that aims to assess the worth of a service” (Barker, Pistrang, & Elliott, 2016); “Program evaluation is applied research that asks practical questions and is performed in real-life situations” (Hackbarth & Gall, 2005); and, from the current editor of the American Journal of Evaluation, “evaluation is applied research” (Rallis, 2014). This is confusing for introductory evaluation students, particularly those coming from a research background or studying evaluation at a research institution. Continue reading “Evaluation is Not Applied Research”
What evaluation design is best? A debate has raged through the field of evaluation over what constitutes credible evidence, with some arguing that RCTs are the “gold standard” and others questioning the RCT’s superiority.
This debate is somewhat meaningless when we understand that the evaluation design is chosen based on the evaluation questions. Evaluations seeking outcomes or impact are perhaps best served by an experimental (i.e., RCT) or quasi-experimental design, whereas evaluations assessing program needs or fidelity of implementation are better served by a descriptive (e.g., case study, observational) or correlational (e.g., cohort study, cross-sectional study) design. Continue reading “How evaluators adapt their evaluations to the developmental context: Evaluation design”
Children and adults differ by more than simply age; they differ in culture as well. This recognition can be hard for evaluators: because we have all passed through childhood, it is easy to believe we have the same or greater knowledge of children’s culture than they do. Furthermore, our “spatial proximity to children may lead us to believe that we are closer to them than we really are—only different in that (adults claim) children are still growing up (‘developing’) and are often wrong (‘lack understanding’).” Continue reading “Developmental Appropriateness as Cultural Competence in Evaluation”
In a paper presented at the 2009 annual meeting of the American Evaluation Association, Tiffany Berry, Susan Menkes, and Katherine Bono discussed how evaluators could improve their practice through the developmental context. They argue that evaluators have spent years discussing how the program context (e.g., age of program, accessibility, size of program, timeline, political nature) and evaluation context (e.g., stakeholder involvement, method proclivity, measurement tools, purpose, use of results) affect the practice of evaluation. However, there has been little discussion of how the participants of a program, and particularly the age of participants, also affect the practice of evaluation. Thus, they describe what they call the Developmental Context and the three core developmental facets that define it. Continue reading “What is the developmental context? and why is it important to evaluation?”
In evaluation, one thing is clear: context matters. Many evaluators have described how the context of the program (e.g., age of program, type of program, feasibility) and the context of the evaluation (e.g., resources, stakeholder involvement, measurement tools) affect evaluation designs, methods, practices, and measures. However, evaluators have only begun to examine how the developmental context also affects how evaluators design and conduct evaluations. Specifically, how should the age of participants affect evaluations? Continue reading “Developmentally Appropriate Evaluations”