Can evaluators be the bridge in the research-practice gap?

Researchers and practitioners agree that there is a gap between research (or theory) and practice. While the reasons for this gap are plentiful, they boil down to researchers and practitioners comprising two communities (Caplan, 1979) that have different languages, values, reward systems, and priorities. The two communities try to bridge the gap through a variety of methods, including producer-push models (e.g., knowledge transfer, knowledge translation, dissemination, applied research, interdisciplinary scholarship), user-pull models (e.g., evidence-based practice, practitioner inquiry, action research), and exchange models (e.g., research-practice partnerships and collaboratives, knowledge brokers, intermediaries). However, these methods typically focus on researchers or practitioners themselves and do not consider other professionals who could fill the bridging role.

As I will argue in the review paper for my dissertation, evaluators are in a prime position to bridge the gap between researchers and practitioners. Evaluation has been considered a transdiscipline in that it is an essential tool in all other academic disciplines (Scriven, 2008). Evaluators use social science (and other) research methodology and often have a specific area of content expertise, enabling them to bridge the gap to researchers. Furthermore, producing a useful evaluation often requires a close relationship with practitioners: communicating in their language, speaking to their values and priorities, and meeting their needs. This enables evaluators to bridge the gap to practitioners as well. Evaluators can use these similarities with both researchers and practitioners to span the research-practice gap as knowledge brokers or intermediaries (see figure).

However, while evaluators may be able to bridge to researchers and to practitioners individually, they may not actually be working to bridge the gap between researchers and practitioners. In a field that still debates the paradigm wars (e.g., the "gold standard" design, qualitative versus quantitative data), the role of the evaluator (e.g., as an advocate for programs), core competencies for evaluators, and the professionalization of the field, it is unclear to what extent evaluators see bridging the research-practice gap as part of their role and, if so, to what extent they are actually working to bridge it and how they are doing so.

Stay tuned as I continue blogging about the review paper for my dissertation (i.e., the first chapter of my dissertation). I would sincerely appreciate any and all comments and criticisms you may have; they will only strengthen my research and, hopefully, aid my ultimate goal of informing the field of evaluation and improving evaluation practice.

Evaluation is Not Applied Research

What is the difference between evaluation and research, especially applied research? For some, they are one and the same: evaluators and researchers use the same methods, write the same types of reports, and come to the same kinds of conclusions. Evaluation is often described as applied research. For instance, here are some recent quotes describing what evaluation is: "Evaluation is applied research that aims to assess the worth of a service" (Barker, Pistrang, & Elliott, 2016); "Program evaluation is applied research that asks practical questions and is performed in real-life situations" (Hackbarth & Gall, 2005); and the current editor of the American Journal of Evaluation stating that "evaluation is applied research" (Rallis, 2014). This is confusing for introductory evaluation students, particularly those coming from a research background or studying evaluation at a research institution.

Others claim the distinction between evaluation and (applied) research is too hard to define. I do not disagree with this point; the boundaries between evaluation and research are fuzzy in many regards. Take, for instance, evaluation methodology. Our designs and methods are largely derived from social science methodology. However, as Mathison (2008) notes in her article on the distinctions between evaluation and research, evaluation has gone much further in the types of designs and methods it uses, such as the most significant change technique, photovoice, cluster evaluation, evaluability assessment, and the success case method. Scriven and Davidson have begun discussing evaluation-specific methodology (i.e., methods distinct to evaluation), including needs and values assessment, merit determination methods (e.g., rubrics), importance weighting methodologies, evaluative synthesis methodologies, and value-for-money analysis (Davidson, 2013). These methods show that, while we indeed incorporate social science methodology, we are more than applied social science and have unique methods of our own.

The hourglass analogy offered by John LaVelle illustrates this well: the differences between research and evaluation are clear at the beginning and end of each process, but in the middle (methods and analysis) the two are quite similar. Still, evaluation differs from research in a multitude of ways, as the following table summarizes. The table should be interpreted with a word of caution: it suggests clear delineations between research and evaluation, but as Mathison notes, many of the distinctions offered (e.g., evaluation particularizes while research generalizes) are not "singularly true for either evaluation or research" (Mathison, 2008, p. 189).

| Area of difference | Research | Evaluation |
| --- | --- | --- |
| Purpose | Seeks to generate new knowledge to inform the research base | Seeks to generate knowledge for a particular program or client |
| Who decides | Researchers | Stakeholders |
| What questions are asked | Researchers formulate their own hypotheses | Evaluators answer questions that the program is concerned with |
| Value judgments | Research is value neutral | Evaluators provide a value judgment |
| Action setting | Basic research takes place in controlled environments | Evaluation takes place in an action setting where few things can be controlled |
| Utility | Research emphasizes the "production of knowledge and leaves its use to the natural processes of dissemination and application" (Weiss, 1997) | Evaluation is concerned with use from the beginning |
| Publication | Basic research is published in journals | Evaluation reports are rarely published; typically only stakeholders see them |

I want to conclude by saying that if we are to call ourselves a transdiscipline or an alpha discipline, as Scriven would argue we are, then we should work hard to differentiate ourselves from other disciplines, particularly basic and applied research. This may be difficult, especially between applied research and evaluation, but we need to make these differences as explicit as possible, partly to help evaluators entering the field understand them (see EvalTalk, where this question has recurred since 1998; Mathison, 2008) and partly to separate evaluation from research (and research from evaluation).