Research on Evaluation: It Takes a Village (The Problem)

Response rates from evaluators are poor. Despite research suggesting that AEA members consider research on evaluation important, response rates for research on evaluation studies often fall between just 10% and 30%.1

As evaluators ourselves, we understand how busy we can be. However, we believe that evaluators should spend more time contributing to these studies. These studies can be thought of as evaluations of our own field: What are our current practices? How should we train evaluators? What can we improve? How do our evaluations lead to social betterment? These are just some of the broad questions such studies aim to answer. They can also help inform AEA efforts on the evaluation guiding principles and evaluator competencies.

Why are we seeing poor response rates?

  1. Response rates in general are poor. Across the world, response rates are declining. We are not unique in this regard. This phenomenon is happening in telephone, mailing, and internet surveys alike.
  2. Poorly constructed surveys. Unfortunately, some of this problem probably lies with researchers themselves. We develop surveys that are too long or too confusing, so evaluators drop out of the study early. For instance, Dana’s thesis had a 27% response rate, but only 59% of participating evaluators finished the entire survey, which took a median of 27 minutes to complete. A more succinct survey might have improved both response and completion rates.
  3. Evaluation anxiety. We often think about evaluation anxiety in our clients, but research on evaluation studies flip the focus onto ourselves. It may be anxiety-provoking for evaluators to introspect on—or let other evaluators inspect—their own practices. As an example, participants in Deven’s research on UFE were asked to describe their approach to evaluation after selecting which “known” approaches they apply. Some participants explained that they did not know the formal name for their approach, or that they simply chose the one that sounded right. This could have been anxiety-provoking for participants and reduced their likelihood of participating in or completing the study.
  4. Apathy. Perhaps evaluators just do not care about research on evaluation. Many evaluators “fall into” evaluation rather than joining the field intentionally. They may not have the research background to care enough about “research karma.”
  5. Inability to fully apply Dillman’s principles. If you know anything about survey design, you know about the survey guru Don Dillman and his Tailored Design Method for survey development. Among the methods he recommends for increasing response rates are personalizing surveys (e.g., using first and last names), using multiple forms of communication (e.g., sending a postcard as well as an email with the survey), and repeated contact (e.g., an introductory email, the main survey email, and multiple follow-ups). However, these methods cannot be used with AEA members. The research request task force does not provide names or mailing addresses to those who request a sample of evaluators, and it limits contact to no more than three notifications over no more than a 30-day period. This makes the Tailored Design Method difficult to implement.

Our next post will discuss what can be done by evaluators and the AEA research task force to improve response rates.

This post was written in collaboration with Deven Wisner. By day, Deven manages human capital and business analytics for Global Registration Services, Inc., a legal services company in Tucson, AZ. By night, Deven is a consultant focused on data- and research-based decision making. He is a proud member of the American Evaluation Association and Board Member of the Arizona Evaluation Network.