How evaluators adapt their evaluations to the developmental context: Evaluation methods

Knowledge about children is best obtained directly from youth through interviews, focus groups, and surveys. This stands in stark contrast to previously common methods such as observation and ethnography, which were favored largely because researchers did not believe youth could provide reliable and valid data.[1]

In my study, I examined whether evaluators collected data about beneficiaries directly (i.e., interviews, focus groups, surveys) or indirectly (i.e., case studies, observations, archival data). If evaluators indicated they would collect data directly from participants, I also asked them questions about their survey or interview-specific practices.

Overall, evaluators were more likely to collect data from beneficiaries indirectly when the beneficiaries were children and adolescents than when they were adults. For the tutees, evaluators were less likely to survey children or conduct focus groups with them and more likely to conduct observations. Interestingly, evaluators in the child condition were also more likely to survey and conduct focus groups with tutors, as well as to collect archival data (as a reminder, the tutors in this condition are adolescents).

The following are some of the interesting differences (or lack thereof) in survey- and interview-specific methodologies. Evaluators in the child condition were…

  • more likely to have program staff administer the survey and use oral administration and less likely to use online administration.
  • less likely to have the evaluation team conduct interviews and more likely to use interview specialists.
  • more likely to have shorter interviews, fewer participants in focus groups, and focus groups composed of participants of similar ages.
  • equally likely to use 2-4 (36%), 5-7 (63%), 8-10 (2%), or 11+ (0%) response options in the survey.[2]
  • equally likely to test for internal consistency (62%), test-retest reliability (42%), face validity (70%), criterion validity (32%), construct validity (35%), use factor analysis techniques (52%), or test for moderators (35%).
  • equally likely to use unstructured (0%), semi-structured (92%), or structured (8%) interviews.[3]
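One of the psychometric checks listed above, internal consistency, is typically reported as Cronbach's alpha. As a minimal illustrative sketch (the formula is standard; the function name and toy data are my own, not from the study):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of survey items.

    items: a list of equal-length lists, one per item,
           each holding every respondent's score on that item.
    """
    k = len(items)
    # Sum of the individual item variances.
    item_vars = sum(pvariance(col) for col in items)
    # Variance of each respondent's total score across items.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Three perfectly correlated items yield alpha = 1.0.
print(cronbach_alpha([[1, 2, 3], [1, 2, 3], [1, 2, 3]]))  # → 1.0
```

In practice evaluators would use an established statistics package rather than hand-rolling this, but the calculation itself is just the ratio of item-level to total-score variance shown here.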

[1] Punch, S. (2002). Research with children: The same or different from research with adults? Childhood, 9(3), 321–341.

[2] There were likely no differences due to a floor effect in the number of response options typically used. A future study could avoid this floor effect by examining each number between 2 and 8 individually instead of clustering them into categories.

[3] Evaluators overwhelmingly preferred semi-structured interviews regardless of the age of participants.
