I am really looking forward to meeting you all at the annual AEA conference, Eval17! I wanted to share the details of my various presentations and hope you can make it to the ones that pique your interest! Continue reading “Dana presents at Eval17: Surveying children, using vignettes to train staff, and more!”
Overall, my study found that evaluators are less likely to be participatory—both in the overall evaluation process and in data collection methods—when the program beneficiaries are children than when they are adults. Why is this the case? Continue reading “Why aren’t evaluators adapting their evaluations to the developmental context?”
Knowledge about children is best obtained directly from youth through interviews, focus groups, and surveys. This is in stark contrast to previously common methods such as observation and ethnography, which were favored primarily because researchers did not believe youth could provide reliable and valid data.
In my study, I examined whether evaluators collected data about beneficiaries directly (i.e., interviews, focus groups, surveys) or indirectly (i.e., case studies, observations, archival data). If evaluators indicated they would collect data directly from participants, I also asked them questions about their survey or interview-specific practices. Continue reading “How evaluators adapt their evaluations to the developmental context: Evaluation methods”
What evaluation design is best? This debate over what constitutes credible evidence has raged through the field of evaluation, with some arguing for RCTs as the “gold standard” and others questioning the superiority of the RCT.
This debate is somewhat meaningless once we understand that the evaluation design is chosen based on the evaluation questions. Evaluations examining outcomes or impact are perhaps best served by an experimental (i.e., RCT) or quasi-experimental design, whereas evaluations examining program needs or fidelity of implementation are better served by a descriptive (e.g., case study, observational) or correlational (e.g., cohort study, cross-sectional study) design. Continue reading “How evaluators adapt their evaluations to the developmental context: Evaluation design”
As mentioned previously, in the context of youth programs a developmentally appropriate evaluation requires a culturally appropriate evaluation. This means including youth—or, at minimum, those with knowledge of and experience working with youth—in the evaluation.
In my study, I asked practicing evaluators to describe the level of involvement of a wide range of stakeholders, including school administrators, teachers, parents, program staff, program designers, district personnel, funders, developmental consultants, math consultants, and the program’s tutors and tutees. In particular, I was interested in the levels of involvement of the consultants, the tutors, and the tutees reported by evaluators randomly assigned to the child, adolescent, or adult conditions. Continue reading “How evaluators adapt their evaluations to the developmental context: Evaluation approach”
Children and adults differ in more than simply age; they differ in culture as well.1 This recognition can be hard for evaluators: because we have all passed through childhood, it is easy to believe we have the same or greater knowledge of children’s culture than they do. Furthermore, our “spatial proximity to children may lead us to believe that we are closer to them than we really are—only different in that (adults claim) children are still growing up (‘developing’) and are often wrong (‘lack understanding’).”2 Continue reading “Developmental Appropriateness as Cultural Competence in Evaluation”
In a paper presented at the 2009 annual meeting of the American Evaluation Association1, Tiffany Berry, Susan Menkes, and Katherine Bono discussed how evaluators could improve their practice through the developmental context. They argue that evaluators have spent years discussing how the program context (e.g., age of program, accessibility, size of program, timeline, political nature) and evaluation context (e.g., stakeholder involvement, method proclivity, measurement tools, purpose, use of results) affect the practice of evaluation. However, there has been little discussion of how the participants of a program, and particularly the age of participants, also affect the practice of evaluation.2 Thus, they describe what they call the Developmental Context and the three core developmental facets that define it. Continue reading “What is the developmental context? and why is it important to evaluation?”