Current Projects


Evaluation Projects

2017-18 Evaluation of After-School All-Stars, Los Angeles

This evaluation will examine program quality, attendance, academic outcomes, and positive developmental outcomes of a large, multi-site after-school program in Los Angeles, CA. Specifically, we are focusing on why youth join the program and how their motivations for staying change as a result of their experiences in it. For this evaluation, I am the project manager under Dr. Tiffany Berry at the Claremont Evaluation Center with two evaluation associates.

Quasi-Experimental Evaluation of Bright Prospect

This evaluation follows entering high school freshmen (surveyed prior to participation in Bright Prospect) and exiting seniors to examine (a) predictors of who participates in Bright Prospect and (b) how psychosocial (e.g., non-cognitive, socioemotional) skills and resources affect high school outcomes and college matriculation, persistence, and graduation. For this evaluation, I am the project manager under Dr. Nazanin Zargarpour at the Claremont Evaluation Center with two evaluation associates.

2016-17 Evaluation of MUSD’s Extended Learning Opportunities

This is a yearly evaluation of Montebello Unified School District’s Extended Learning Opportunities program, which comprises multiple after-school program providers. The project involves developing and monitoring a continuous quality improvement system, staff surveys, student surveys, and archival data analysis. For this evaluation, I am the project manager under Dr. Tiffany Berry at the Claremont Evaluation Center with one evaluation associate.

Research and Conference Papers

Dissertation: Evaluation as a means to bridge the research-practice gap

Many agree that a gap exists between research and practice: practitioners often do not use research because it is hard to access and understand, or because it is not relevant to their practice. In my dissertation, I argue that evaluators are in a prime position to bridge this gap because they can span the boundaries of both research and practice. I am currently in the initial stages, writing the review paper that will become the first chapter (the literature review) of my dissertation. You can read more about some of this research here on my blog.

Strategies for Collecting Better Survey Data from Youth (AEA 2017)

While many evaluators agree that surveys need to be carefully adapted to be developmentally appropriate for children and adolescents, many are still unclear on effective practices: how young is too young for a survey, how to write age-appropriate questions, whether paper or online surveys work better for youth, and which response options to use. This demonstration session will provide examples and recommendations for item-writing, response options, formatting, pre-testing, administration, and analysis of surveys for children and adolescents.

Using Vignettes to Improve Staff Knowledge about Program Quality (AEA 2017)

Training program staff about what quality means and what high program quality looks like is an important first step toward improving program quality. This presentation explores how vignettes (short stories about hypothetical characters in specific circumstances) can be useful for teaching program staff how to think about program quality and, for organizations with high evaluation capacity, how to conduct observations before going out into the field. In our work, we used this activity with three different groups (staff at two afterschool programs and one group of budding evaluators). The presentation will discuss the vignettes' usefulness for teaching what program quality looks like and preparing participants to conduct observations, as well as the implications of using vignettes to promote evaluative thinking and continuous quality improvement in organizations.

Data Visualization in Evaluation (AEA 2017)

In this working group, we are conducting a series of studies on data visualization in evaluation. Our first study examines logic models and how narratives and data visualization principles can enhance or inhibit the visual efficiency, aesthetics, and credibility of logic models. The second study, still in its initial stages, examines the visualization of effect sizes. We are presenting the first study at Eval17.

Relationships among Non-Cognitive Factors and Academic Performance (AERA 2018)

We used structural equation modeling to test the Consortium on Chicago School Research model (Farrington et al., 2012) of how non-cognitive factors affect academic performance. We found support for the model; however, academic perseverance was not significantly related to academic performance in the context of the other non-cognitive factors. We also examined differences between freshmen and seniors to identify preliminary developmental changes across high school. The paper was written for presentation at the AERA 2018 conference.

Leveraging Attendance Data in After-School Programs (AERA 2018)

Attendance data are often not leveraged to their maximum potential. This paper proposes several analytical strategies for leveraging attendance data for program improvement efforts. Specifically, we show how attendance data can be used to answer several meaningful evaluation questions (i.e., Are enough students consistently attending? Who is attending? Does attendance improve as quality of programming improves? Does more consistent attendance improve youth outcomes?). When attendance data are analyzed and combined with other data sources, they can be used to improve the quality and impact of after-school programs.
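As a rough illustration (not the paper's actual analysis), the first question above can be answered by turning raw sign-in records into per-student attendance rates and counting how many students clear a consistency threshold. The data, function names, and 75% cutoff below are all hypothetical:

```python
# Hypothetical sketch: flagging "consistent attenders" from sign-in records
# to answer "Are enough students consistently attending?". The threshold of
# 0.75 is illustrative, not a recommendation from the paper.

from collections import Counter

def attendance_rates(sign_ins, total_days):
    """Per-student attendance rate from (student_id, day) sign-in records."""
    days_attended = Counter(student for student, _day in sign_ins)
    return {s: n / total_days for s, n in days_attended.items()}

def consistent_attenders(rates, threshold=0.75):
    """Students attending at least `threshold` of program days."""
    return {s for s, r in rates.items() if r >= threshold}

# Toy data: student "a" attends 18 of 20 days, student "b" attends 9.
sign_ins = [("a", d) for d in range(18)] + [("b", d) for d in range(9)]
rates = attendance_rates(sign_ins, total_days=20)
regulars = consistent_attenders(rates)  # only "a" (18/20 = 0.90 >= 0.75)
```

The same rate dictionary could then be joined to demographic or outcome data to address the remaining questions (who attends, and whether attendance relates to quality and outcomes).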

Politics in Evaluation

Evaluation is a political act. Surveying AEA members, we seek to highlight some of the common political situations that emerge during different stages of an evaluation. We hope this work will foster a better understanding of how politics affects practice and help evaluators recognize the different ways these situations can be addressed.

Predictors of Grit Using Multilevel Modeling

Much of the research on the popular construct of grit concerns what grit predicts (e.g., academic success, persistence), but little is known about what predicts grit. Using data from YouthTruth's School Experience Survey, three sets of predictors (i.e., school demographics, student demographics, student experiences) are examined through multilevel modeling to determine which has the strongest predictive relationship with grit.

Aesthetic Experiences as Flow: Relationships with Well-Being

This study examines how aesthetic experiences (i.e., the attitudes, perceptions, experiences, or acts of attention involved in viewing art) are related to Csikszentmihalyi's (1990) conceptualization of flow. Csikszentmihalyi and Robinson (1990) defined aesthetic experiences as having the same content as flow but differing from other flow experiences along four art-related dimensions. Accordingly, we created the Aesthetic Experience Questionnaire, tested its convergent validity, explored how aesthetic experiences differ across persons, and tested whether aesthetic experiences, like flow, relate to well-being. Overall, the results offer preliminary evidence that aesthetic experiences relate to well-being and support the view that aesthetic experiences are a form of flow.

From College Access to Success: Importance of Psychosocial Competencies for Minority Students in College (AERA 2017)

This paper was presented at AERA 2017, and we are currently writing it up for publication. The paper highlights the principles of effective practice that led to college success in one particular college access program.