Over the past several years, the Institute of Education Sciences (IES) has funded over 100 randomized controlled trials (RCTs) evaluating the efficacy or effectiveness of programs and curricula aimed at improving educational outcomes for students.

RCTs typically include many schools, resulting in either cluster-randomized or multisite designs. When RCTs are implemented well, the results help policymakers and education leaders make evidence-based decisions about practice.

Yet research design in applied sciences involves a balancing act between adhering to methodological ideals and dealing with the practicalities involved in carrying out the research.

This article describes an IES-funded RCT designed to determine the effect of a web-based activity and testing system in a population of community colleges in California. The authors review the development of the sample selection plan and discuss how the plan shaped the resulting sample.

By detailing the implementation experience, the article offers insights that may aid other researchers conducting similar evaluations.

The article was written by Elizabeth Tipton at Teachers College, Columbia University, and Bryan Matlen of WestEd’s Science, Technology, Engineering, & Mathematics program, and is available online in the American Journal of Evaluation.

Abstract

Randomized control trials (RCTs) have long been considered the “gold standard” for evaluating the impacts of interventions. However, in most education RCTs, the sample of schools included is recruited based on convenience, potentially compromising a study’s ability to generalize to an intended population. An alternative approach is to recruit schools using a stratified recruitment method developed by Elizabeth Tipton. Until now, however, there has been limited information available about how to implement this approach in the field.

In this article, we concretely illustrate each step of the stratified recruitment method in an evaluation of a college-level developmental algebra intervention. We reflect on the implementation of this process and conclude with five on-the-ground lessons regarding how to best implement this recruitment method in future studies.

Read the full article, “Improved Generalizability Through Improved Recruitment: Lessons Learned From a Large-Scale Randomized Trial.”

