September 5, 2024
One of the by-products of having evidence-informed policy is that it brings increased attention to the nature and quality of evidence. After all, the foundation of understanding “what works” is built on the individual evaluation studies that test for the effects of interventions. It would make life so much easier if all the evaluation designs for the question “Did the intervention work?” came to the same answer. One could choose the cheapest and quickest method and be done with it. But that is not the case. Entire books have been written and courses developed that expound on the strengths and weaknesses of each evaluation design and how confident (or skeptical) we should be of their results.
One evaluation design that has been highly valued by many methodologists is called the Regression Discontinuity Design (RDD). It is considered by some of the evidence-based clearinghouses, such as the What Works Clearinghouse in education, to be among the strongest designs that can be implemented to understand the impact of an intervention. However, the RDD is not well understood by many people, including researchers, and it is rather underused in justice settings. But some have advocated that it should be used more often to assess interventions relevant to crime and justice. Why?
Well positioned to answer this and other questions is the JPRC research team working on a project to promote RDD in justice settings. The team, led by Jonathan Nakamoto and including Alexis Grant and Trent Baskerville, received internal WestEd support to learn how RDD is being used in the field and to increase awareness and use of the design. To date, the team has coauthored a brief and a short article and made several conference presentations on RDD.
Could you explain the basic principles of RDD and why it is considered a powerful design?
RDD is a quasi-experimental design that takes advantage of situations in which a numeric score is used as the sole assignment criterion for an intervention. Let’s say, for example, youths above a certain score on a risk instrument get the treatment; youths below do not. The figure helps illustrate the situation. As the graph indicates, individuals who score low on a risk instrument, say a 49, would be expected to differ on many characteristics from someone with a much higher score, say an 89, but we would not expect systematic differences between those scoring a 49 and those scoring a 51. The beauty of RDD rests in its ability to credibly approximate the counterfactual, that is, what would have happened in the absence of the intervention, by comparing the individuals just above the cutoff against those just below it.
A regression model is used to analyze the data in an RDD. Although participants are not randomly assigned to treatment and control groups, the key assumption is that those just above the cutoff are not systematically different from those just below it. Thus, any discontinuity in the regression line at the cutoff point (as seen in the figure) can be attributed to the effect of the intervention rather than to underlying differences between the groups. Graphing the data as in the figure is particularly helpful when using RDD; a visible jump at the cutoff is indicative of a treatment effect.
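The regression just described can be sketched in a few lines of Python. Everything here is hypothetical and simulated for illustration only: the risk scores, the cutoff of 50, and the built-in 10-point drop in the outcome are assumptions, not data from any study discussed above. A real analysis would use actual scores and outcomes, and would typically examine sensitivity to the bandwidth of observations included around the cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: risk scores from 0 to 100, with a cutoff of 50.
# Individuals at or above the cutoff receive the intervention.
n = 2000
score = rng.uniform(0, 100, n)
cutoff = 50
treated = (score >= cutoff).astype(float)

# Simulated outcome: rises with risk score, but the intervention lowers it
# by 10 points. That jump at the cutoff is what the RDD tries to recover.
true_effect = -10.0
outcome = 40 + 0.3 * score + true_effect * treated + rng.normal(0, 5, n)

# Center the running variable at the cutoff and fit, by ordinary least squares,
#   outcome = b0 + b1*treated + b2*centered + b3*(treated*centered) + error
# The coefficient b1 estimates the discontinuity (treatment effect) at the cutoff;
# the interaction term lets the regression slope differ on each side.
centered = score - cutoff
X = np.column_stack([np.ones(n), treated, centered, treated * centered])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"Estimated treatment effect at the cutoff: {beta[1]:.2f}")
```

With the effect built into the simulation, the estimated coefficient on the treatment indicator lands close to the true value of −10, which is the sense in which a visible discontinuity at the cutoff translates into an effect estimate.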
What has the JPRC project found out so far about the use of RDD designs in justice settings?
Despite its potential for rigorously evaluating the impact of interventions and policies, RDD has been used infrequently in justice settings, according to our team’s review of the literature. We have identified 68 published or otherwise available studies relevant to crime and justice that used RDD. However, only about 40 of these were directly relevant to law and the justice system; many were studies of education or employment policies that examined their impact on crime outcomes. Unfortunately, we do not have data on what the potential number should be, but in our view RDD has been used less often than it could be.
Our review indicates that RDD is most often used retrospectively to examine a policy that has already been implemented. This is one of the strengths of the design: an RDD can be conducted if there is access to a data file that contains the assignment variable (e.g., a person’s score on a risk instrument) and the outcomes, along with knowledge of the cutoff policy.
Can you provide a specific example or case study in which RDD has been successfully implemented within a justice setting?
One example in which RDD was used in a justice setting comes from Cologne, Germany, published in the Journal of Empirical Legal Studies in 2022, and it is one of the clearest prospective uses of the design. Christoph Engel and his colleagues worked with judges and probation staff in the juvenile court to develop a risk scorecard based on judges’ ratings, with scores ranging from 0 to 28. Youths scoring 13 or higher were assigned to the intensive probation program, and those scoring 12 or lower were assigned to regular probation. The RDD analysis showed that, for youths around the scorecard cutoff of 13 points, the intensive probation program reduced recidivism by 10 percentage points at 6 months and by 30 percentage points during the longer follow-up (1–3 years).
What are some of the challenges or limitations you see in applying RDD in justice settings?
One of the main challenges to RDD is identifying situations in which treatment is assigned solely on the basis of an assignment variable, with a clear and precise cutoff point that is strictly enforced. In many justice situations, the assignment to treatment or control is not as transparent and clear-cut as RDD requires; usually many factors influence whether one person receives an intervention and another does not. Even when RDD is implemented, there may be covert manipulation around the cutoff, in which the scores of individuals close to the cutoff are artificially adjusted so that they indeed receive the treatment. Additionally, findings from an RDD are limited in that they are valid only for observations near the cutoff; conclusions from RDD studies may not be applicable to individuals who scored farther from it.
How is your project promoting the use of RDD in the field?
We’re using a variety of methods to increase awareness of the benefits and applicability of the method for evaluation and decision-making. We’ve put out publications [linked above] and are working on two more briefs and a peer-reviewed journal article. We are also holding a roundtable on RDD at November’s American Society of Criminology annual meeting with several of the researchers who have used this method, to learn more about its applications in the field.