
Spotlight on Improving Teaching and Learning Through Equitable Assessment Systems

[Image: Educators in a classroom]

The resources in this Spotlight center on using assessments to improve teaching and learning. They include a look at how machine-scoring systems can be trained to evaluate students’ performance-based tasks accurately and fairly, current WestEd research on how students respond to culturally and linguistically responsive (CLR) assessment items, and briefs from WestEd’s Center for Standards, Assessment, and Accountability (CSAA) that guide educators in creating more equitable and actionable assessments.

Removing Bias From Machine-Scored Assessments: How Humans Can Teach Machines a More Equitable Approach

As computers become more skilled at producing and evaluating language, education experts are looking to technology to solve a persistent challenge: scoring assessments accurately and efficiently while avoiding a significant weakness of human scoring, bias. “How do we ensure the introduction of machine language increases the reliability of scores without reproducing human biases?” ask Jaylin Nesbitt, Research Associate, and Sarah Quesen, Director for the Assessment Research and Innovation team at WestEd, in their report, Machines Imitating Humans: Scoring Systems and the Risk of Bias.

Like human educators, computerized systems use a rubric to gauge student performance. That rubric can be improved by taking into account assessment item responses from a diverse set of students. “Students from different backgrounds and cultures may have different approaches to showing what they know. Diverse responses should be included as correct responses and coded into the rubric as such,” Nesbitt and Quesen write.
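
To make the idea of coding diverse responses into a rubric concrete, here is a minimal sketch (not drawn from the report; the rubric elements, phrasings, and point values are hypothetical) of a scoring rubric that treats several different ways of expressing the same correct idea as equivalent:

```python
# Hypothetical sketch: a rubric that accepts multiple valid ways of
# expressing the same correct idea. Elements, phrasings, and point
# values are illustrative only.
RUBRIC = {
    "identifies_main_cause": {
        "points": 2,
        "accepted_phrasings": [
            "the temperature increased",
            "it got hotter",
            "heat went up",
        ],
    },
    "cites_evidence": {
        "points": 1,
        "accepted_phrasings": [
            "the graph shows",
            "according to the data",
            "you can see in the chart",
        ],
    },
}

def score_response(response: str) -> int:
    """Award points for each rubric element the response demonstrates."""
    text = response.lower()
    total = 0
    for element in RUBRIC.values():
        if any(phrase in text for phrase in element["accepted_phrasings"]):
            total += element["points"]
    return total

print(score_response("It got hotter, and you can see in the chart that it kept rising."))  # 3
```

In practice, the accepted phrasings would come from reviewing actual responses from students across backgrounds rather than being hard-coded by hand.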

Next, for what Nesbitt and Quesen deem “high-stakes testing,” the machine-scoring system is trained and tested using human-scored responses. “It takes hundreds or thousands of human-scored papers to train most models,” Nesbitt and Quesen write. They also note that these training and testing papers should include representation from across the entire student population: “students from different races, ethnicities, incomes, multilingual learners, and students with an IEP,” with enough examples from each group to represent the students being tested.
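
The report does not prescribe a specific sampling procedure, but one common way to ensure that training papers include every student group is to sample human-scored responses with a minimum count per group, as in the rough sketch below (the column names, group labels, and counts are illustrative assumptions, not the authors’ method):

```python
# Hypothetical sketch: build a training set of human-scored responses
# that includes a minimum number of examples from each student group.
import pandas as pd

def build_training_set(scored: pd.DataFrame,
                       group_col: str = "student_group",
                       min_per_group: int = 200,
                       random_state: int = 0) -> pd.DataFrame:
    """Sample human-scored responses so every group is represented."""
    samples = []
    for _, rows in scored.groupby(group_col):
        n = min(min_per_group, len(rows))  # take what is available if scarce
        samples.append(rows.sample(n=n, random_state=random_state))
    return pd.concat(samples, ignore_index=True)

# Toy example of human-scored papers tagged by (illustrative) group labels.
scored_papers = pd.DataFrame({
    "response_id": range(6),
    "student_group": ["multilingual", "multilingual", "IEP", "IEP", "general", "general"],
    "human_score": [2, 3, 1, 4, 3, 2],
})
training_set = build_training_set(scored_papers, min_per_group=2)
print(training_set["student_group"].value_counts())
```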

At this point, system administrators can also take further steps to eliminate bias. For instance, a 2023 study used natural language processing (NLP) techniques to create an index of African American English (AAE) features and examine the relationship between a student’s language background and the score their response received. With such a language index, machine trainers can instruct the scoring system on how to respond to culturally specific language without penalizing scores.
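
The brief does not detail how the study built its index, but the general mechanics of quantifying language features in responses and checking whether they track with assigned scores might look something like this sketch (the feature patterns, toy data, and correlation test are placeholders, not the study’s method):

```python
# Hypothetical sketch: compute a simple language-feature index per response
# and test whether it is associated with the scores responses received.
from scipy.stats import pearsonr

# Placeholder patterns standing in for linguistically validated dialect features.
EXAMPLE_FEATURE_PATTERNS = ["pattern_a", "pattern_b"]

def feature_index(response: str, patterns=EXAMPLE_FEATURE_PATTERNS) -> float:
    """Occurrences of feature patterns per 100 words of the response."""
    words = response.lower().split()
    if not words:
        return 0.0
    hits = sum(response.lower().count(p) for p in patterns)
    return 100.0 * hits / len(words)

# Toy responses paired with the scores they received from the system.
responses = [
    "pattern_a appears here along with the correct explanation",
    "the correct explanation with no flagged patterns at all",
    "pattern_a and pattern_b both appear in this correct answer",
]
machine_scores = [2, 3, 1]

indices = [feature_index(r) for r in responses]
r, p = pearsonr(indices, machine_scores)
print(f"index vs. score: r={r:.2f}, p={p:.2f}")
# A strong negative correlation would flag that responses with more of these
# features tend to receive lower scores, prompting a closer look for bias.
```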

Training a scoring system is, as we can see, a big job. But as Nesbitt and Quesen write, “We must take these steps, no matter how many, with intention so we can truly create and uphold an approach to assessment that is inclusive of the students we serve and prioritizes and honors their ways of knowing, being, and experiencing the world.”


Culturally Responsive Mathematics Assessments: Can They Help Address Race-Based Achievement Gaps?

Faced with persistent and sizable achievement gaps between White students and students from traditionally marginalized groups, many educators have come to question the conventional wisdom that assessments should be “neutral,” an approach that, despite the name, often reflects a mainstream White sensibility. Could the inclusion of CLR text and/or images in assessment items help address inequities between student groups by allowing students to interact with content that’s familiar and comfortable?

In search of answers to this complex question, WestEd and an external research partner have spent the last 2 years conducting studies that ask for direct student feedback on assessments: what students notice, what they like and dislike, and whether CLR content makes a difference for them. Though inquiry on this topic is in its infancy, there is evidence that students appreciate assessment questions that resonate with their personal interests (along with assessments that use brief, clear language and items that center graphics and images). Involving students in the design of assessment items may be a fruitful tactic for future CLR evaluations.


Creating Better Assessment Systems: The Latest Research From WestEd

WestEd’s Center for Standards, Assessment, and Accountability (CSAA) provides research-based policy guidance and resources to educators in search of equitable and actionable student assessment systems. Educators taking a closer look at assessments may find these three CSAA briefs of use:

  • Key Elements of a Coherent and Equitable Local Assessment System: State assessments can help gauge achievement across all of a state’s districts, but the results are less useful to individual teachers who want to refine their in-classroom approach to each student. This brief can aid state and local leaders in designing “coherent and equitable assessments” that give classroom teachers the data they need.
  • Components of a Coherent and Equitable Assessment System: Assessment systems can provide a roadmap for educational systems, but only if the assessment systems provide a complete picture of student learning. Ensure that your school/district/state’s system meets this standard with this one-page infographic that summarizes how, when, and why assessments should be used.
  • 7 Recommendations for Using Education Data to Support Equitable Learning Outcomes: Annual state assessments are powerful tools, but they have their limitations. Learn how to use a wider range of data to promote more equitable learning outcomes in this brief aimed at educators, administrators, and policymakers.
