
Implementing Smarter Balanced and PARCC Assessments: A Q&A with Andrew Latham


Aiming to improve how we measure students’ readiness for college and careers, the Smarter Balanced Assessment Consortium and the Partnership for Assessment of Readiness for College and Careers (PARCC) have spent the last several years developing and field-testing assessments aligned to the Common Core State Standards. The 2014/15 school year was the first year of full implementation, with 17 states administering the Smarter Balanced assessments and 11 states plus the District of Columbia administering the PARCC tests.

As schools and districts are receiving their students’ 2014/15 scores — and looking ahead to 2015/16 — we sat down with WestEd’s Andrew Latham to discuss the benefits and challenges of the new assessments, how educators can interpret and communicate the results, and how WestEd is supporting states with implementation.

Latham is the Director of the Standards, Assessment, and Accountability Services (SAAS) program at WestEd and the federally funded Center on Standards and Assessment Implementation (CSAI). SAAS has played a significant role in the consortia, including acting as the project management partner for Smarter Balanced and conducting test development as a subcontractor for PARCC.

Question: What are some key differences between states’ assessments and the Smarter Balanced and PARCC assessments?

Answer: The new assessments are based on the Common Core State Standards and attempt to measure a deeper level of knowledge and thinking than many state assessments of the past. When analyzing the complexity of thinking involved in state assessments across the country, researchers found that the vast majority focused on lower levels of knowledge — recall and reproduction of content, and basic skills and concepts — with little measurement of the more complex, cognitively demanding knowledge that involves strategic and extended thinking.

The new assessments include more questions that test those upper levels of knowledge. For example, rather than simply providing five potential synonyms for a vocabulary word, a test might incorporate a vocabulary word within the context of a paragraph and ask, “What does this word mean?” This requires interpretation of contextual clues to derive meaning, rather than simple memorization.

The new assessments focus more on open-ended responses, which can teach us a lot about students’ knowledge, though there’s still an ample role for multiple-choice questions. In the new tests there’s a greater use of multiple-select options, which allow for less guessing than traditional multiple choice. Moreover, the new tests often couple a multiple-choice question with an open-ended question.
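To put the guessing point in rough numbers: on a traditional four-option multiple-choice item, a random guess has a one-in-four (25 percent) chance of being correct. On a four-option multiple-select item, where any nonempty combination of options could be the answer, there are 15 possible combinations, so a random guess succeeds less than 7 percent of the time. (This is an illustrative calculation, not a figure from either consortium.)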

For example, the test might present students with a graph of product sales versus marketing expenses and ask students to select which level of marketing expenses will maximize profits. The follow-up question then asks students to explain how they derived this answer. To achieve full credit, students must provide compelling mathematical analysis of the information in the graph.

Are the new assessments administered differently?

Yes — the Smarter Balanced and PARCC assessments are fully computerized, though paper-and-pencil alternatives are offered in districts that lack the necessary technological infrastructure.

Another difference is that the Smarter Balanced test is computer adaptive. In the past, most states exclusively used linear tests, which ask all students the same set of questions regardless of proficiency. So students at the low end of the skill range still get the hardest questions, even though they may not be able to answer mid-level questions correctly. That’s a less efficient — and more frustrating — way to measure students’ knowledge. Computer-adaptive testing homes in on a student’s level of knowledge more efficiently: it presents harder questions after correct answers and easier questions after incorrect ones, progressively narrowing the range of difficulty it needs to probe.
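In broad strokes, an adaptive test works like a guided search over item difficulty. The Python sketch below is a simplified illustration of that loop, not the consortia’s operational engine (which uses item response theory and large, constraint-managed item pools); the item bank, response model, and step sizes here are invented for the example.

```python
import random

# Hypothetical item bank: each item is represented only by a
# difficulty value on a 1 (easiest) to 10 (hardest) scale.
ITEM_BANK = [d / 2 for d in range(2, 21)]

def simulate_response(ability, difficulty):
    """Toy response model: a student answers correctly more often
    when their ability exceeds the item's difficulty."""
    return random.random() < 1 / (1 + 2 ** (difficulty - ability))

def adaptive_test(true_ability, num_items=10):
    """Administer num_items questions, adjusting difficulty after
    each answer, and return the final ability estimate."""
    bank = list(ITEM_BANK)  # local copy so the bank can be reused
    estimate = 5.0          # start near the middle of the scale
    step = 2.0              # how far to move after each answer
    for _ in range(num_items):
        # Choose the unused item closest to the current estimate.
        item = min(bank, key=lambda d: abs(d - estimate))
        bank.remove(item)
        if simulate_response(true_ability, item):
            estimate += step  # correct: move toward harder items
        else:
            estimate -= step  # incorrect: move toward easier items
        step = max(step * 0.7, 0.25)  # narrow as evidence accumulates
    return estimate

print(round(adaptive_test(true_ability=7.0), 1))
```

The shrinking step size mimics how an adaptive test converges: early answers move the difficulty a lot, later answers fine-tune the estimate, which is why an adaptive test can reach a comparably precise score with fewer questions.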


What are some of the main challenges of the new assessments?

One challenge is that testing for strategic or extended analysis of information takes much more time, effort, and money than testing for recall of facts. For example, writing an essay explaining the causes of the Civil War takes more time than answering a multiple-choice question about the Battle of Gettysburg. And open-ended questions must be scored by hand.

So the new assessments involve a significant tradeoff. We have to figure out how much testing to do to guide instruction, identify gaps that need to be addressed, and inform education policy without subtracting unduly from classroom time. A lot of good people disagree on where that line is.

How are teachers reacting to the new assessments?

The jury is still out on how well teachers will accept the new assessments. Many are frustrated by how much time the tests require and skeptical about how these assessments will help improve teaching and learning. Ultimately, assessments provide only one piece of evidence about students’ performance and must be viewed within the larger context of other evidence, such as students’ classroom performance and feedback from their teachers. Ideally, though, the next wave of interim assessments will serve as a tool not only to identify areas of student weakness but also to guide instruction in a meaningful way.

What can schools expect from the first batch of scores coming out of 2014/15, the first official year of these assessments?

It is widely expected that fewer students will be judged proficient on the new consortia assessments than on their old state assessments. And results from the first few states suggest this will be the case, though the drop has not been as dramatic as some predicted. But you can make a compelling argument that such comparisons aren’t really valid and shouldn’t be made. The new assessments measure deeper thinking on more rigorous standards, and, as such, they’re establishing a new baseline for student achievement. It will therefore be much more informative to see how student performance changes between years one and two of the new assessments.

What advice would you offer to school and district administrators about how to communicate about the new scores?

Communicate early and often. Don’t wait until the scores are being released. Get out in front of it. Explain what the new assessments are, how they’re measuring proficiency or meeting standards in new ways, and why scores are likely to be different. Also, communicate to the full range of stakeholders — including teachers, parents, students, school boards, and policymakers.

Administrators will need to explain that we have moved to more rigorous standards. If fewer students are judged proficient against these standards, that’s because we’ve shifted the target, so it’s wholly inappropriate to interpret these drops as evidence that education is getting worse or that students are learning less. We need to wait until we assess against the new standards again next year before we can compare “apples to apples.”

What has WestEd been doing to support states, districts, and educators with implementing the Common Core–aligned assessments?

One of the biggest needs in the field is gaining a better understanding of how other states are wrestling with similar implementation issues. Through the CSAI, we developed an online tool called State of the States, which provides data about each state’s progress in implementing rigorous standards and assessments. You can use the tool to search broadly to find commonalities across states, or you can focus on a particular state to find out things like which standards the state is using, how long those standards have been in place, and what the state’s current testing programs are.

Along with Stanford University, we’ve also developed teacher training modules called Building Educator Assessment Literacy (BEAL), funded by a Hewlett Foundation grant and offered mainly in California and Hawaii at this time. BEAL uses Smarter Balanced performance tasks to show teachers what the assessments look like, how they’re being scored, and how to interpret them.

We’ve also provided technical assistance to states that have chosen to continue to pursue their own state-specific standards — for example, helping them to record data to meet federal reporting requirements.

What would you like to tell administrators, educators, and parents as they prepare for the 2015/16 Smarter Balanced and PARCC assessments?

To help teachers and parents understand the new assessments, both the Smarter Balanced and PARCC consortia offer detailed interpretive tools. Each consortium’s website contains a wealth of information, including digital libraries and learning materials, and rubrics explaining how answers are scored and how the assessments differ from traditional ones.

How effective do you think the new standards and assessments will be?

Ultimately, the data will tell the story. In the meantime, I believe that these are rigorous standards and assessments — the best yet at determining college and career readiness. I also believe that Smarter Balanced and PARCC will advance the field: the assessments built to measure the new standards are innovative and forward-thinking.

Whether people are supporters or detractors, these changes are bringing standards and testing to the forefront of the national debate, which is timely in the run-up to the presidential election.

We have good reason to be optimistic, but assessments can always be improved. We can always find ways to improve validity or efficiency. Yes, this is a paradigm shift in testing, but we’d best not rest on our laurels.