Do students from different populations have the same opportunity to show what they know and can do on an assessment? Will a new interim assessment qualify for state vendor lists? Is there sufficient evidence in support of a state summative assessment peer review?
We can help you answer these types of questions as your thoughtful partner in assessment evaluation. Our expertise in developing innovative assessments spans new item types and new approaches to testing students, along with validity and reliability studies.
How We Help
Our team comprises skilled psychometricians and quantitative and qualitative researchers. We have extensive experience in planning, developing, and evaluating assessment methods and systems at the local, state, and national levels.
We gather, analyze, and synthesize assessment data and conduct research to ensure scores are reliable, valid, and fair. We conduct innovative, original research to support next-generation assessments.
With facilitation, coaching, and consulting from our highly skilled staff, we help you learn how to
- Establish and maintain a thorough understanding of item performance to confirm that items work as intended
- Design and pilot new or revised assessments, including innovative assessments
- Evaluate whether scores support their intended uses
- Ensure fairness and equity of assessments for all students
Service Delivery
- Online and onsite
Who Will Benefit
- Assessment vendors
- District administrators
- Early childhood professionals
- State educational agencies
Featured Experts
Sarah Quesen
Connecting Research With Practice
Drawing on a strong background in psychometrics, statistics, and data analysis, our team of psychometricians and researchers applies innovative methods to solve modern-day problems. We actively contribute to research on new approaches to designing assessments, scoring student responses, modeling data, and reporting results while adhering to the AERA, APA, and NCME Standards for Educational and Psychological Testing. Our research applications include ensuring the fairness of simulation-based items for all learners, evaluating the classification accuracy of growth models, and interpreting norm- versus criterion-referenced scores given the intended purpose of the assessments.
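To give a flavor of the kind of analysis behind a fairness question like the one that opens this page, here is a minimal, illustrative sketch of a Mantel-Haenszel differential item functioning (DIF) check, a standard way to ask whether two student populations with the same overall ability perform differently on a single item. The function name, the simulated data, and the simplified flagging rule are assumptions made for illustration only, not a specific tool or study of ours.

```python
# Minimal Mantel-Haenszel DIF sketch (illustrative; toy data, simplified rule)
import numpy as np

def mantel_haenszel_dif(item, total, group):
    """Return the MH common odds ratio and ETS delta for one item.

    item  : 0/1 scores on the studied item
    total : integer total test scores, used to stratify examinees by ability
    group : 0 = reference group, 1 = focal group
    """
    item, total, group = map(np.asarray, (item, total, group))
    num = den = 0.0
    for k in np.unique(total):                       # one 2x2 table per ability stratum
        s = total == k
        a = np.sum(s & (group == 0) & (item == 1))   # reference, correct
        b = np.sum(s & (group == 0) & (item == 0))   # reference, incorrect
        c = np.sum(s & (group == 1) & (item == 1))   # focal, correct
        d = np.sum(s & (group == 1) & (item == 0))   # focal, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    odds_ratio = num / den
    delta = -2.35 * np.log(odds_ratio)               # ETS delta metric
    return odds_ratio, delta

# Toy usage: simulate ability-matched examinees and flag the item.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)
total = rng.integers(5, 26, n)                       # stratifying score
p = 1 / (1 + np.exp(-(total - 15) / 4))              # same ability -> same chance
item = (rng.random(n) < p).astype(int)               # a DIF-free item by design
or_, delta = mantel_haenszel_dif(item, total, group)
# Simplified version of the ETS rule: |delta| < 1 is negligible DIF.
label = "A (negligible)" if abs(delta) < 1.0 else "B/C (review)"
print(f"MH odds ratio {or_:.2f}, delta {delta:.2f} -> ETS class {label}")
```

In practice a fairness review goes well beyond a single statistic, combining DIF flags with content review and, for innovative item types such as simulations, qualitative evidence about how different learners interact with the task.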