
December 22, 2025

The State of Assessment in 2025

Isabelle Gonthier | Chief Assessment Officer, ETS and PSI

  • Assessment Innovation

AI, and especially generative AI, has been part of assessment industry conversations for several years. It was full of possibilities but remained largely theoretical, especially in exam content development. That changed significantly in 2025, as our industry concretely moved from talking about AI to actually using it and reporting data on its use.

2025 was the year when assessment organizations, educators, policymakers, and solution providers all shifted from conceptual curiosity to meaningful, practical application of AI. Perhaps most importantly, it was the year when we stopped seeing AI as something that simply sits on top of existing processes and started recognizing that it can reshape the way we think about assessment itself.

What changed: AI became a daily tool, not a future concept

If 2024 was still about exploring ideas, 2025 was about getting hands-on. Across the sector, comfort levels increased dramatically. Generative AI became part of the day-to-day work of content creation and management, quality review, and operational support.

A turning point in content development

One of the biggest shifts we saw was in how organizations approach item creation. Until recently, most credentialing organizations preferred to test AI on low-stakes content like practice items. But in 2025, we saw something new – a growing interest and confidence in piloting AI to support operational exam content.

The change wasn’t simply about efficiency. It was about recognizing that AI can help with the most challenging parts of content creation: getting started, drafting high-quality initial ideas, reducing subject matter expert (SME) burden, and setting human reviewers up to be engaged in deeper, more meaningful work. As prompting strategies became more refined and the quality of outputs improved, comfort levels increased accordingly.

AI began to support the full assessment lifecycle, not just writing

One of the most important developments this year was the broader application of AI across the assessment lifecycle. We saw AI used to support:

  • Item review processes and feedback loops.
  • Alignment checks.
  • Workflows, presentations, and meeting management.
  • Trend analysis and data summarization.

This shift from AI for item writing to AI across functions is what made 2025 so consequential. It marked the moment when organizations started building the structure they need to use these tools responsibly and consistently.

Governance moved from an aspiration to a requirement

With adoption rising, the need for clear governance became unavoidable. Many organizations recognized that the ad-hoc, exploratory approaches of previous years were no longer enough. 2025 brought a continued and enhanced focus on:

  • Establishing internal guidelines.
  • Defining human-in-the-loop steps.
  • Strengthening quality controls.
  • Documenting decisions and processes.
  • Ensuring transparency.

This wasn’t about slowing things down; it was about creating the stability needed to use AI in high-stakes environments. In my view, that shift in mindset was one of the defining developments of the year.

What stayed the same: The enduring fundamentals

Even as AI transformed workflows, two constants held firm.
  • Security is still table stakes: If anything, 2025 highlighted the need to keep test security front and center. Longstanding threats persisted, and new technology-enabled risks emerged, particularly with AI-driven content harvesting tools and increasingly sophisticated impersonation attempts. The message from the year is clear: innovation and security must evolve together. We cannot allow one to outpace the other.
  • The human factor is still irreplaceable: Another constant was the continued importance of human oversight. Even as trust in AI tools grew, the need for expert review did not lessen. Keeping a ‘human in the loop’ remained essential, not only as a safeguard but as a partner to AI. How we use AI may be evolving, but human judgment remains central to responsible assessment.

What made the biggest difference: moving beyond traditional formats

AI has opened the door to assessment approaches that were once too resource-intensive to be feasible. We now have opportunities to develop:

  • More interactive tasks.
  • More realistic simulations.
  • Immediate feedback mechanisms.
  • Dynamic scenarios that capture richer forms of evidence.

We’re not replacing traditional assessments quite yet, but we are expanding the possibilities.

Thinking of assessment as a journey, not a single moment or destination

With more data available and better ways to integrate it, we can begin to connect assessments more closely to learning, skill development, and real-world performance. This perspective encourages us to see assessment as a continuum: something that helps learners grow, employers understand competencies, and institutions support progression, rather than simply certifying achievement at one point in time.

2025 was the year of evidence

The biggest difference between last year and this year is simple: we now have data.

Real pilots. Real performance metrics. Real acceptance rates. Real quality indicators.

For the first time, organizations could evaluate AI processes with evidence rather than speculation. That shift from concept to measurable impact has changed the conversation in lasting ways.

Looking ahead: Building on the foundation

2025 marked the moment when AI concretely moved from an abstract idea to a practical tool used across the assessment lifecycle. It was the year evidence replaced speculation, governance took shape, and new opportunities emerged to think differently about what assessment can be.

These shifts are only the beginning. As AI enables richer formats and deeper insight, assessment will increasingly connect with learning, performance, and ongoing development. Durable skills that transfer across roles and industries will become even more central, and traditional structures such as five-year job task analysis cycles may need to evolve to keep pace with how quickly roles are changing.

Through all of this, one thing remains constant. Innovation does not diminish the importance of human judgment, rigor, or security. It strengthens them. The work ahead is to build on the foundation laid this year and continue shaping an assessment ecosystem that is thoughtful, flexible, and ready for the future.
