A Human-Centered Approach for the Future of Learning in the AI Era
As AI continues to influence our daily lives, understanding which human skills are essential and how they can be cultivated is increasingly critical. A central challenge involves identifying and evaluating human skills that complement the affordances of AI.
According to a 2025 report by the World Economic Forum, skills such as technology literacy, resilience, flexibility, analytical thinking, and systems thinking are thought to be essential for augmenting human potential with AI. When individuals have the opportunity to develop these skills, along with knowledge in core academic subject areas, they can maximize their potential when interacting with AI, rather than simply consuming its outputs.
Consider AI literacy, a skill that encompasses more than technical know-how. It includes the ability to use AI tools, critically evaluate AI-generated outputs and the biases they may carry, and recognize the technology's limitations.
Even as the importance of AI literacy is increasingly recognized, higher-order skills like critical thinking, creativity, and self-regulation will also be essential as AI becomes common in schools and workplaces.
- Critical thinking can help us evaluate AI outputs and know when human judgment should lead.
- Creativity, grounded in human experience and guided by an individual’s goals, is a unique strength even as AI can be used to generate new ideas.
- Self-regulation, the ability to reflect on past actions, adjust, and plan future actions toward a goal, may be key for refining strategies when working with AI.
Additional skills related to social and ethical reasoning, such as collaboration, decision-making, and systems thinking, also matter. We must be able to recognize potential sources of bias in AI-generated decisions and outputs, and advocate for fairness, transparency, and accountability, especially in high-stakes contexts.
How can we use AI to create better learning opportunities and outcomes for today’s students?
AI is reshaping education, but without thoughtful human oversight it risks leaving many learners behind. Algorithmic bias can fuel unfairness in assessments, and gaps in digital access can deepen existing divides, especially for historically underserved groups, including students with disabilities.
Learning and the assessments that reflect it are closely connected. In an AI-driven future, assessments could capture not only outcomes but also the processes individuals use when applying their knowledge and skills. Toward this end, assessments could:
- Mirror real-world situations;
- Help make learning pathways more visible and interpretable;
- Serve as tools to help learners set goals; and
- Allow learners to reflect on their progress while developing essential knowledge and skills.
Co-designing assessments and other learning tools with educators and students may be critical to capturing essential knowledge and skills from all learners in diverse contexts. To build better, fairer, and more useful AI-driven assessments and instructional systems, strong partnerships with practitioners, built on a foundation of trust, will be essential. Ultimately, the uses of AI in education should be aligned with a broader commitment to fairness and inclusion, and shaped by the people it is meant to serve.
Where do we go from here?
The future of AI in education depends on how well we combine advanced technologies with human-centered competencies and values. Realizing this potential in a rapidly changing AI landscape requires deliberately aligning pedagogical goals and assessments, in close collaboration among educators, researchers, and policymakers.
Striving for fair access to AI-enhanced learning also means identifying the resources and skills learners need to use AI systems effectively, and in a way that does not undermine their capacity to act with agency. We can no longer view AI as a neutral solution; instead, we should recognize it as part of broader social systems, one that can amplify the characteristics of those systems. AI for educational purposes should be critically designed, implemented, and governed with attention to whose needs it serves and whose voices shape its development. Only through this lens can AI become a partner in delivering more inclusive, high-quality educational experiences for all.
Teresa Ober and Caitlin Tenison are research scientists, and Patrick Kyllonen is a distinguished presidential appointee, in the Research Institute at ETS.