AI can design an assessment in seconds. But can it understand what we value as knowledge?

Educators need to approach AI assessment tools critically — not as shortcuts, but as opportunities to reflect on what we believe about learning.

Agency isn’t just for students

Students feature prominently in conversations about agency — but what about educators?

In discussions about AI and assessment, one idea keeps resurfacing: epistemic agency — the capacity to define and act on one’s own understanding of how knowledge is formed, validated, and shared. Epistemic agency is foundational to academic work, and it becomes especially visible — and vulnerable — in assessment design. When instructors create assessments, they reveal not only what they want students to learn, but what they believe matters: what counts as knowledge, what constitutes mastery, and what forms of learning are worth valuing.

Assessment reveals what we believe about learning

Every assessment, in this sense, is an act of meaning-making. It encodes the educator’s worldview. A constructivist may privilege collaboration and peer dialogue; a humanist may foreground reflection and self-knowledge; a critical pedagogue may design for disruption and transformation. When educators design assessments that intentionally reflect these pedagogical values, they are exercising epistemic agency — aligning design decisions with deeply held beliefs about learning.

Authenticity and openness to other ways of knowing

Epistemic agency also fosters authenticity. It surfaces the philosophical assumptions that underlie teaching and invites educators to examine them. In doing so, it opens space for alternative epistemologies — relational, experiential, or land-based ways of knowing that are often marginalized in Western academic traditions. Practicing epistemic agency means designing from one’s convictions while remaining open to being changed by other worldviews about how knowledge is created, represented, and shared.

When efficiency displaces reflection

As AI enters the design process, that agency can subtly shift. Algorithmic solutions offer extraordinary efficiency, streamlining tasks that are often time-consuming or cognitively demanding. Yet in doing so, they risk displacing reflection with convenience. The educator’s epistemic agency — their philosophical reasoning about what learning means — can be overshadowed by the speed and polish of automated design. Efficiency becomes the proxy for intentionality.

The promise and the gap

Consider the AI Assessment Partner, an AI-powered assessment creation platform developed by McMaster University’s MacPherson Institute. Within seconds, it can produce a suite of assessments aligned to learning outcomes, mapped to Bloom’s taxonomy, scaffolded for differentiation, and accompanied by implementation notes. The result is elegant, efficient, and systematically coherent — a marvel of instructional design automation.

And yet, something essential remains unaccounted for. The system assumes that assessment design is a technical process rather than a philosophical one. It can generate the what and the how, but it cannot apprehend the why — the epistemic beliefs that shape an educator’s understanding of knowledge and learning. In this sense, it reproduces coherence but not conviction. What is missing is epistemic alignment — the capacity to translate an educator’s stance toward learning into design logic.

What an epistemically aware AI could do

A more epistemically attuned AI partner would begin not with alignment, but with inquiry. Before generating assessment options, it might ask: What do you believe students learn best through — collaboration, inquiry, critique, creation, or reflection? Such prompts would not only guide the system’s recommendations but also help educators articulate their own assumptions. In doing so, the AI would shift from automation to dialogue — from generator to coach.

An even more ambitious version could prompt openness as well as introspection — inviting educators to consider alternative ways of knowing and representing knowledge. Rather than reinforcing a single epistemic stance, it could expose users to multiple traditions of thought: relational, experiential, Indigenous, critical, or embodied epistemologies. In this sense, the tool could foster epistemic pluralism — not relativism, but curiosity and humility toward diverse knowledge systems.

AI should support thinking, not replace it

Ultimately, these tools hold promise — but they are not one-size-fits-all solutions. The excitement around AI must be tempered with critical awareness. In this reframing, AI does not prescribe answers; it supports thinking. It invites educators to design assessments that are not only pedagogically sound but epistemically authentic. This is where AI’s real promise lies: not in replacing professional judgment, but in helping educators reclaim it. By engaging with AI as a reflective partner rather than a design shortcut, educators can preserve — and even deepen — their epistemic agency. The result is not only better assessments, but a more intentional, inclusive, and values-driven practice of teaching itself.

Keywords: artificial intelligence, higher education, assessment design, epistemic agency, Indigenous ways of knowing, academic development, pedagogy, teaching and learning
