Published on 3 June 2025
[Originally published in Needed Now in Learning and Teaching, 14 April 2025]
Higher education is no stranger to disruption. The COVID-19 pandemic forced universities to overhaul teaching and learning overnight. Now, the rapid rise of generative AI presents a new challenge and demands a similar re-evaluation, particularly in assessment. With AI now capable of generating essays, solving complex equations, and even replicating individual writing styles, institutions face an urgent and fundamental question:
How do we ensure our curricular systems, including assessment, remain meaningful, fair, and future-ready in the rapidly evolving AI era?
A fragmented response to AI in assessment
Australian universities have responded to AI in diverse and often disconnected ways. Some have leaned into the potential of AI, developing tools like Cogniti AI (notable for its open-access model) and VAL at RMIT (institution-specific), which provide AI-driven learning support. Others have pivoted towards alternative assessment formats like interactive orals and industry-relevant capstones, which emphasise critical thinking and real-time application of knowledge. These approaches have shown promise in reducing reliance on AI-generated responses while fostering student engagement, but their scalability remains uncertain (Krautloher, 2024). Meanwhile, some institutions have doubled down on proctoring and stricter supervision to deter academic dishonesty. While these measures aim to support academic integrity, they have also raised concerns around privacy, accessibility, and student wellbeing (UBSS, 2021).
Although these strategies reflect institutional autonomy, they also highlight a lack of cohesion, leading to duplication of efforts, inconsistent student experiences, and missed opportunities for sector-wide improvement. Some collaborative efforts have emerged, such as TEQSA’s sector-wide initiatives and cross-institutional expert working groups, but a more structured, national approach is needed to ensure best practices gain traction across the sector.
Pathways, lanes, and programmatic assessment: Are we speaking the same language?
To navigate AI-related academic integrity concerns while maintaining rigorous and meaningful assessments, universities have begun developing structured frameworks, such as two-lane or six-lane models, that support the permissible use of AI and guide assessment design. These models help clarify expectations, mitigate risk, and promote consistency, but they address only one part of the broader transformation needed.
In parallel, a growing shift toward program-wide curriculum strategies aligns with TEQSA’s good practice advice and the recent collection of ideas shared in its Emerging Practice Toolkit. These strategies focus on holistic curriculum transformation, moving beyond isolated assessment tasks to integrated, developmental learning experiences. Within this shift, some institutions have adopted program-level assessment to better coordinate assessments across a degree, while others are embracing programmatic assessment, a more longitudinal, feedback-rich model grounded in competency-based education.
While these moves are promising, on-the-ground implementation remains a key challenge due to variations in policy, institutional readiness, and faculty capacity for large-scale curriculum transformation. Critically, a lack of shared terminology often undermines progress: one institution may define ‘program-level assessment’ as task alignment across a degree, while another uses ‘programmatic assessment’ to describe a system of low-stakes, developmental, and cumulative judgements. These definitional differences create confusion, slowing cross-institutional collaboration and making it harder to scale innovation.
A shared framework is essential, not only for shifting entrenched practices, but to enable a scalable, AI-responsive assessment ecosystem that benefits both students and educators. To move from scattered innovations to system-level change, institutions must co-develop a common language, underpinned by systems thinking, collaborative design, and shared purpose. As The Good Shift’s authors have put it, what’s needed is a true ‘mouthset’ to ‘mindset’ shift.
Decluttering the terminologies: A critical step forward
Meaningful assessment reform requires sector-wide agreement on definitions, both to improve communication and to enable effective knowledge-sharing. Without shared language, institutions risk implementing similar ideas under different names, making collaboration unnecessarily complex.
In that spirit, here’s our suggested simplified breakdown to support a more consistent, systems-informed understanding:
- Programmatic Approach: An overarching approach, grounded in holistic perspectives such as soft systems thinking, that aims to integrate curriculum, teaching-learning tasks, and assessment formats and tools meaningfully to develop core competencies and graduate attributes within an education program and beyond (Cabrera & Cabrera, 2019; Khanna, Roberts, & Lane, 2021).
- Program-Level Assessment: A strategic coordination of assessments across a degree to ensure intentional mapping and alignment with program-level outcomes. It helps reduce duplication, streamline workload, and enhance coherence by mapping how individual assessments contribute to overarching graduate capabilities. When thoughtfully designed, it can also support students’ development of evaluative judgement and scaffold learning over time through cumulative assessment experiences. It is best viewed as an enabler of broader programmatic coherence (Charlton & Newsham-West, 2024a, 2024b).*
- Programmatic Assessment: A longitudinal and holistic assessment model that systematically collects and synthesises diverse evidence (e.g. low- and high-stakes tasks) over time. What defines it is the use of narrative judgement, triangulation, and aggregated data to inform both learning and high-stakes progression decisions (Baartman & Quinlan, 2024; van der Vleuten et al., 2015).
- Programmatic Assessment for Learning: A programmatic approach that explicitly integrates assessment for learning principles (i.e., value propositions of education and educational programs) with the structure, format, and tools of assessment. It prioritises long-term learning within both the program design and assessment practices and optimises feedback, student agency, and developmental progression—positioning assessment as a driver of learning, not just a process-driven judgment tool (Torre & Schuwirth, 2024).
Understanding these distinctions is crucial. For example, mapping assessments across a degree (program-level assessment) is not the same as using that data longitudinally to inform development and progression (programmatic assessment). That differs again from restructuring all program components and activities (not just assessment) into a structure that itself incentivises long-term learning and attainment of graduate capabilities (programmatic assessment for learning). This is where much of the current sector confusion lies and where efforts to implement coherent, future-ready assessment models can falter.
By clarifying our language, we reduce duplication, align our strategies, and enable more effective collaboration within and across institutions. In an AI-disrupted landscape, where fast solutions are tempting, conceptual clarity is our greatest asset.
The road ahead: A unified strategy for AI-responsive assessment
To move beyond fragmented institutional strategies, Australian higher education must adopt a more coordinated approach. Here’s what should happen next:
- Agree on a nationally aligned framework: Develop a shared understanding of contemporary educational values and principles, together with a unified, programmatic approach to integrating all components of curriculum, such as structure, activities, and assessment, as interconnected drivers of ongoing learning, while maintaining contextual agility and flexibility for institutions and disciplines.
- Design faculty development and leadership programs: Equipping educators with contemporary perspectives on assessment, competency-based models, and AI-integrated strategies will build the sector-wide capability this shift requires.
- Enable transparency and knowledge sharing: Establishing an open-access repository of case studies would foster sector-wide learning and innovation and provide a mechanism for maturing practice across national conversations.
The bigger picture: Shaping higher education’s future
Higher education is at a turning point. AI in assessment is not just a challenge—it’s an opportunity to rethink and refine how we evaluate learning. The question is no longer whether assessment can keep up with AI—it must evolve to remain meaningful and effective. The only way forward is through a shared vision, unified strategy, and collective action.
Join the conversation
How is your institution approaching assessment transformation in response to AI? Should we move towards a national framework for programmatic assessment? Let us know by sharing your thoughts!
* Update: This definition of program-level assessment has been updated to better reflect its dual role, as both a structural mechanism for aligning assessments across a program and, when intentionally designed, a contributor to students’ evaluative judgement and learning over time, consistent with the work of Charlton and Newsham-West (2024a, 2024b).