Teaching students to value education rather than marks – competency and mastery grading at UNSW

By Prof. Liz Angstmann (UNSW Science), A/Prof. Helen Gibbon (UNSW Law & Justice) and Dr Ben Phipps (PVC Education)

The authors are all UNSW Nexus Fellows - learn more about the program here

Published 17 May 2024


Ever wondered if there is another way to grade and give feedback? Tired of your current approach to assessment and students’ fixation on marks rather than on learning?

Join us! The Competency and Mastery Models Working Group is looking for courses to pilot competency and mastery grading from Term 3 2024. The Working Group is a part of the UNSW Assessment and Feedback Student Experience (‘SX’) project.

Competency and mastery grading are alternative methods of grading that focus on whether students are competent in (to an acceptable standard) or have mastered (to a comprehensive standard) specific skills and knowledge in a course, often without the use of numerical grading.

Through the strategic use of scaffolded, purpose-oriented hurdle tasks, students demonstrate a specified level of competency or mastery of the learning outcomes needed to pass the course. Criteria for competency or mastery are clearly defined, often in the form of detailed rubrics or guidelines, and feedback is designed to help students improve and progress (rather than simply to justify the mark). Students may be provided with more than one opportunity to demonstrate the desired level of competency or mastery.

Numerous benefits for students and teachers have been identified with the use of competency and mastery grading (Townsley & Schmid, 2020; Blum, 2020).

Some of these benefits include:
  • Raised academic standards as students need to demonstrate a higher level of competency or mastery of the learning outcomes. (This is different to traditional grading, where students only need 50 per cent to pass the course and can achieve that mark from just part of the course material.) 

  • Higher quality student participation, engagement and learning due to a focus on intrinsic motivations and autonomous learning behaviours (Cook et al., 2014; Harrison et al., 2013; Ryan & Deci, 2000). 

  • Encouraging a growth mindset in students where they learn from their mistakes (Richardson et al., 2021). 

  • Encouraging creative risk-taking as students have the opportunity to recover from mistakes (Anguís, 2012). 

  • Reduced student anxieties since they’re not competing for marks or being ranked against each other (Bliss & Sandomierski, 2021). 

  • Reduced teacher workload, since less time is spent justifying minor differences between individual student marks and responding to requests for small mark adjustments, re-marks and reviews of results (Harland et al., 2015).

As part of the pilot, information will be collected about whether, and to what extent, these benefits (as well as others) translate to the UNSW context.

By taking part in this pilot, you will benefit from:
  • end-to-end support to implement changes to grading in your course;

  • a platform to showcase your course and innovation;

  • an opportunity to provide meaningful feedback on university assessment policy changes.

If you would like to find out more about competency and mastery grading or are interested in taking part in this pilot, please complete this form by 31 July 2024.

The Working Group is also interested in hearing from colleagues who have sound reasons why competency and mastery grading is not achievable or appropriate in their course. If this applies to you, we would be grateful if you could share your insights with us via the form linked above, so we can take this into account when making recommendations to the University about this approach to assessment. 

If you’re keen to explore the opportunities for competency or mastery grading in your course, there are various grading models for you to consider, with flexibility in design options:

Satisfactory (‘SY’) grading with hurdles:

This is the ‘gold standard’ of competency and mastery grading. A well-designed assessment regime may include up to four tasks, set up as ‘hurdles’. Students are required to demonstrate a specific level of competency or mastery to ‘pass’ the assessment task (e.g., a credit or distinction standard under traditional points-based grading). Students may be provided with more than one opportunity to pass the task, although the nature of the assessment as well as resource and time constraints must be considered. Students are required to pass all tasks to be successful in the course and receive an SY grade. This model is especially suited to foundation and first-year courses, or courses where there is greater emphasis on ensuring students demonstrate competency in specific skills and knowledge rather than expertise or excellence, or performance relative to other students.
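The pass rule for this model can be stated simply: a student receives SY only if every hurdle task has been passed. A minimal sketch of that rule follows; the task names and pass records are hypothetical illustrations, not part of any UNSW course design.

```python
# Sketch of the SY-with-hurdles pass rule: a student earns SY only if
# every hurdle task has been passed (possibly after multiple attempts).
# Task names and the student's record below are hypothetical.

def sy_grade(hurdles_passed: dict[str, bool]) -> str:
    """Return 'SY' if all hurdle tasks are passed, otherwise 'FL'."""
    return "SY" if all(hurdles_passed.values()) else "FL"

student = {"lab_skills": True, "problem_set": True, "viva": True, "report": True}
print(sy_grade(student))  # SY: all four hurdles passed
```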

Competency ‘with merit’ grading:

Like SY grading with hurdles, competency ‘with merit’ grading uses hurdle tasks, but it provides further grading options beyond SY/FL: CM (Competent with Merit), CO (Competent) or CN (Not Yet Competent). A well-designed assessment regime might include two or three hurdle tasks that students are required to pass to attain a CO grade, with an additional more challenging final task undertaken to determine whether students reach the level of CM for the course overall. This model is suitable for courses where competency or mastery grading is appropriate but where some comparison of student performance is desired.
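The three-tier outcome described above can be sketched as a short decision rule; the inputs are hypothetical and the rule is only an illustration of the model, not a prescribed implementation.

```python
# Sketch of competency 'with merit' grading: passing all hurdle tasks
# earns CO (Competent); also passing a more challenging final task earns
# CM (Competent with Merit); otherwise the grade is CN (Not Yet Competent).

def merit_grade(hurdles_passed: list[bool], merit_task_passed: bool) -> str:
    if not all(hurdles_passed):
        return "CN"  # Not Yet Competent: at least one hurdle unmet
    return "CM" if merit_task_passed else "CO"

print(merit_grade([True, True, True], merit_task_passed=True))  # CM
```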

Numeric grading with competency hurdles:

This model is a hybrid of traditional grading and a competency and mastery approach. There is a focus on students demonstrating both competency and mastery, while a numerical mark provides a basis for comparison among students. A well-designed assessment regime might include two or three hurdle tasks that students must demonstrate competency in before they are eligible to undertake a final assessment task (e.g., an end-of-term examination where they demonstrate a higher level of mastery). For example, students are awarded 50 marks for passing the competency tasks while the mastery assessment determines the remaining 50 per cent of the grade. Ideally, students are provided with multiple opportunities to demonstrate they have acquired competency in the earlier tasks. This grading method is especially suitable for courses undertaken in later stages of a program and where a numerical grade is required.

Traditional grading without a numerical mark:

This model retains traditional grades (FL, PS, CR, DN, HD) but without an outward-facing numerical mark. A nominal value is associated with the grade and is included in the calculation of the WAM (e.g., a DN grade has a nominal value of 80) but is not included on the academic transcript. This model suits courses/programs seeking to reduce student focus on points-based marking while still requiring that students be graded in a way that provides a basis for comparison and communicates a student’s level of achievement (e.g., to external stakeholders, such as the discipline’s profession). This model enjoys some of the benefits of the competency and mastery models, as it reduces student anxiety concerning small differences in marks. This is especially useful for disciplines where there is subjectivity in assigning a mark with single-digit precision (e.g., where there is a consensus between markers that the work is of a CR standard but variance as to the specific number between 65 and 74 to be associated with the grade).
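A credit-weighted WAM over nominal grade values can be sketched as below. Only the DN value of 80 comes from the text; the other nominal values and the unit-of-credit weights are illustrative assumptions, not UNSW policy.

```python
# Sketch of a WAM calculated from nominal grade values: each letter grade
# maps to a nominal mark used in the credit-weighted average, while only
# the letter grade appears on the transcript.
# DN = 80 follows the text; the other nominal values are assumed for illustration.

NOMINAL = {"HD": 90, "DN": 80, "CR": 70, "PS": 57, "FL": 40}

def wam(results: list[tuple[str, int]]) -> float:
    """results: list of (grade, units_of_credit) pairs."""
    total = sum(NOMINAL[grade] * uoc for grade, uoc in results)
    units = sum(uoc for _, uoc in results)
    return total / units

print(wam([("DN", 6), ("CR", 6), ("HD", 6)]))  # 80.0
```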

We look forward to connecting with colleagues who are interested in this space.

To share your thoughts, please complete the form by 31 July 2024.

 

***



References 
  • Anguís, D. (2012). Development and assessment of generic competences in engineering degrees through creativity. Journal of Technology and Science Education, 2(1), 22-30. https://doi.org/10.3926/jotse.34

  • Bliss, J., & Sandomierski, D. (2021). Learning without grade anxiety: Lessons from the pass/fail experiment in North American JD programs. Ohio Northern University Law Review, 48, 555.

  • Blum, S. D. (Ed.). (2020). Ungrading: Why rating students undermines learning (and what to do instead). West Virginia University Press.

  • Cook, D. A., Zendejas, B., Hamstra, S. J., Hatala, R., & Brydges, R. (2014). What counts as validity evidence? Examples and prevalence in a systematic review of simulation-based assessment. Advances in Health Sciences Education, 19, 233–250.

  • Harland, T., McLean, A., Wass, R., Miller, E., & Sim, K. N. (2015). An assessment arms race and its fallout: High-stakes grading and the case for slow scholarship. Assessment & Evaluation in Higher Education, 40(4), 528–541. https://doi.org/10.1080/02602938.2014.931927

  • Harrison, S. D., Lebler, D., Carey, G., Hitchcock, M., & O’Bryan, J. (2013). Making music or gaining grades? Assessment practices in tertiary music ensembles. British Journal of Music Education, 30(1), 27–42. https://doi.org/10.1017/S0265051712000253

  • Richardson, D., Kinnear, B., Hauer, K. E., Turner, T. L., Warm, E. J., Hall, A. K., Ross, S., Thoma, B., & Van Melle, E. (2021). Growth mindset in competency-based medical education. Medical Teacher, 43(7), 751–757. https://doi.org/10.1080/0142159X.2021.1928036

  • Ryan, R. M., & Deci, E. L. (2000). Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions. Contemporary Educational Psychology, 25(1), 54–67. https://doi.org/10.1006/ceps.1999.1020

  • Townsley, M., & Schmid, D. (2020). Alternative grading practices: An entry point for faculty in competency-based education. The Journal of Competency-Based Education, 5(3), e01219. https://doi.org/10.1002/cbe2.1219 

Prof. Liz Angstmann, A/Prof. Helen Gibbon & Dr Ben Phipps are all UNSW Nexus Fellows.  

  • Learn about the #UNSWNexus program from the Nexus Director here.
     
  • UNSW colleagues can also visit the internal info page here (SharePoint).

 

This article is part of the Scientia Education Academy (SEA) blog series. Learn more about the SEA below.

Scientia Education Academy Blog Series

The UNSW Scientia Education Academy (SEA) recognises our most outstanding educators for their leadership and contributions to enriching education, and gives them a platform to showcase and facilitate excellence in teaching at UNSW and beyond. Learn more about UNSW Scientia Education Academy here.

See also

Who benefits from grading first year? Written by Professor Liz Angstmann.

Programmatic Assessment: Are we there yet? Written by Associate Professor Priya Khanna and Professor Gary Velan.

 


 
