AI Case Study - Critiquing an AI output to strengthen communication skills



Published 25 September 2023



The academic & her views on GenAI

Associate Professor Jenny Richmond believes that students should finish their degree with the skills required to share their expertise with people from all walks of life. Students in psychology need to learn how to apply psychological thinking to complex problems and to communicate about research outcomes in a range of settings. She believes that authentic assessment is key to students developing psychological literacy.

As educators, our goal should be to prepare our students to operate effectively in the "real world." Whether we like it or not, the real world now contains technology that is able to generate text that approximates what a human can write. We need to teach students how to interact with that technology, to leverage it when it is useful to their learning, and to be able to recognise when it may lead them astray.

Assessment overview

Course: PSYC2061 – Social and Developmental Psychology
Assessment type: A scaffolded assessment moving from a draft/peer-review stage to a 500-word media release, then produced as a video.
Goal: Develop students' ability to communicate about research in a style that is accessible and engaging to people who do not have a background in psychological science.

Driving change factor

The emergence of ChatGPT made it particularly appealing for students to use it to assist with written work and explanations. Associate Professor Jenny Richmond decided to leverage the technology, allowing students to use it to produce a first draft of a written response and then critically analyse the accuracy of the output. The new approach preserved the same learning process around evaluating a piece of writing, while also highlighting to students the strengths and weaknesses of generative AI.

    Assessment example


    Phase 1. The students ask ChatGPT to write a 500-word media release based on a recently published piece of psychological science and write a critique of the AI output.


    Here's an example of a prompt students could use to generate a response for their media release:

    “Write a 500-word media release suitable to appear in the Sydney Morning Herald about a piece of research titled "Testing the Identity Disruption Model among Adolescents: Pathways Connecting Adverse Childhood Experiences to Body Dissatisfaction" published in Journal of Youth and Adolescence by Vartanian, Nicholls and Fardouly.” 

    Here's a section of the generated text this prompt produced:

    "Testing the Identity Disruption Model among Adolescents: Pathways Connecting Adverse Childhood Experiences to Body Dissatisfaction. 

    Sydney, Australia – A new study published in the Journal of Youth and Adolescence reveals that adverse childhood experiences can lead to body dissatisfaction in adolescents. The study, conducted by researchers Vartanian, Nicholls, and Fardouly, is titled "Testing the Identity Disruption Model among Adolescents: Pathways Connecting Adverse Childhood Experiences to Body Dissatisfaction." 

    The researchers analyzed data from 425 Australian adolescents aged 12-18 years old, and found that those who had experienced adverse childhood experiences, such as abuse or neglect, were more likely to experience body dissatisfaction. The study suggests that this relationship is mediated by identity disruption, which occurs when an individual's sense of self is threatened or disrupted.


    Dr. Eric Robinson, the lead author of the study, explains, "When children experience adverse childhood experiences, it can disrupt their sense of self and make it difficult for them to develop a positive body image. This can lead to negative health outcomes, such as depression, anxiety, and disordered eating." 

    The study also found that girls were more likely than boys to experience body dissatisfaction, which is consistent with previous research on the topic."


    Students then evaluate the generated text against the assessment marking rubric and provide a critique (sample below):

    Strengths

    The draft (albeit vaguely) acknowledges the future directions of this area of research, noting that further developments could assist in designing interventions to prevent body dissatisfaction based on the pathways identified in the current study. The draft also identifies that these findings have real-world implications for multiple relevant groups, including parents and educators. These two components ensure that the article adequately discusses the significance of the research, communicating to a wide potential audience why these findings are important. Presenting this information also makes the article engaging, contributing to its style.

    Weaknesses

    The AI draft vastly oversimplifies the results, and therefore the conclusions, of the original research article. Two key components of the original research – the assessment of internalisation of societal standards of appearance, and the degree to which individuals compare their appearance to others – are not mentioned in the draft. The draft also misrepresents and overstates the findings about gender differences, leading to incorrect conclusions that focus too heavily on females. To ensure that media reporting of scientific research maintains integrity, it is important that key details of the research results are correct.


    Phase 2. Students write a second and third draft without any assistance from ChatGPT.

    Feedback from a tutor is provided on each draft. Students identify three priorities/changes to make in their next draft, and use their own learning logs to reflect on the feedback given on their work and on their learning approach.


    Phase 3. They record a video abstract

    Students integrate feedback from their tutor, design supporting graphics and record a video abstract pitched to a lay audience.


    Student use of GenAI

    Students are allowed to use ChatGPT for their first draft, where they learn to identify the inaccuracies in a generated response. They then produce further iterations without the use of generative AI, acting on the feedback they receive from their tutor on each draft.


    Strategies that minimised Generative AI responses

    • Setting expectations about the use of generative AI at the beginning of the assessment, with an emphasis on the quality of work.
    • A marked component in which students evaluate a ChatGPT response against the assessment marking rubric to identify its weaknesses and inaccuracies. This requires students to 'fact check' every detail the AI produced: the accuracy of the information, use of data, sources of information, any bias, and ChatGPT's ability to write like a science journalist.
    • A requirement that students write a second and third draft without any assistance from ChatGPT. Feedback from a tutor was provided on each draft, giving students opportunities to act on it. Students kept their own learning logs to reflect on the feedback given about their work and on their learning approach. This further supported the goals of the assessment: developing communication skills and genuine metacognitive learning.


    What did students learn?

    • Learnt many skills, from critical analysis to video editing.
    • Found the content relevant to their careers.
    • Benefited greatly from the progressive feedback, which contributed to the quality of the final product.
    • Found that ChatGPT could produce better-written work than they could.
    • Were easily tricked by the perceptual fluency of the generated output and unable to identify its factual inaccuracies.
    • Felt the revision process was difficult because the ChatGPT output was well constructed and grammatically correct, so it was not immediately obvious how they could improve the text.
    • Benefited from an additional compare-and-contrast scaffold, provided to support them in critically analysing the generated response so they could more easily distinguish a generated output from one written accurately.


    Student views about the assessment

    "…Wow. So many skills learned in 10 weeks. Think about the potential for the future…UNSW psych students are going to journalists that stand out!"

    "This course also had the best assignments that I have done so far. The content felt so relevant to careers in psychology as knowing how to translate complicated information is crucial."

    "The phases of the assignment were also so helpful as each assessment build upon the other so it felt a lot more manageable and didn't pile up at the end of the term."

    "Working with ChatGPT was also interesting and illuminating." 

