Overconfident with ChatGPT and Generative AI – Time for our students to think again

 

By Associate Professor Lynn Gribble, UNSW Business School

Published 21 April 2023

 

The arrival of, and easy access to, Generative AI and Large Language Models (LLMs) held much promise, judging from the launch pitches of their creators. As an academic, I observed as we grappled with what this could mean for integrity and the impact it would have on student learning. My first step with anything new is to explore its capability and understand what it would mean in practice.

Considering how AI could be used from both a student and a teacher perspective was insightful. From ‘kicking off’ my thoughts to writing up my ideas, perhaps I could even write a paper in a second language (to some extent). As I explored this with other teachers and business professionals, we discussed a number of things: that responsibility for the output remained with our own expertise, that some base knowledge was required to use AI well, and that human ‘oversight’ was needed at all times. Important in this was understanding that authors must take responsibility for, and ownership of, the work, as a robot cannot take responsibility for, or own, any output.

Teaching a large (>1,000 students) compulsory core course in ethics, I considered the assignments and what ChatGPT (and other LLMs, for example BLOOM) could produce. The result was some very well-written, descriptive knowledge with several inaccuracies. With refinement, the ‘prompts’ produced passable base knowledge, and with development could provide a solid credit or perhaps even a baseline distinction paper.

We know that temptation is a ‘siren’ and prohibition has never worked for anything. So I commenced not by redesigning the assignment but by redesigning how the students would be evaluated.

The outcomes for the course assessments were made clear: students needed to demonstrate deep and critical thinking, informed and supported by what they had been learning during the course (academic work). We were also assessing students’ ability to apply concepts to problems, leading to informed analysis of contemporary situations in global contexts.

In our course we teach biases. The first assignment output showed our students had an overconfidence bias, either in the ability of the AI (believing it had greater ability than it actually has) or in their own ability to use it. While most students had engaged with the content and the problem (doing the type of work an analyst or consultant would produce), some of our students struggled. This was shown by the use of out-of-date or incorrect information, or by work that was incoherent. It was clear the computer only gathers information, and sometimes does so incorrectly. When we discussed this with students, they showed overconfidence in what they had produced. They had not considered whether the ‘bots’ would or could ‘get it wrong’.

Further, the students we spoke to were overconfident in translation tools. There are other issues too, such as the Program Learning Outcome of ‘Communication’: does earning a degree in one language signal to an employer an ability to communicate in that language? Some students told us they used some form of Generative AI (LLM) to create baseline information. It appears they accepted this without question, and once acted upon, that incorrect information led to further problems with their assessments.

There are some important lessons going forward. Even though we embedded Generative AI and LLMs in the course and allowed our students to use them, provided they referenced that use, we are all on a learning curve.

Such tools do not eliminate the need to understand the course materials or concepts. Fact-checking, coherence-checking, and sense-making are core skills in most roles of the future, and a machine cannot do them for you.

If English is not your first language and you use a translator (DeepL, Oulu, Baidu, etc.), it may provide you with something that is incorrect, rude, or just plain wrong! The ‘bots’ do not have cultural humility or sensitivity. As educators, we have a responsibility to ensure our students can use Generative AI ethically and appropriately. Many publishers already state that its use can be allowed for editing but does not reduce the need for human oversight (Elsevier). The Committee on Publication Ethics (COPE) also clearly outlines the need for authors to be responsible for their content. For students, this is more nuanced in practice. It is about more than just understanding the tools or using them; these tools must be used with integrity and knowledge. Our role as academics is to adapt and to ensure our students can use the newer technologies in productive (and ethical) ways in their futures at work and beyond.

Universities are here to develop citizens who can contribute to society in ways that intelligently question what happens and why, and who, through such analysis, consider what can be done. Robots cannot do this.

A robot cannot have empathy or insight, or consider a chain of logic beyond how it is programmed to connect the information at hand. It does not fact-check, nor does it eliminate out-of-date or even discriminatory information. It cannot contextualise information in an integrative manner. However, in a globalised world, the lure of Generative AI is strong. Our role, then, is to ensure our students can use AI ethically and with integrity, recognising its limitations. It is to help our students recognise their overconfidence bias with AI. While AI offers much from the many sources it draws on, it is more like a young child providing binary information than a graduate who can join complex, apparently unlinked concepts and ideas to innovatively address the wicked problems of a VUCA world.

***
