By Dr James Bedford, Academic Learning Facilitator, Pro Vice-Chancellor, Education & Student Experience Portfolio
Published 25 July 2023
Arthur C. Clarke once said: “Any sufficiently advanced technology is indistinguishable from magic”. It’s been 230 days since ChatGPT was released and it seems the magic has worn off a little. Yet in the wake of this, new developments are emerging. It won’t be long before Google and Microsoft 365 launch their generative AI-integrated word-processing products. GPT-4 will essentially be ‘baked’ into both programs, allowing users to generate text at the click of a button. Soon, whenever you open Microsoft Word or Google Docs, you’ll be faced with the decision: do I want to write anything myself?
I signed up to the new feature in Google’s Workspace (you can too) and have been experimenting with its capabilities for the past few weeks.
Google have chosen a wand as their magical icon, no less. It’s a miraculous button users can click to do all sorts of things.
There is the option to Refine, Formalise, Recreate, Elaborate or Rephrase selected text. In addition, there is a custom option where you can enter a more specific prompt. Forgot to add in-text citations? Just enter the prompt ‘include in-text citations in Harvard referencing style’ and the wand will do the work for you.
The AI-generated responses the wand provides sound a little too robotic, lacking voice, or perhaps the nuances of a human perspective. In many ways, it provides good examples of average responses. Each time I copy-pasted the AI-generated text into ZeroGPT, it detected the text as AI-written almost 100% of the time. I have found this is rarely the case when checking GPT-4 outputs. And while detection tools are limited in their ability to detect AI-generated text, this comparison is somewhat revealing.
As generative AI capabilities start to get embedded into more of the tools we use, we’ll no doubt see a significant impact on the way people think and write. For students, it will mean that they no longer have to spend hours slaving over essays and research papers. They can simply click a button and let the AI do the work for them. This could have a number of benefits. For example, it could free up students to spend more time on other activities, such as extracurricular activities or socializing. It could also help to improve students' grades, as they will no longer have to worry about making mistakes in their writing.
I didn’t write the above paragraph. I chose the Elaborate option and let Google’s wand do the rest. (NB: the wand begins to elaborate after the italicised text). It’s also important to note that while I’m a proponent of using generative AI in some capacities, I don’t see it as a shortcut to be utilised to save time. In fact, as an academic learning facilitator who teaches students to write and research responsibly, I would say using generative AI in your studies might equate to additional work, as it often requires further fact-checking and analysis of its outputs.
That being said, I’ve seen first-hand how these tools are helping support students. When used responsibly, generative AI’s capabilities can provide useful feedback on written work, support the brainstorming of ideas, explain difficult concepts, act as a Socratic tutor, and even communicate in multiple languages. If students are taught to question the outputs of tools like ChatGPT, and assume they are incorrect until proven otherwise, there is a lot to gain from using generative AI in appropriate ways. As one of my students remarked:
If I asked a teacher too many questions, they might get annoyed, or stressed. ChatGPT doesn’t get annoyed at me. It also speaks both languages I speak fluently. But if I rely too heavily on it, I might not think for myself and become lazy – GENY0002 Student, 2023
So, let’s assume these tools are here to stay. Because they probably are. While there are many challenges and pitfalls to this technology, when used correctly, these tools can help students learn. And we have a lot to learn from the students.
To be clear, I don’t think we need to encourage the use of generative AI in every given context, and educators should have the right to decide when these tools are and aren’t useful. But from what I’ve seen, students appreciate being taught how to use generative AI responsibly. They are eager to learn more. So far, our Academic Skills sessions on generative AI have been some of our most popular. In a recent session we hosted over 75 online participants interested in developing their understanding of generative AI.
The session mainly focused on improving AI literacy, a term defined as a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace (Long and Magerko, 2020, p.2).
During the workshop I asked students about their current use of AI tools and found 55% were already using them in their studies. Interestingly enough, 50% weren’t sure if they were allowed to. While this was only a small sample size, these numbers do seem to be representative of the broader conversations I’ve been having with students. In addition, such uncertainty seems to reflect the need for further discussions around AI use, particularly as it relates to coursework and academic integrity.
For some ideas on how to start these conversations with students, refer to our conversation starters resource (access to document for UNSW members).
As products like Microsoft’s Copilot and Google’s wand become more widely available, we will see more and more students, and indeed staff, adapting to these tools. Perhaps instead of rushing to embrace or ban this technology, or jumping to reactive conclusions, we can take a breath and start to teach students how to approach these tools critically. Perhaps, as educators, it is now our responsibility to teach students how to use their new magical wands.