
Tailoring university assessments in the age of ChatGPT

By Julian Koplin, Robert Sparrow, Nicola Rivers, Joshua Hatherley
Jul 18, 2024 01:19 PM IST


OpenAI last year released ChatGPT-4, the latest iteration of its powerful Artificial Intelligence (AI) text generator. The tool can generate convincingly human-like responses to almost any question users put to it. It can write limericks, tell jokes, and plot a novel. It can draft a convincing response to almost any question a high school teacher or university lecturer might ask students to write about.

ChatGPT logo is seen in this illustration taken, March 11, 2024. REUTERS/Dado Ruvic/Illustration/File Photo(REUTERS)

Previous iterations often generated text riddled with strange and obvious mistakes. ChatGPT-4, by contrast, produces responses capable of passing exams across many disciplines. It’s tempting to think we’ll always be able to distinguish between the work of AI and the work of humans, particularly when it comes to distinctly human tasks such as creative writing, careful reasoning, and drawing novel connections between different kinds of information.

Unfortunately, this optimism is misguided. AI-generated prose and poetry can be beautiful. And with some clever prompting, AI tools can generate passable argumentative essays in philosophy and bioethics. This raises a serious worry for universities that students will be able to pass assessments without writing a single word themselves – or necessarily understanding the material they’re supposed to be tested on. This isn’t just a worry about the future; students have already begun submitting AI-generated work.

Some institutions treat the use of AI text generators as cheating. Many schools and universities have banned the use of ChatGPT, but such bans will be hard to enforce. Compared to traditional forms of plagiarism, student use of AI-generated text is hard to detect – and harder still to prove, in part because ChatGPT generates new responses each time a user inputs the same prompt.
For its part, OpenAI is developing tools to detect AI-assisted cheating – though such tools are prone to making mistakes, and can at present be circumvented by asking ChatGPT to write in a style that its detector is unlikely to catch.
Generative AI tools such as ChatGPT are poised to make far-reaching changes to how we approach writing tasks. Among other things, they’ll make some tedious and difficult parts of the writing process easier. Sam Altman, the CEO of OpenAI, has compared the release of ChatGPT to the advent of the calculator. Calculators brought about enormous benefits; ChatGPT will, Altman claims, do the same. Schools have adapted to calculators by changing how math is tested and taught; we now need to do the same for ChatGPT. Rather than comforting us, the parallel with calculators should alert us to the magnitude of the task we face.

We see two main threats posed by tools like ChatGPT. The first is that they’ll produce content that’s superficially plausible but entirely incorrect. AI outputs can thus leave us with a deeply mistaken picture of the world.

Contrary to appearances, ChatGPT is not trying (but, often, failing) to assert facts about the world. Instead, it is (successfully) performing a different task – that of generating superficially plausible or convincing responses to a prompt.

The second worry is that reliance on these tools will result in the erosion of important skills. Essay writing, for example, is valuable in part because the act of writing can help us think through difficult concepts and generate new ideas.

In these early stages of the introduction of generative AI, educators may feel overwhelmed by the rapidly changing technological environment, but students are also coming along for the ride with us.

We suggest four approaches. ChatGPT can be a useful tool. It can, for instance, help generate ideas and get words on the page. The worries about misinformation are serious. But these are best addressed by teaching students how to use these tools, how to understand their limitations, and how to fact-check their output. Fortunately, the core skills cultivated by a good education provide a strong foundation for this project. Universities should already be teaching students how to read critically, how to evaluate or corroborate evidence, and how to distinguish good arguments from bad.
One approach might be to develop specific assessment tasks where students generate, analyse, and criticise AI outputs. While such tasks might have some role to play, we would caution against placing generative AI at the centre of education.

We should remind ourselves that, for most students, the choice to pursue higher education stems from a genuine interest in a subject. This fact may go some way towards mitigating the temptation to outsource their studies to AI, particularly when the value of completing the work is clear to students. By designing assessments that are relevant to students’ future careers, and by clarifying how each task contributes to their development, we can encourage learners to engage with assessment in the way we intended.
Assessment that engages with, and leverages, students' interests could motivate learners to remain engaged such that they don’t see value in outsourcing the pursuit of their knowledge to AI.
A key worry about AI text generation is that students won’t actually understand the material their submitted work suggests they do. This concern can be met by balancing written work with other kinds of assessment. In particular, in-person oral presentations cannot be taken over by any algorithm, and so may be an ideal option (provided, of course, that any increase in workload for teaching staff is supported by the institution).
Supplementing traditional essays with other assessments need not come at the expense of good assessment design. On the contrary, there are good educational reasons to vary written work with these other kinds of assessment; oral communication skills are enormously valuable across a range of professions.
Another strategy involves designing assessments, such as invigilated examinations, in which students are required to demonstrate their own understanding. This strategy may have a role to play, but it would come at a cost. We’re amid a shift away from pen-and-paper examinations towards authentic assessments – that is, assessments that evaluate skills students will employ in real-world settings.
Few workplaces require their employees to write detailed discussions of difficult questions by hand, in isolation, and without the ubiquitous modern conveniences of an internet connection and a word processor. An alternative is to combine written essays with the presentation and discussion of this work during class time, potentially modelled after the format of a viva or thesis defence (albeit made gentler and shorter according to the cohort being taught).
In our own experiments, we found that ChatGPT can generate convincing responses about major works in our respective disciplines. However, it fares very poorly when asked about the cutting edge of scholarly debate, since the corpus of work it was trained on contains much less discussion of this work. When asked to reference its claims, it’s prone to hallucinate sources that don’t exist.
Dystopian visions in which AI teachers set tasks that students then farm out to AI look all too plausible. The immediate challenge for educators is to determine what an AI-literate skill set looks like, and how to evaluate whether students have these skills, especially when many of us are new to these skills ourselves.

The deeper challenge posed by the ‘threat’ of AI is to imagine what education would look like should the tools available to us relieve us of the need to exercise these crucial skills.

This article is authored by Julian Koplin, lecturer, Robert Sparrow, professor of philosophy, Nicola Rivers, senior teaching fellow and Joshua Hatherley, PhD candidate, Monash University, Australia.
