The emergence of an artificial intelligence (AI) tool which can produce essays within seconds has sparked alarm on college campuses and prompted many Irish higher education institutions to revamp their policies on academic integrity and how they assess students.
ChatGPT, released in November by the artificial intelligence lab OpenAI, generates fluent, nuanced text in response to short prompts. Videos circulating online with millions of views show students using it to write assignments.
Quality and Qualifications Ireland (QQI), the watchdog for standards in Irish higher education, said many higher education institutions have already initiated full reviews of policies in relation to assessment and academic integrity.
“It is a matter for institutions to take time to explore the impact of these tools on the system and understand how they may harness new technological tools such as ChatGPT, while balancing out any potential risks to academic integrity,” a QQI spokeswoman said.
The National Academic Integrity Network, a group of Irish academics established by QQI, met last month to discuss ways of adapting assessments to minimise the threat of cheating, as well as guidance for students on the risks and ethics of these tools.
Billy Kelly, the network’s chair and former dean of teaching and learning at DCU, said he was stunned by the power of ChatGPT to produce well-written essays within seconds.
“I was in awe,” he said. “You’re getting pretty fluent answers back ... This has moved the dial on assessment.”
He said he entered into ChatGPT an essay title for a history module, of the kind routinely given to first-year students.
“Within five seconds it produced what was a credible first stab at the question. On the face of it, a pretty good journalistic article with references, although it wasn’t quite an academic article,” he said.
The results were less successful, he said, for very specific essay titles with less published information to draw on, and there was also a risk of facts or quotes being misrepresented.
ChatGPT – the “GPT” stands for “generative pre-trained transformer” – is part of a new wave of artificial intelligence. It is the first time such a powerful tool has been made available to the general public through a free and easy-to-use web interface.
An OpenAI spokesperson has said the lab recognised the tool could be used to mislead people and was developing technology to help people identify text generated by ChatGPT.
Academics are hopeful that detectors will soon be available to root out cheating.
Turnitin, a plagiarism detection service widely used by colleges, said recently it would incorporate more features for identifying AI, including ChatGPT, later this year.
A plenary session of the National Academic Integrity Network last month included discussion of AI chatbots, and members from Atlantic Technological University and Griffith College shared information on work they are undertaking in this area.
Mr Kelly said the mood among higher education staff at the meeting was “somewhere between alarm and concern”.
“Some see it as an existential threat to assessment, but it’s only a threat if we don’t adapt,” he said.
The new technology has already prompted some colleges to plan changes in how they assess students in the next academic year, with a greater emphasis on oral presentations, along with more specific and personalised essay titles.
While some education institutions abroad have banned the technology, many Irish colleges are opting instead to prepare students for its growing use.
A QQI spokeswoman said Irish higher education institutions are free to draw up their own policies on how AI is used on campus.
“Artificial intelligence can be used as an educational tool and students will need to understand how to use AI technology legitimately. It is important that institutions clearly communicate to their students under what circumstances the use of artificial intelligence and other tools will be considered a threat to academic integrity,” she said.
The National Academic Integrity Network is due to meet again next month for a session titled “What to do about AI text generators? Next steps for educators”. It will feature contributions from a US-based academic and a discussion of practical measures educators should consider.
The QQI spokeswoman said students outsourcing their work to an AI system is just as problematic as outsourcing it to a contract cheating service or a relative.
“At this stage, we understand that AI systems still have some limitations – for example, they can be weak on referencing. Understanding these weaknesses may provide institutions with a way of designing their assessments to combat the risk of cheating using artificial intelligence. Artificial intelligence may also lead to the prioritisation of higher-order learning that cannot be automated,” she said.
While flaws with the current technology have been flagged, OpenAI is expected to soon release another tool, GPT-4, which it says will be better at generating text than previous versions. Google has built its own chatbot, LaMDA, and several other start-up companies are working on similar forms of generative AI tools.