Reframing Academic Integrity in an AI-World

Jean Jacoby, Director, Education Futures, Massey University

Every few days, it seems, someone publishes an article that triggers a flurry of commentary about the two AIs: academic integrity and artificial intelligence. The ongoing debate about prevention versus detection is occupying a great deal of time at many universities, but this polarised approach is a distraction from a more fundamental question: what do learning, and therefore teaching, look like in a technology-mediated world, and what does that mean for us as ‘the university’?

A growing body of research suggests that Generative AI (GenAI) use is becoming increasingly difficult to detect and prevent. As these tools grow rapidly more sophisticated, the accepted indicators of AI use, such as hallucinations and fabricated references, occur less frequently, and the tools are also becoming much better at supposedly AI-proof tasks, such as reflective writing.

While many of us are confident in our own ability to tell when a piece of writing is AI-generated, our confidence is probably misplaced. Human judgement is notoriously flawed at the best of times, and it is no different when it comes to detecting GenAI. An investigation into how accurately reviewers from journals in the field of Applied Linguistics (and who better to spot the output of Large Language Models than linguists?) could distinguish AI- from human-generated writing found they were correct only 39% of the time. A small study with pre-service teachers yielded slightly better results, with participants correctly identifying 45% of AI-generated texts, while a much larger project, which asked 4,600 participants to distinguish between human- and AI-generated self-presentations, found a 50-52% accuracy rate. They may as well have just tossed a coin.

Detection tools are also problematic, but for different reasons. Concerns about false positives and bias against non-native English users are generally well known. More important, though, is the fact that with some very basic prompting, entirely AI-generated text can be made undetectable. Unsurprisingly, a whole industry has emerged around ‘humanising’ AI output and helping students avoid AI detection: just Google ‘humaniser AI’ to see what’s out there! The net result is that detection tools are likely to catch only the novice AI user, or those who cannot afford to pay for a more cunning tool to do the work for them.

There is a lot of talk about combining multiple strategies to improve detection, or about tricks to ‘trap’ students into inadvertently revealing their AI use. These discussions miss the point. Not only is implementing multiple detection strategies unreliable and extremely time-consuming for academic staff, but concerns about false accusations also create an undue burden for both teachers and students. What is more, this approach ignores the fact that students will be not only allowed but actually required to use these technologies after they have graduated.

Work by Professor Sarah Eaton on ethics and integrity in the age of AI posits the concept of ‘postplagiarism’. I am not convinced that the label she has chosen is helpful, but she is attempting to come to grips with the fact that technology and artificial intelligence infuse almost every part of our daily lives, and that it is impossible to separate them from processes such as learning and communication. She advocates for urgent conversations about what ethics and integrity (and, I would argue, learning itself) look like when technology is inseparable from human endeavour.

This is the conversation that universities need to be having. The issue is not about prevention or detection. It is about prevention and adaptation, an argument others have made much more coherently than I can here. Where there is knowledge we believe students must be able to demonstrate without the use of AI (such as threshold concepts: foundational knowledge that, once understood, changes the way students think about a topic), we need mechanisms that reliably prevent the use of AI. Invigilation is one obvious approach to securing assessment, but there are others, such as in-person activities, orals, practicals and placements.

For everything else, we need to accept that all aspects of students’ engagement with a topic will, to a greater or lesser extent, be mediated by technology and AI, and focus instead on helping them develop the ethical, critical and information technology skills to be expert and effective users of AI. If we do not, the worst-case scenario described in this paper from MIT (in short, that over-reliance on AI will make you dumber) seems inevitable.

“The issue is not about prevention or detection. It is about prevention and adaptation”

Some universities are further ahead with this. The two-lane approach from the University of Sydney is widely regarded as the most pragmatic solution for the current situation, and many universities around the world are moving towards similar approaches. Our own approach, the AI Use Framework, is a first step towards that model because it gives us a common language to use with students, but, as one of the authors of the AIAS model on which it is based points out, it is not an assessment security instrument, and we are at risk of treating it as one. Trying to decide whether a student’s use of AI exceeds the parameters of a particular category simply pushes us back into an even more complex process of detection, and it misses the point about the importance of working with students to develop their AI knowledge and skills.

I believe there are four critical steps for universities to take:

1. Build scaffolded AI knowledge and skills into all our qualifications.

2. Map where assessments are secured across each qualification. There should be no programme in which a student can graduate without having completed secured assessments at critical points that allow us to authenticate their knowledge.

3. Mark unsecured assessments on the assumption that AI has been used, and focus on the quality of the assessment ‘product’ with that in mind. (Is the argument coherent, accurate, well-structured, etc.?) Instead of looking for so-called AI indicators such as buzzwords, emotionless writing styles and superficial arguments as a detection mechanism, simply treat such issues as poor academic writing and grade the assessment accordingly. Despite being unsecured, these assessments will still have value, as they are the opportunities for students to develop the core knowledge they need to succeed in the secured assessments. Students who choose to freewheel through these using AI will do so at their own risk.

4. Treat unparaphrased AI outputs, hallucinations and fabricated references as you would any other matter that falls along the poor-academic-practice-to-academic-misconduct continuum. The issue should not be the AI generation; rather, it is that students are submitting unchecked, inaccurate and falsified information. That is the conversation we need to be having with our students.

The pace at which these changes are occurring is challenging. At a conference I recently attended, someone commented that ‘the pace of change has never been faster and will never be this slow again’. Generative AI has been a feature of our daily interactions since 2023, yet many universities have still not made the structural adaptations to course content and assessment that we spoke about in those early days. If we want university qualifications to retain their meaning and value, it is imperative that we do so.
