IN MY OPINION

REFRAMING ACADEMIC INTEGRITY IN AN AI WORLD

By Jean Jacoby, Director, Education Futures, Massey University

It seems like someone releases an article every few days that triggers a flurry of commentary about the two AIs: academic integrity and artificial intelligence. The ongoing debate about prevention versus detection is occupying a great deal of time at many universities, but this polarised approach is a distraction from a significantly more important (and fundamental) question about the nature of learning, and therefore teaching, in a technology-mediated world, and what that means for us as 'the university'.

A growing body of research suggests Generative AI (GenAI) use is becoming increasingly difficult to detect and prevent. The rapidly increasing sophistication of these tools means that accepted indicators of AI use, such as hallucinations and fabricated references, occur less frequently, and the tools are also becoming much better at so-called AI-proof tasks, such as reflective writing.

While many of us are confident in our own ability to tell when a piece of writing is AI-generated, our confidence is probably misplaced. Human judgement is notoriously flawed at the best of times, and it is no different when it comes to detecting GenAI. An investigation into how accurately reviewers from journals in the field of Applied Linguistics (and who better to spot the output of Large Language Models than linguists?) could distinguish AI- from human-generated writing found they were correct only 39% of the time. A small study with pre-service teachers yielded slightly better results, with participants correctly identifying 45% of AI-generated texts, while a much larger project, which asked 4,600 participants to distinguish between human and AI-generated self-presentations, found an accuracy rate of 50-52%. They may as well have just tossed a coin.

Detection tools are also problematic, but for different reasons. Concerns about false positives and bias against non-native English users are generally well known. More important, though, is the fact that with some very basic prompting of the AI tool, entirely AI-generated text can be made undetectable. And unsurprisingly, a whole industry has emerged around 'humanising' text and supporting students to avoid AI detection: just Google 'humaniser AI' to see what's out there! The net result is that detection tools are likely to catch only the novice AI user, or those who cannot afford to pay for a more cunning tool to do the work for them.

There is a lot of talk about combining multiple strategies to improve detection, or about tricks to 'trap' students into inadvertently revealing their AI use. These discussions miss the point. Not only is implementing multiple detection strategies unreliable and extremely time-consuming for academic staff, but concerns about false accusations also create an undue burden for both teachers and students. What is more, this ignores the fact that students will not only be allowed but actually required to use these technologies after they have graduated.

Work by Professor Sarah Eaton on ethics and integrity in the age of AI posits the concept of 'post-plagiarism'. I am not convinced that the label she has chosen is helpful, but she is attempting to come to grips with the fact that technology and artificial intelligence infuse almost every part of our daily lives, and it is impossible to separate them from processes such as learning and communication.
She advocates for urgent conversations about what ethics and integrity (and, I would argue, learning itself) look like in this context. The issue is not about prevention or detection. It is about prevention and adaptation.