David Swisher, Director of Online Services, Kingswood University

David Swisher is Director of Online Services and adjunct faculty at Kingswood University. He has 24 years of experience leading educational technology and instructional design in higher education and serves as a Faculty Mentor for the AAC&U Institute on AI, Pedagogy, & the Curriculum. Previously, as Senior Learning Experience Designer at Indiana Wesleyan University, he pioneered generative AI exploration, training, and implementation. Swisher holds a doctorate in Semiotics & Future Studies (George Fox University) and a Master’s in Instructional Design & Technology (Emporia State University), and recently completed a Professional Certificate in AI & Machine Learning (Purdue University).
For many educators right now, generative AI feels like an unbeatable opponent. Students have instant access to a tool that can write essays, solve problems, and produce convincing work on almost any topic in seconds, for free. Detection tools deliver unreliable results, and policies are often nonexistent or unenforceable. And the students who genuinely do their own work can seem invisible next to the ones who don’t. If you’ve felt outmaneuvered, outpaced, or unsure where to even start, that frustration is completely understandable.
But the most important question right now isn’t how to stop students from using AI, but why AI can complete our assignments so easily in the first place. That’s a harder question, but it’s the one that leads to useful solutions.
The Better Question to Ask
Over the last twenty years, I’ve worked at the intersection of educational technology and instructional design, and I've seen this pattern repeat with every major technology: adoption and integration challenges are nearly always a design problem.
Generative AI is no different. If it can produce a response that meets your assignment criteria without understanding a single word it generated, then the assessment strategy we’re using is not measuring what we think it is. That gap between what we intend to measure and what we actually measure is where the real conversation needs to happen. And closing it is something every educator has the power to do.
Why Students Cheat
Research on student motivation consistently shows that cheating is not driven simply by access to tools like AI. Students take shortcuts when they fail to see value in the assignment or do not understand why the learning process matters.
When tasks feel disconnected from their real lives, future goals, or practical application, effort declines. The same happens when the workload feels disproportionate to the perceived reward. In those conditions, shortcuts become tempting.
“If AI can complete the assignment without understanding it, the problem is design.”
AI did not create this dynamic; it merely reduced the effort required to act on it. When the stakes are high and motivation is low, the likelihood of academic dishonesty increases. One practical solution is to incorporate low-stakes, formative assessments that build competence and confidence before students must demonstrate mastery.
Cheating itself is not new. Educators across generations have blamed emerging technologies. Even in ancient Phoenicia, students traced impressions on clay tablets rather than writing independently. The core issue has never been technology, but how learning experiences are designed.
Policy Responses That Make Things Worse
Faced with rampant AI cheating, many faculty and institutions have defaulted to one of two responses: detection and punishment, or banning technology and retreating to older methods like timed in-class writing and blue books they presume are AI-proof. The impulse behind both is understandable, but they make things worse.
AI detectors are notoriously unreliable, and a growing body of recent research consistently confirms it. False positives (flagging genuinely original student work as AI-generated) happen with troubling frequency. The exceptional student who writes in a clear, confident, well-organized style with advanced vocabulary is at significant risk of being accused of cheating, with catastrophic consequences for their grade, confidence, mental health, education, and career prospects. AI detectors are simply not reliable enough to jeopardize a student’s future.
Banning AI, disabling Internet access, or reverting to blue books and timed in-class writing doesn’t eliminate cheating either. Resourceful students still find ways. What these approaches reliably do is disadvantage neurodivergent learners, ESL students, those who suffer from test anxiety, and anyone who depends on assistive technology to demonstrate what they know.
Both responses treat the symptom, yet neither touches the actual problem.
What Learning Science Tells Us
For decades, learning science has distinguished between retention and transfer. Retention is the ability to recall information on demand, such as answering a quiz, writing an essay, or repeating key facts. Transfer, by contrast, is the ability to apply knowledge to a new or unfamiliar context. That is where authentic learning occurs. Simply repeating information is not the same as understanding it; it is only memory recall. And recall is precisely what generative AI does well.
AI can produce fluent, convincing responses, but it cannot demonstrate true ownership of learning. It has no lived experience, personal stakes, or contextual awareness. It mimics patterns rather than mastering ideas. So, when assessments require students to personalize, adapt, and apply concepts to their own circumstances, AI becomes far less useful as a shortcut.
Generative AI cannot reflect on how a concept connects to a student’s specific environment, nor can it authentically document their process of thinking, struggling, revising, and growing. Learning science consistently shows that iteration, reflection, and context-driven application build durable understanding.
AI did not create an assessment crisis; it revealed an existing weakness. The real issue has always been whether assessments measure meaningful learning or merely reward recall.
Moving forward, here are some assessment types that are harder to outsource to generative AI because they require context and ownership:
• Process artifacts: drafts, checkpoints, decision logs, and revisions with rationale.
• Contextual applications: local data or examples, personal observations and experiences, or location-specific scenarios.
• Reflection prompts with evidence: “What did you try first? What failed? What did you change, and why?”
• Brief oral defenses: a short conversation to accompany the submission.
When we design for genuine transfer like this, AI stops being a threat and instead becomes a tool for critical thinking and improvement. Wouldn’t that be a great outcome?
