The Best Defense Against AI Cheating (opinion)

If you work in faculty development, you have probably heard some version of the same concern over the past year: all my students are cheating with AI. At Georgia State University, our campus teaching and learning center receives more requests for workshops on preventing AI-enabled cheating than on any other topic. During the fall 2025 semester, I have held a workshop, presentation or meeting about AI and academic integrity roughly once every four working days.
University administrators are concerned, and it shows in their responses. We've all read the stories about professors returning to blue books or choosing to retire early to avoid the deluge of machine-generated text. As we struggle with how to promote academic integrity when AI makes dishonesty so easy, higher education has largely retreated to two defenses: surveillance and persuasion.
The surveillance strategy depends on detection, an arms race we have already lost. AI detection tools are biased, easily circumvented and prone to false positives. To test this, I ran the first chapter of my book, written in 2006, through a popular AI detector. It flagged my work as 39 percent AI generated. We cannot detect our way out of this problem when our radar is broken.
The persuasion strategy, as I call it, tries to convince students to be responsible users of AI. I see universities creating syllabus statements and online modules on AI literacy, hoping that if we explain the ethics clearly enough, students will comply. But this misses the point entirely. Students usually do not cheat because they lack morals; they cheat because they operate within an incentive system that prioritizes performance over learning.
I believe that too often we design courses that punish the very thing learning requires: making mistakes. When we rely on high-stakes assessments, give little feedback and expect students to be perfect on the first try, we signal that the product matters more than the process. By removing the space for low-stakes practice and feedback, we have made the productive struggle of learning a liability. In that context, students turn to AI not to avoid learning but to avoid the risk of failure in a system that offers them no safety net.
Last fall, I had lunch with a colleague who told me he was giving up online teaching altogether. He had come to love teaching online during the pandemic but felt that the pervasive use of AI had made it impossible to connect with students and create an authentic experience. He was particularly upset that students were using AI to write discussion posts that asked for personal examples. “I ask them to share an example from their lives, and they still give me something written by an AI,” he said, clearly frustrated.
I asked about the assignment’s structure. It was the familiar “post an answer to this question, then comment on two peers’ posts” format. That is not a discussion; it is the digital equivalent of talking in an empty room. In situations like this, I don’t think students cheat because they are unethical or careless about their studies. They cheat because they are bored. They are opting out of experiences that lack meaningful feedback, real collaboration or clear learning objectives.
I have concluded that the question of how to curb AI-enabled dishonesty in our classrooms has less to do with AI, or even honesty, and more to do with our classrooms. The ease with which students can cheat using AI has revealed an uncomfortable truth: we need to do a better job of teaching. We don’t need to AI-proof every single assignment or stop teaching large online classes altogether. We need to change the way we design and teach our courses so that the hard work of learning, not cheating, becomes the more attractive option. Here are three ways I think we can do this:
- Make discussions real conversations. Let’s ditch the “post once, reply twice” formula; it has become the busywork of the digital age. Instead, use online forums for genuine interaction: peer review, structured debate or collaborative problem-solving. If online work doesn’t require human back-and-forth, it probably doesn’t need to happen on a discussion board.
- Use pedagogies that promote honesty. In a recent New York Times op-ed, psychologist Angela Duckworth argued that willpower is largely a myth. People who successfully eat healthy or cut back on social media aren’t relying on sheer self-discipline; they structure their environment so that the right choice is the easy choice. We can apply the same principle to our teaching. By scaffolding projects, incorporating process-based feedback and using grading practices that reward process whenever possible, we make doing the work more rewarding, and easier, than faking it.
- Teach small, even when the class is large. Human connection discourages cheating, and positive social pressure is a powerful motivator to do the right thing. That is easy in a small seminar, but what about a large lecture class? The key is to find ways to make students feel seen and heard. At Duke University, professor Mohamed Noor flipped his large lectures, breaking the class into small groups to solve problems during class time. On my campus at Georgia State University, five colleagues who co-teach a high-enrollment course created vertically integrated project teams. These small groups give students a way to apply course knowledge to problems they care about while developing meaningful relationships with their peers and instructors. When students feel that their contributions are valued, they are less likely to outsource them to a chatbot.
When we argue about whether to ban AI tools or how to punish AI-related misconduct, I think we are dancing around the real issue. We should take this opportunity to look closely at the way we teach. Changing how we are used to presenting content or assessing learning can seem daunting, so don’t try to do it alone. Ask a trusted colleague to look at what you are teaching and give you honest feedback on where your tasks or assignments are not supporting your learning goals. If your campus has a teaching and learning center, schedule a consultation. Speaking as a center director, I can promise you that if you bring us an assignment where you are seeing a great deal of AI misuse, we will have suggestions for improving it.
Many educators see AI as a threat, both to student learning and to academic integrity. They worry that the classroom is becoming a battleground for AI policing rather than a space for discovery. But the answer is not better surveillance. Instead, we need to focus on creating learning experiences that make students want to do their own work. The best defense against AI cheating is not a detector or a syllabus statement; it’s a class worth taking.



