AI, Ethics and You (opinion)

Not a day goes by at my university without news from my inbox about AI, from pedagogy to policy. Everyone who has been at the top of the class or been on an academic committee has some advice to share. But what do they know—what do you know? To that end, there is a test at the end of this article. But before that, a little history:
The AI News Bulletin, as I think of it, started back when ChatGPT was a kid giving pablum answers and making sweet mistakes. Invariably, a professor was deriding this mindless machine that could do nothing right and passing around its dreamy output for all to see.
That view changed as AI progressed dramatically, from Claude and its ilk to the new, improved Grammarly, which goes far beyond grammar and style.
The posts then divided into two types:
- Gloom-and-doom academics collectively wringing their hands over how this Frankenstein device would destroy education.
- Happy, self-congratulatory types who had somehow managed to use AI in the classroom and couldn’t wait to tell everyone.
It’s surprising how many in the first group have never actually tried AI (just ask). And it’s disheartening how many of the young innovators don’t seem to realize how many of their students are relying on AI in ways their courses don’t sanction. As one smart, articulate student replied when I asked why he relied on such help: “Well, it’s there.”
As AI became endemic and the new normal, two new voices entered the conversation:
- A new breed of old-school professor who claimed to have solved the AI problem by restricting all student work to in-class responses on paper, or by seducing students with the joy of reading and writing.
- Big-picture pundits (rarely with any relevant credentials) who had a lot to say about the ethics and proper use of AI.
Most of them are deceiving themselves.
The restrictive type will not admit that, with nothing but blue books available, gone are the days of the research paper and any other complex work that cannot be completed in class time.
The happy humanists insist that when students encounter the Great Books, the experience will lure them away from artificial learning aids and enable them to learn amazing amounts where just yesterday they seemed unable to read 20 pages a week. And they will not admit that their examples of student conversion are cherry-picked, as recent research on AI use among students suggests, or that they teach at special colleges with classes so small that even their 100-level courses are conducted like a graduate seminar. The “I discuss with each student what they wrote” solution simply doesn’t work the way most classrooms do.
Facing AI as an inevitability is more realistic, but such discussions inevitably lead to the “ethical use of AI,” to repeat a phrase favored by many academics-turned-policymakers. Cutting through the talk: When is using AI OK, and when is it not? The usual dividing line is whether you are using AI to find information or to produce results. But where does that line really fall? What is the difference between getting outlining suggestions from a human and from an AI? When does one level of assistance become too much? “The AI is actually writing it for you,” you might say, but what if the AI merely made suggestions, some of which you accepted and some of which you rejected? Not coincidentally, the same problems cloud plagiarism, another corruption that has become much easier now that you don’t have to find the source and retype the words.
If you are one of the policy writers, prescriptivist scholars or ethicists weighing these issues, I invite you to take this test:
Multiple Choice: What’s the difference between
- checking a thesaurus for the right word
- asking a friend
- typing the question into an AI
- asking a friend to look over a manuscript
- paying a freelance writer to do the same
- asking Claude to make suggestions
- writing a committee report
- collaborating on the report with other committee members
- asking ChatGPT to write the report after feeding it the minutes
- Googling baby girl names for your future child
- looking up names in an old phone directory
- asking the AI for a name based on the desired characteristics of the child
- getting health advice from a doctor
- getting advice from a medical website
- getting medical advice from AI
- “Siri, make a list of restaurants near me.”
- “Siri, make a list of restaurants near me, ranked by the best reviews on Yelp.”
- “Siri, knowing what kind of food I like to eat, suggest suitable restaurants near me.”
Essay questions:
Is relying on AI any different from copying from a single other source?
Is the ethical or responsible use of AI equivalent to citing your sources?
How does an AI “write in X style” differ from a human doing such a task?
What is the difference between your summary of what happened and that of the AI? Does it matter if the AI is intuitive and you’re not?
Which is the worse sin: relying on Sora to bring your vision to life in a video, or using ChatGPT to write your story?
How much of the work can you hand off and still call the result your own? Can you collaborate with AI? Does 25 percent count as collaboration?
Extra credit:
Is there anything people who don’t like AI can use it for?
Are there uses of AI that people are already relying on without knowing it?
What is AI better at than you are?
I wish I had a good way to grade this test, relying not on right and wrong answers but on imagination, human consciousness, the use of technology and other matters that remain open to interpretation no matter how many articles are published on them. But if you really want to know your score, feel free to use an AI-generated rubric.



