
Higher Ed Shouldn’t Let AI Write Its Own Curriculum

To the editor:

Ray Schroeder’s “What Are We Teaching Now?” (April 1, 2026) asks an urgent question. But the column grants the most authority to the very products and companies that deserve the most scrutiny. It moves from OpenAI’s GDPval benchmark to Gemini-generated interpretations of “reality until 2026,” then returns to Gemini for recommendations about what colleges should teach. That progression, from a vendor’s benchmark to an AI-generated analysis to an AI-generated prescription for academia, is not a neutral choice. It outsources the essential question, letting the systems under examination account for themselves.

The problem isn’t that the column takes AI seriously; higher education should take AI seriously. The problem is that it substitutes machine-made prescriptions for human judgment and treats the future as a foregone conclusion. By asking Gemini both to describe the current moment and to prescribe the curriculum, the column does more than report on AI’s rise; it lets the technology argue for its own importance.

The column also cites Gemini-produced statistics about corporate job restructuring and unemployment without identifying the sources or methods behind them. Numerical precision lends such claims borrowed authority. At precisely the point where the argument most demands source criticism and methodological transparency, readers are asked to accept machine-generated statistics as if they were settled evidence.

Even on its own terms, GDPval is a narrower instrument than the column allows. As Schroeder notes, OpenAI presents GDPval as a measure of economically meaningful work while acknowledging its limitations and promising future iterations. A benchmark can inform the debate; it cannot decide what institutions owe students, which work should remain human or which losses are acceptable in the name of efficiency, especially when the column itself notes that the harms of automation will not be felt equally.

The task of higher education is not just to produce “AI-proof” graduates who can use the tools and verify their outputs. It must also ask the harder human questions: What must remain human? What kinds of judgment, care, interpretation and trust should not be delegated? Teaching is not merely content delivery. Writing is not merely text production. Advising is not routing. Librarianship is not retrieval. These are not add-ons; they are the processes by which students learn to judge, to be accountable and to take responsibility for others.

Colleges should teach students to interrogate AI, verify its claims and understand its limitations. But they should also reserve the right to limit or refuse AI where human judgment is the work itself: advising, feedback on student writing, research consultations and other forms of academic interaction. The question is not only what we teach now but also what we choose not to automate. Higher education should not let AI narrate its own inevitability.

Witt Salley is an adjunct faculty member at the University System of Maryland, a librarian at Montgomery County Public Libraries and former chief online learning officer.
