Humans in the Loop and Education Don’t Really Mix

Whenever I hear the phrase “human in the loop” offered as something desirable, or even as a selling point, in discussions of AI and education, I think of Homer Simpson.
As fans of The Simpsons know, Homer Simpson is an idiot who works as a safety inspector at the Springfield nuclear power plant. He is literally a human in the plant’s safety loop, meant to monitor automated processes.
In one classic episode, Homer spills jelly from a donut onto a gauge meant to indicate an impending meltdown, obscuring the readings and allowing conditions to reach a crisis point before an alarm forces Homer to act. Unfortunately, because he’s an idiot who hasn’t been paying attention to his training, he doesn’t know which button to push. Fortunately, a round of eeny, meeny, miny, moe lands on the right button. Homer becomes a town hero for averting a meltdown.
The need for people in the loop when automated systems do most of the work is obvious. When automation breaks down, we need human judgment to fix things. The challenge for the people in the loop is to make sure they understand the loop (Homer’s failure) and to maintain enough attention over the automated loop to detect when intervention is necessary (also Homer’s failure).
Autopilot in airplanes is an obvious example of a human-in-the-loop system that seems to be working. Human pilots are extensively trained to stay engaged with these systems, and the systems are designed to require active input before changing something like heading or altitude.
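To make that design principle concrete, here is a minimal sketch in Python of a confirmation-gated control change. The class and method names are my own invention for illustration, not any real avionics interface:

```python
from dataclasses import dataclass


@dataclass
class AutopilotCommand:
    """A proposed change to one flight parameter, e.g. heading or altitude."""
    parameter: str
    new_value: float


class Autopilot:
    """Toy model of an automation loop that refuses to act without the human."""

    def __init__(self) -> None:
        self.settings = {"heading": 270.0, "altitude": 35000.0}

    def request_change(self, command: AutopilotCommand, pilot_confirmed: bool) -> bool:
        # The automation may propose a change, but the loop stalls here
        # until a human actively acknowledges it.
        if not pilot_confirmed:
            print(f"Change to {command.parameter} pending pilot confirmation.")
            return False
        self.settings[command.parameter] = command.new_value
        print(f"{command.parameter} set to {command.new_value}.")
        return True


# The design forces engagement: nothing happens until the pilot says yes.
ap = Autopilot()
ap.request_change(AutopilotCommand("altitude", 31000.0), pilot_confirmed=False)
ap.request_change(AutopilotCommand("altitude", 31000.0), pilot_confirmed=True)
```

The point of the pattern is that the human cannot drift out of the loop, because the loop will not advance without them.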
But there are other human-in-the-loop systems where the human cannot realistically be trained into vigilance, because the automation over time lulls the human into indifference. It seems to work so well, until suddenly it doesn’t.
A recent article in The Atlantic by Raffi Krikorian, former head of Uber’s self-driving car program, illustrates this point. Krikorian writes, “My Tesla was driving fine – until it crashed.”
While driving his son to a Boy Scouts meeting on a route he had traveled “hundreds of times,” Krikorian experienced the crash all at once: the airbag deployed, the windshield was wrecked. Thankfully, everyone in the car was unharmed. He had used the self-driving mode routinely and without incident, right up until the car was totaled. He notes that self-driving cars travel millions of miles between accidents, but “that’s the problem.”
“We’re asking humans to monitor systems designed to make monitoring feel pointless. A machine that fails all the time keeps you sharp. A machine that never fails doesn’t need supervision. But a machine that works almost perfectly? That’s where the danger lies.”
I’ve been thinking lately that a lot of the talk about “humans in the loop” in education is maybe, just maybe, a dodge. It’s a way to escape urgent and necessary conversations about the nature of automation and the human role within automated systems, while keeping a fig leaf of concern for the people who work in those systems.
For an example close to my own expertise, consider the automated grading of student writing, where a person is kept in the loop to “check” the results of the AI. In theory, this preserves human agency and judgment over the process, but does it?
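To make the arrangement concrete, here is a minimal sketch of what such a pipeline might look like. The function names, the confidence gate, and the threshold are hypothetical illustrations of the pattern, not drawn from any particular product:

```python
import random


def llm_grade(essay: str) -> tuple[int, str]:
    """Stand-in for a call to a grading model.
    A real system would send the essay to an LLM API here."""
    score = random.randint(1, 6)  # placeholder for the model's score
    return score, "Thesis is present but underdeveloped."  # placeholder comment


def needs_human_review(confidence: float, threshold: float = 0.8) -> bool:
    """The human 'check' is typically gated by a rule like this:
    only low-confidence results get routed to a person."""
    return confidence < threshold


def grade_submission(essay: str) -> dict:
    score, comment = llm_grade(essay)
    confidence = 0.93  # placeholder; a real system would estimate this somehow
    reviewed = needs_human_review(confidence)
    if reviewed:
        # In practice this queues the essay for an instructor, who sees
        # the model's score and comment before forming their own judgment.
        pass
    return {"score": score, "comment": comment, "reviewed_by_human": reviewed}


print(grade_submission("Sample student essay text."))
```

Notice what even this toy version makes visible: the human’s “check” happens downstream of the model’s judgment, and in a setup like this one, only for the cases the system itself flags.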
The way an LLM responds to a piece of text and issues a grade or comment is very different from what a person does when reading a piece of text, even when those judgments may be similar in their results.
Does this matter? I think so. I think it means we are not talking about a system with a person in the loop, but a system with two different loops that intersect at times. Unlike autopilot systems or self-driving cars, the automation and the human are not traveling the same road to the same destination.
The way to close the gap between the human loop and the automated loop is to constrain the acceptable outcomes as much as possible. We don’t want our self-driving car deciding to take us across the country when we’re just trying to get to the grocery store.
But education doesn’t, or at least shouldn’t, work this way. There should always be some element of self-determination in the work for both students and teachers. Indeed, the system was bending toward constrained outcomes even before the arrival of generative AI, especially in writing instruction, as we were asked to rely on rubrics and other measurable results.
But these attempts at constraint squeeze out the most important forms of experience and struggle. The best favor I ever did for my students was to throw away my complicated rubrics. I was trying to keep them on the road so they could drive to the right destination (a grade), but in doing so I was denying them the very things they needed to develop as writers and thinkers: freedom of choice.
I think it’s possible that AI automation will prove useful in helping college faculty do their jobs more effectively, but that help is likely to come in areas where we can let automation work … automatically. Where we believe humans should be in the loop, I think a deeper reflection on what we are trying to achieve will reveal that the humans are the loop, or that maybe reading isn’t a loop at all, but many loops, plus curlicues and other bits that aren’t entirely measurable yet add up to something meaningful.
Introducing automation into students’ production of work before they have developed the judgment necessary to evaluate its output, or to practice vigilance over it, looks to me like a steady slide into disempowerment and incoherence.
I hear claims that we need to get students working with AI to prepare them for the future, but how sure are we that we aren’t instead raising a generation of Homer Simpsons?
