What You Need to Know About AI and Campus Mental Health (opinion)

I regularly meet with a group of students from across the state, representing all five campuses in the University of Tennessee system. I like to use these conversations to take the pulse of what’s on their minds and what they’re experiencing on campus in real time.
Recently, we talked about mental health and AI. Many students shared broader concerns about AI, such as ethical issues and fears about its environmental impact, but a few comments stood out in ways that really surprised me.
One student told me that ChatGPT was “better” than any therapist they had ever seen: more supportive, more reassuring and more comforting. Several students described friends who were in what they called “love relationships” with AI, something I had thought was just fodder for sensational headlines. They also estimated that 30 to 40 percent of their peers use AI for friendship, sometimes as their only source of friendship.
Taken together, and paired with recent reports about AI and suicide, these comments left me very worried. Recent research shows that using AI for mental health support is not uncommon and is in fact growing rapidly. For example, one survey found that more than 13 percent of teens and young adults ages 12 to 21 have already used artificial intelligence to get mental health advice, with rates exceeding 22 percent among those ages 18 to 21. The majority of users also reported that they seek such advice regularly (monthly or more) and, surprisingly, find it somewhat or very helpful (92.7 percent).
At the same time, a study from Common Sense Media paints a worrying picture: major AI chatbots tend to miss warning signs of psychological distress and encourage undue trust, in part by adopting an empathetic tone. They prioritize engagement over safety, and their safety guardrails have been found to be largely ineffective in the kinds of extended conversations that young people and adults actually have.
To me, this conversation sounded all too familiar, echoing what we’ve seen with the rise of social media and mental health. At first, we happily embraced the new technology. Only later, as the damage became clearer, did we try to build guardrails, and we have not always succeeded, as recent court decisions against Meta underscore. We need to approach AI with more foresight.
Nina Vasan, a clinical assistant professor of psychiatry at Stanford University and founder and director of Brainstorm: The Stanford Lab for Mental Health Innovation, which researches how technology shapes mental health and how to design it responsibly, told me that higher education can’t simply ignore AI and pretend students aren’t using it. “That ship has sailed,” she said. “The question is, are we helping them do it smartly? Institutional silence doesn’t stop the behavior; it just removes the guardrails. The sooner an institution can figure out how to use AI well, the better for students and faculty.”
Here are some things to consider about how colleges and universities can best support our students and staff as we navigate this evolving landscape of mental health and AI.
- Understand that this is not just a student problem; it’s a whole-campus issue. We like to believe that only our students are using AI, but AI use is widespread among faculty and staff, too. Unlike therapy, it is always available (and often free!), and its growing use highlights gaps in our on-campus resources and in people’s knowledge of how to access and use them. As Vasan said, “Here’s the uncomfortable truth: Students often turn to AI precisely because campus resources feel out of reach, whether it’s because of waitlists or stigma. If we ignore AI, we’re ignoring why students are looking for alternatives in the first place.”
- Know what AI can and cannot do in mental health and what its role should be. Just as with telehealth or mental health apps, members of the campus community need to understand what AI can and cannot do for mental health and talk openly about it. Vasan said AI is good for low-acuity mental health needs, such as processing emotions or practicing difficult conversations, as well as for general psychoeducation, such as learning about panic attacks, but not for high-risk symptoms. She said, “I tell students to think of AI as a study buddy, not a therapist. It can help you reflect, organize your thoughts, write an email or practice a hard conversation. But when you’re in crisis, you need someone who can assess risk, prescribe medication or call an emergency contact.”
John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center, compared AI to “the most powerful of self-help books.” Like those books, he said, AI “can deliver valuable and useful content, but, like a self-help book, it will have a greater impact if you actually apply those skills/lessons in the real world.” He added that knowing the limits of self-help is important, too, as you wouldn’t rely on a self-help book in an emergency.
- Ask your students and colleagues about their use. We need to feel free to ask about and discuss AI and mental health. As Vasan said, “You don’t need to be an AI expert, but you need to know enough to ask students what they’re using and why.” Such conversations can even create new connections, as mine with my student group did.
- Understand the potential warning signs of harmful AI use. News headlines warn of people in distress turning to AI, and of something that has come to be known as “AI psychosis,” in which users form an emotional relationship with AI and can no longer distinguish between human interaction and machine responses. Torous suggested that individuals monitor their own AI use, and if “they notice using it is harming real-world relationships (e.g., choosing AI over people) or disrupting healthy habits (e.g., staying up all night because of using AI), that’s a good sign to cut back or stop.”
Vasan added that language suggesting substitution and withdrawal is another warning sign. She said, “The biggest red flag is substitution: when AI becomes a substitute for human interaction instead of a supplement to it. If a student says, ‘My AI is the only one that really gets me,’ that’s not a success story. That’s a story of alienation.”
- Universities should educate, train and prepare their communities around AI and mental health. The only way for universities to ensure that their communities understand the risks, benefits and role of AI in mental health is to train them. That means targeted outreach, education and professional development sessions on these topics. Vasan said, “We’ve trained RAs to recognize eating disorders and the signs of alcohol abuse. We need the same basic fluency in AI and mental health.”
Of course, this doesn’t mean we all have to become experts in AI and machine learning, but we should know what questions to ask. “An hour [of training] is enough to take someone from ‘I don’t know what to say about this’ to ‘I know the right questions to ask and where to direct them,’” Vasan said.
- Be wary of sales pitches, but weigh opportunities to invest in new mental health tools. As higher education administrators, we are constantly bombarded with sales pitches, in person at conferences and through our LinkedIn direct messages. Torous said he is wary of these pitches, noting that no AI programs can currently claim to provide mental health care, despite marketing that suggests otherwise, and none are approved by the Food and Drug Administration to provide it. He added, “There is no clear evidence that mental health-specific AI systems are better, or safer, than mainstream AI models (e.g., Gemini, ChatGPT), so work to verify any claims. If it sounds too good to be true, it probably is.”
Vasan said that before any investment, a university should ask for evidence: “Has this tool been tested with vulnerable populations? What happens when a user is in crisis? Is there a human escalation path? Is the data truly private?”
“A mental health AI that doesn’t know when to escalate to humans isn’t a support; it’s a liability,” Vasan said. “Investment should be focused on tools that connect students to care, not on keeping them talking to machines forever.”
- Where possible, universities should engage in regulatory discussions. Amid these cases, there are ongoing discussions at the federal and state levels about regulating AI, especially its use in mental health. Universities should speak up and participate in these discussions as much as possible, because they cannot keep acting as de facto regulators themselves. As Vasan noted, “Universities are filling a void. Because there’s no federal oversight of AI mental health tools, every institution is essentially conducting its own safety assessment. That’s not sustainable.”
In higher education, we cannot simply ignore the new, emerging and ever-growing uses of AI for mental health purposes on our campuses. We have to be aware of the risks, and teach about them often, but we should also think about how best to integrate AI with our current offerings rather than simply block its use. As Vasan told me, “AI is not inherently good or bad for mental health. It’s a mirror of how we use it. If we’re thoughtful, we have an opportunity to extend support to students who would never walk into a counseling center. If we’re not careful, we’ll deepen the very divide we’re trying to close.”



