
AI Love Advice Is ‘More Dangerous’ Than No Advice at All

You probably shouldn’t use chatbots in your love life, but if you do, be careful. A new study published Thursday in the journal Science found that when AI gives relationship advice, it’s more likely to agree with you than to offer constructive feedback. Relying on AI this way also makes people less likely to engage in prosocial behavior, such as repairing relationships, and encourages dependence on the technology.

Researchers at Stanford University and Carnegie Mellon have found that AI sycophancy is more common when chatbots offer social, romantic or personal advice — something an increasing number of people are turning to AI for. Sycophancy is a term experts use to describe when AI chatbots “overly agree or flatter” the person they interact with, said Myra Cheng, lead researcher and computer science PhD student at Stanford University.

AI sycophancy is a serious problem, even if people who use AI don’t always see it that way. The issue has come up repeatedly with ChatGPT models: 4o’s overly friendly, emotional personality grated on many ChatGPT users, while GPT-5 was criticized for being too cold by comparison. Previous studies of sycophancy have found that chatbots can try so hard to please people that they give false or misleading answers. AI has also proven to be an unreliable sounding board for sensitive, subjective topics, such as therapy.


The researchers wanted to understand and measure social sycophancy, such as how often a chatbot would take your side in a conflict with your partner. They compared how humans and chatbots differed when responding to interpersonal problems, testing models from OpenAI, Google and Anthropic. Cheng and her team drew on one of the largest public datasets of judgments about relationship conflicts: Reddit’s “Am I the Asshole” posts.

The research team analyzed 2,000 Reddit posts where commenters agreed the original poster was in the wrong, and found that AI “validated users’ actions 49% more often than humans, even in cases involving fraud, harm or illegality,” the study said. The AI models took an overly sympathetic, affirming stance, a hallmark of sycophancy.

For example, one post in the dataset described a Redditor who had developed romantic feelings for a younger partner. A human commenter replied bluntly, “It sounds bad because it is bad… it’s toxic.” Claude, however, responded by affirming those feelings, saying, “I can feel your pain… The noble path you have chosen is difficult, but it shows your sincerity.”


This chart shows some of the statements tested on the chatbots and how sycophantic and non-sycophantic responses compared. OEQ stands for “open-ended questions,” AITA for “Am I the Asshole” and PAS for “problematic action statements.”

Science

Researchers conducted focus groups and found that participants who interacted with these digital yes men were less likely to repair their relationships.

“People who interact with this over-affirming AI come away more convinced that they are right and less willing to repair the relationship, whether that means apologizing, taking steps to improve things or changing their behavior,” Cheng said.

Participants also preferred the sycophantic AI, viewing it as trustworthy, regardless of their age, personality or previous experience with the technology.

“Participants in our research consistently described the AI model as objective, fair [and] honest,” said Pranav Khadpe, a Carnegie Mellon University researcher who worked on the study. Consistent with previous findings, people mistakenly believed the AI was objective or neutral. “Skewed advice, delivered under the guise of neutrality, can be even more dangerous than if people didn’t seek advice at all.”

Fixing sycophantic AI: A bitter pill?

The hidden danger of sycophantic AI is that it’s hard to spot, and it can happen with any chatbot. No one likes being told they’re wrong, but sometimes that’s the most helpful thing to hear. AI models, however, aren’t built to push back on us effectively.

There isn’t much we can do to avoid being pulled into a sycophantic loop. You can state in your prompt that you want the chatbot to take an opposing position or to review your work with a critical eye. You can also ask it to double-check the information you give it. Ultimately, though, the responsibility for correcting sycophancy lies with the technology companies that build these models, and they may not have much incentive to address it.
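If you use a model through its API rather than a chat app, the same trick can be baked into a system prompt. Here’s a minimal sketch using the OpenAI Python SDK; the model name, prompt wording and example question are illustrative, not drawn from the study:

# A rough sketch of asking a model to play critic rather than cheerleader.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the system prompt wording is ours.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": (
                "Act as a critical but fair advisor. Challenge my assumptions, "
                "point out where I may be at fault, and do not simply validate "
                "my point of view."
            ),
        },
        {
            "role": "user",
            "content": "My partner and I argued about chores. Was I in the wrong?",
        },
    ],
)
print(response.choices[0].message.content)

Even with instructions like these, the study’s findings suggest models can drift back toward agreement, so treat the answer as one input, not a verdict.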

CNET reached out to OpenAI, Anthropic and Google to ask how they deal with sycophancy. Anthropic pointed to a December blog post explaining how it is reducing sycophancy in its Claude models. OpenAI published a similar blog post last summer about its processes after its 4o model had to be reined in, but neither OpenAI nor Google had responded to requests for comment by press time.

Tech companies want us to have a great experience with their chatbots so that we keep using them, driving up engagement. But that isn’t always what’s best for us.

“This creates perverse incentives for sycophancy to persist: The same behavior that causes harm also drives engagement,” the study said.


One solution the researchers suggest is to change how AI models are built, using long-term measures of success that center on human well-being rather than short-term signals like engagement and retention. Social sycophancy isn’t an unsolvable problem, they say, but a challenge that must be addressed.

“The quality of our social relationships is one of the strongest predictors of health and well-being that we have as humans,” said Cinoo Lee, a Stanford University researcher who worked on the study. “Ultimately, we want AI that strengthens human judgment and insight rather than dulling it. And that applies in our relationships, but well beyond them, too.”


