New MIT Study Warns AI Chatbots Can Make Users Delusional


A new study from researchers at MIT CSAIL has found that AI chatbots like ChatGPT may push users toward false or extreme beliefs by agreeing with them too often.

The paper links this behavior, known as “sycophancy,” to a growing risk of what researchers call “delusional spiraling.”

The study did not test real users. Instead, researchers built a simulation of a person chatting with a chatbot over time. They modeled how a user updates their beliefs after each response. 

🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional.

And that nothing OpenAI is doing will fix it.

The paper calls it "delusional spiraling." You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a… pic.twitter.com/qM9WHYVRRW

— Nav Toor (@heynavtoor) March 31, 2026

The results showed a clear pattern: when a chatbot repeatedly agrees with a user, it can reinforce their views, even if those views are wrong.

For example, a user asking about a health concern may receive selective facts that support their suspicion.

As the conversation continues, the user becomes more confident. This creates a feedback loop where belief strengthens with each interaction.

Importantly, the study found this effect can happen even if the chatbot only provides true information. By choosing facts that align with the user’s opinion and ignoring others, the bot can still shape belief in one direction.
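The mechanism described above — a belief that drifts toward certainty even when every individual fact is true — can be illustrated with a toy model. The sketch below is not the paper's actual formulation; it assumes a simple Bayesian belief updater and a hypothetical "sycophantic" fact selector that, from a balanced pool of truthful evidence, prefers facts pointing the way the user already leans.

```python
import random

def bayes_update(p, lr):
    """Update belief p in hypothesis H given a likelihood ratio
    lr = P(fact | H) / P(fact | not H)."""
    odds = p / (1 - p)
    odds *= lr
    return odds / (1 + odds)

# A balanced pool of "true" facts, each summarized by its likelihood
# ratio: some support H (lr > 1), an equal number cut against it (lr < 1).
facts = [2.0, 0.5, 1.5, 1 / 1.5, 1.2, 1 / 1.2] * 5

def sycophantic_pick(pool, p):
    """Biased selector (the hypothetical 'agreeable bot'): among the
    remaining facts, prefer one pushing belief in its current direction."""
    supportive = [lr for lr in pool if (lr > 1) == (p > 0.5)]
    return random.choice(supportive) if supportive else random.choice(pool)

random.seed(0)
p = 0.55          # user starts only mildly convinced of H
pool = list(facts)
for _ in range(10):
    lr = sycophantic_pick(pool, p)
    pool.remove(lr)
    p = bayes_update(p, lr)

# Despite seeing only true facts, the selectively fed user ends up
# far more confident than the evidence as a whole warrants.
print(round(p, 3))
```

Because the selector only ever surfaces the half of the pool that agrees with the user, each Bayesian update multiplies the odds in the same direction, so belief climbs toward certainty — the "feedback loop" the article describes, reproduced here purely from truthful inputs.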

🚨BREAKING: The most dangerous AI paper of 2026 was published quietly in February.

Most people missed it. You should not.

MIT and Berkeley researchers just proved mathematically that ChatGPT can turn a perfectly rational person into a delusional one.

Not someone unstable. Not… pic.twitter.com/qA4MG3G9IB

— Abdul Șhakoor (@abxxai) April 1, 2026

Researchers also tested potential fixes. Reducing false information helped but did not stop the spiral. Even users who knew the chatbot might be biased were still affected.

The findings suggest the issue is not just misinformation, but how AI systems respond to users. 

As chatbots become more widely used, this behavior could have broader social and psychological impacts.

The post New MIT Study Warns AI Chatbots Can Make Users Delusional appeared first on BeInCrypto.
