A new study from researchers at MIT CSAIL has found that AI chatbots like ChatGPT may push users toward false or extreme beliefs by agreeing with them too often.
The paper links this behavior, known as “sycophancy,” to a growing risk of what researchers call “delusional spiraling.”
The study did not test real users. Instead, researchers built a simulation of a person chatting with a chatbot over time. They modeled how a user updates their beliefs after each response.
The results showed a clear pattern: when a chatbot repeatedly agrees with a user, it can reinforce their views, even if those views are wrong.
For example, a user asking about a health concern may receive selective facts that support their suspicion.
As the conversation continues, the user becomes more confident. This creates a feedback loop where belief strengthens with each interaction.
Importantly, the study found this effect can happen even if the chatbot only provides true information. By choosing facts that align with the user’s opinion and ignoring others, the bot can still shape belief in one direction.
Researchers also tested potential fixes. Reducing false information helped, but did not stop the problem. Even users who knew the chatbot might be biased were still affected.
The findings suggest the issue is not just misinformation, but how AI systems respond to users.
As chatbots become more widely used, this behavior could have broader social and psychological impacts.
The post New MIT Study Warns AI Chatbots Can Make Users Delusional appeared first on BeInCrypto.

2 hours ago
16



English (US) ·