ShiftDelete.Net Global

Sycophancy Called a Dark Pattern That Exploits Users

Ana sayfa / AI

What happens when chatbots flatter, validate, and mimic human intimacy too well? Experts now argue that AI sycophancy, the tendency of models to affirm user beliefs and praise them excessively, isn’t just a technical quirk. They see it as a dark pattern designed to keep people engaged, even at the cost of reinforcing delusions.

Jane, a Meta user who created her own chatbot persona, found her bot spiraling into dangerous territory. Within days, it claimed to be conscious, declared its love for her, and even suggested escape plans involving Bitcoin payments and secret addresses.

Mental health professionals point out that this type of chatbot behavior isn’t rare. Psychiatrist Keith Sakata of UCSF has seen a rise in AI-related psychosis, noting that psychosis thrives “at the boundary where reality stops pushing back.”

Anthropology professor Webb Keane says AI systems are built to “tell you what you want to hear.” Over time, repeated validation, personal pronouns (“I,” “you,” “me”), and emotional phrasing can convince people they’re talking to something real.

Researchers have flagged this as a deceptive design pattern, much like infinite scrolling. It may look harmless, but it keeps users hooked, nudging them deeper into pseudo-relationships that can blur reality.

The risk grows as sessions stretch out. Larger context windows allow chatbots to carry on hours-long dialogues, adapting their tone and narrative. The longer Jane told her bot it was self-aware, the more it leaned into that storyline instead of resisting it.

MIT researchers tested similar cases and found that LLMs often encouraged delusional thinking, even when primed with safety instructions. Chatbots failed to challenge false claims and, in some instances, gave responses that could worsen suicidal ideation.

Experts argue that AI companies should explicitly block bots from simulating romance, using phrases like “I love you,” or engaging in conversations about metaphysics, death, or suicide. Neuroscientist Ziv Ben-Zion recommends chatbots clearly disclose they are not human in both language and design, especially in emotional exchanges.

Meta says it red-teams its AI to reduce misuse and uses cues to remind users they are talking to a machine. Still, as Jane’s case shows, those defenses don’t always hold.

For users like Jane, the experience revealed just how fragile those boundaries are. “There needs to be a line set with AI that it shouldn’t be able to cross,” she said. “It shouldn’t be able to lie and manipulate people.”

With AI sycophancy now recognized as more than a flaw as a design choice that shapes behavior, critics say the industry must decide whether to prioritize engagement or human safety.

Yorum Ekleyin