ShiftDelete.Net Global

ChatGPT conversations leaked online after public share flaw


Some ChatGPT conversations weren’t meant for public eyes, but thanks to a design misstep, they ended up indexed by search engines and exposed some unsettling behavior.

A Substack report from digital investigator Henk van Ess uncovered the problem last week. It started with ChatGPT’s “Share” button, a feature designed to let users send links to individual chats. The trouble? Those shared links weren’t private. Instead, they created fully public pages that search engines quietly picked up and indexed.

Anyone who stumbled across one of these links could view the entire conversation. Many did. Some even got saved to Archive.org before OpenAI pulled the plug. The result? A growing collection of chats that range from darkly hilarious to genuinely disturbing.

Some of the exposed prompts were just awkward. Others crossed into outright unethical territory. One Italian user claimed to be representing an energy company trying to displace an Amazonian tribe. They asked the bot for advice on securing the “lowest possible price” for land, noting the community didn’t “know the monetary value of land.” It was exploitation, spelled out in plain text.

Another person, working at a global think tank, used ChatGPT to explore strategies for surviving a hypothetical U.S. government collapse. That might sound like science fiction, but they treated it with cold precision, running scenarios and planning contingencies.

The most bizarre case? A lawyer who, after taking over a case mid-trial, asked ChatGPT to draft their argument, only to realize partway through that they were supposed to be representing the opposing party.

Not every chat was villainous or careless. Some revealed raw vulnerability. In one leaked thread, a victim of domestic abuse walked through an escape plan with the bot, seemingly without knowing the conversation could be read by others. Another user asked for help writing political criticism of the Egyptian government, something that could land them in prison, or worse, if traced back.

These moments highlight something deeper than digital naivety: a quiet trust users place in chat interfaces. People typed like they were talking to a private, neutral assistant. But it wasn’t private, and for a while, it wasn’t safe.

This wasn’t just a privacy glitch. It peeled back the curtain on how humans use AI when no one’s watching. Some leaned on it for help. Others offloaded disturbing tasks. The difference isn’t always clear. But one thing is: ChatGPT conversations, like any data, don’t stay secret just because they feel personal.
