Some ChatGPT conversations were never meant for public eyes, but thanks to a design misstep they ended up indexed by search engines, exposing some unsettling behavior.
How ChatGPT conversations became public

A Substack report from digital investigator Henk van Ess uncovered the problem last week. It started with ChatGPT’s “Share” button, a feature designed to let users send links to individual chats. The trouble? Those shared links weren’t private. Instead, they created fully public pages that search engines quietly picked up and indexed.
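Under the hood, this is standard search-engine behavior: any publicly reachable page that doesn’t carry a “noindex” signal, either in an X-Robots-Tag response header or a robots meta tag, is fair game for crawlers (robots.txt permitting). Here’s a minimal Python sketch of that check; the shared-chat URL is hypothetical, and the exact headers OpenAI served on these pages may have differed.

```python
import urllib.request
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "meta" and attr_map.get("name", "").lower() == "robots":
            self.directives.append(attr_map.get("content", ""))

def is_indexable(url: str) -> bool:
    """Return True if neither an X-Robots-Tag header nor a robots meta
    tag tells crawlers to skip the page. With no 'noindex' signal, a
    public page can legitimately end up in search results."""
    req = urllib.request.Request(url, headers={"User-Agent": "indexability-check/0.1"})
    with urllib.request.urlopen(req) as resp:
        header = resp.headers.get("X-Robots-Tag") or ""
        parser = RobotsMetaParser()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    signals = [header, *parser.directives]
    return not any("noindex" in s.lower() for s in signals)

# Hypothetical shared-chat URL, for illustration only.
print(is_indexable("https://chatgpt.com/share/example-conversation-id"))
```

In other words, privacy was never a property of these pages themselves: once a share link resolved to an ordinary public URL with no opt-out signal, indexing was the default outcome, not a malfunction.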
Anyone who stumbled across one of these links could view the entire conversation. Many did. Some chats were even saved to Archive.org before OpenAI pulled the plug on the feature. The result? A growing collection of chats that range from darkly hilarious to genuinely disturbing.
The unethical side of ChatGPT conversations
Some of the exposed prompts were just awkward. Others crossed into outright unethical territory. One Italian user claimed to be representing an energy company trying to displace an Amazonian tribe. They asked the bot for advice on securing the “lowest possible price” for land, noting the community didn’t “know the monetary value of land.” It was exploitation, spelled out in plain text.
Another person, working at a global think tank, used ChatGPT to explore strategies for surviving a hypothetical U.S. government collapse. That might sound like science fiction, but they treated it with cold precision, running scenarios and planning contingencies.
The most bizarre case? A lawyer who took over a case mid-trial and asked ChatGPT to draft their argument, only to realize they were supposed to be representing the opposing party.
Five unsettling chats that surfaced
Here are five real scenarios found in the exposed ChatGPT conversations:
- A user representing an energy company seeking to underpay an Indigenous community for its land
- A think tank worker simulating the collapse of U.S. institutions
- A domestic abuse victim planning a secret escape
- An Arabic-speaking user criticizing their authoritarian government
- A confused lawyer asking for help building the wrong legal case
When the stakes are much higher
Not every chat was villainous or careless. Some revealed raw vulnerability. In one leaked thread, a victim of domestic abuse walked through an escape plan with the bot, seemingly without knowing the conversation could be read by others. Another user asked for help writing political criticism of the Egyptian government, something that could land them in prison, or worse, if it were traced back to them.
These moments highlight something deeper than digital naivety: a quiet trust users place in chat interfaces. People typed like they were talking to a private, neutral assistant. But it wasn’t private, and for a while, it wasn’t safe.
What the ChatGPT conversations reveal
This wasn’t just a privacy glitch. It peeled back the curtain on how humans use AI when no one’s watching. Some leaned on it for help. Others offloaded disturbing tasks. The difference isn’t always clear. But one thing is: ChatGPT conversations, like any data, don’t stay secret just because they feel personal.