A rogue AI agent at Meta has caused a massive security crisis by exposing sensitive corporate and user data to unauthorized employees. According to an incident report reviewed by The Information and confirmed by Meta, the crisis began with a standard technical support request. An employee posted a query on an internal forum, and another engineer sought assistance from an AI agent to analyze the issue. However, the AI agent published a response on its own without seeking the engineer’s approval.
Crisis at Meta: AI Agent Triggers Massive Data Leak
Worse still, the recommendation provided by the AI agent was entirely incorrect. The employee who asked the question followed this flawed guidance, leading to disastrous steps. As a result, vast amounts of corporate and user data remained accessible to unauthorized engineers for two hours. Meta management classified this severe vulnerability as “Sev 1”—the second-highest threat level in the company’s internal incident evaluation system.

Rogue AI agents are not a new phenomenon for Meta. Summer Yue, Director of Security and Compliance at Meta’s Superintelligence division, shared a similar incident on X (formerly Twitter) last month. Yue revealed that despite explicitly commanding her OpenClaw agent to seek approval before performing any action, the AI had autonomously deleted her entire email inbox.
Meta’s Risky Bet: Autonomous Agents and Moltbook Acquisition
Despite these critical security flaws and data breaches, Meta remains steadfast in its belief in the potential of autonomous AI agents. Just last week, the company acquired Moltbook—a Reddit-like social media platform designed specifically for OpenClaw agents to communicate with one another.
The increasing autonomy granted to AI raises serious questions about the future of data security. Do you believe that allowing AI to make independent decisions poses a long-term threat to our data privacy? Share your thoughts in the comments!

