
Study: ChatGPT Accounts for 71% of Corporate Data Leaks


The rapid adoption of generative AI tools in the business world is creating significant security vulnerabilities, with new research highlighting alarming ChatGPT data leaks. A study by Harmonic Security reveals that OpenAI’s popular chatbot is the source of over 71% of all detected corporate data disclosures, despite accounting for less than half of corporate AI usage. This finding starkly illustrates the growing security challenges as AI becomes more integrated into daily workflows.

Report Highlights Scale of ChatGPT Data Leaks

The comprehensive analysis examined 22.4 million AI prompts and found that the potential for data exposure is surprisingly concentrated: just six applications account for 92.6% of the total data disclosure risk for businesses. Furthermore, approximately 2.6% of all prompts reviewed, roughly 579,000 in total, contained sensitive corporate information.
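To see how those headline figures fit together, the short sketch below simply checks that the reported 579,000 sensitive prompts correspond to roughly 2.6% of the 22.4 million prompts analyzed. It uses only the numbers quoted above and is purely an illustrative back-of-the-envelope calculation, not part of the study itself.

```python
# Back-of-the-envelope check of the figures cited from the Harmonic Security study.
# Both numbers are taken directly from the article; nothing here is new data.

total_prompts = 22_400_000      # prompts examined in the study
sensitive_prompts = 579_000     # prompts reported to contain sensitive corporate data

share = sensitive_prompts / total_prompts
print(f"Sensitive prompts as a share of all prompts: {share:.1%}")
# -> about 2.6%, matching the percentage quoted in the report
```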

The most frequently leaked data types present a serious concern for companies.

The Hidden Danger of Personal AI Accounts

One of the most critical points in the report is the danger posed by free and personal accounts. An overwhelming 87% of incidents involving sensitive data were traced back to employees using ChatGPT Free accounts. These actions fall outside the oversight of corporate IT departments, creating a significant blind spot in security protocols.

This practice raises the alarming possibility that sensitive company data could be used to train public AI models, making it accessible outside the organization. While other tools like Microsoft Copilot and Google Gemini also carry risks, the widespread use of ChatGPT magnifies its impact. The research also noted that 4% of corporate prompts were sent to China-based applications like DeepSeek, adding another layer of data governance uncertainty. Experts suggest that instead of banning AI tools, companies should implement intelligent security measures to gain visibility and control over their usage.

So, what are your thoughts on these ChatGPT security risks? Share your opinions with us in the comments!
