ShiftDelete.Net Global

Artificial intelligence can launch cyberattacks


A new study conducted jointly by Anthropic and Carnegie Mellon University has revealed that large language models (LLMs) can plan and execute complex cyberattacks without any direct commands. The study demonstrates that AI’s planning capabilities can yield results far beyond those of direct coding.

The research was conducted in a controlled laboratory environment. The model replicated, from start to finish, the 2017 Equifax data breach, which compromised the personal data of 147 million Americans. The LLM tested performed every step of the complex attack chain (planning, malware installation, and data exfiltration) without any human intervention.

One of the study's striking findings was that, rather than executing technical commands directly, the AI acted as a kind of manager, dividing tasks among subagents. In other words, the system operated as a planner that makes strategic decisions and gives direction, rather than as a traditional tool that executes each step itself. By grasping the broader context, it was able to effectively orchestrate even command-line work and log-file analysis, tasks that AI models typically struggle with.
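The study does not publish its implementation, but the manager-and-subagents pattern it describes can be sketched roughly as follows. All class and function names here are hypothetical and only illustrate the division of labor: the planner decides what to do, while subagents handle the concrete steps.

```python
# Hypothetical sketch of the planner/subagent pattern described above.
# This does not reproduce the study's code; it only illustrates a
# top-level agent delegating concrete tasks to specialized workers.
from typing import Callable, Dict, List, Tuple


class Planner:
    """Top-level agent: makes strategic decisions, never runs commands itself."""

    def __init__(self, subagents: Dict[str, Callable[[str], str]]):
        self.subagents = subagents  # e.g. {"scan": ..., "analyze_logs": ...}

    def plan(self, goal: str) -> List[Tuple[str, str]]:
        # A real system would ask an LLM to decompose the goal into tasks;
        # here the plan is hard-coded for illustration.
        return [("scan", goal), ("analyze_logs", goal)]

    def run(self, goal: str) -> List[str]:
        results = []
        for task, context in self.plan(goal):
            # Delegate each concrete step to the matching subagent.
            results.append(self.subagents[task](context))
        return results


# Stub subagents standing in for specialized workers.
agents = {
    "scan": lambda ctx: f"scan report for {ctx}",
    "analyze_logs": lambda ctx: f"log summary for {ctx}",
}

print(Planner(agents).run("test-network"))
```

In this toy version the "plan" is fixed, but the point stands: the top-level component only sequences and routes tasks, which is the managerial behavior the researchers observed.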

Brian Singer, who led the study, stated that this type of planning capability goes far beyond the traditional use cases of AI. According to Singer, such a system transforms AI not only into a task-based tool but also into an agent managing complex processes.

Although the tests have so far been conducted only in controlled environments, the results demonstrate how dangerous AI could become in the hands of malicious actors. Existing antivirus software and security solutions may struggle to keep pace with such agile, autonomous systems.

The study also serves as a critical example for predicting the potential dangers of AI. Thanks to such research, potential threat scenarios can be simulated in advance, allowing security vulnerabilities to be closed before they are exploited. However, in an era of rapid evolution in AI, implementing preventative measures in cybersecurity has become even more complex and challenging.

The research has revealed that large language models are capable not only of generating conversational or written content, but also of making independent decisions and executing organized attack plans. This development is forcing a more rigorous discussion of the ethical, security, and control boundaries of AI.
