Security concerns rise as Hugging Face hosts malicious AI models
Hugging Face, the widely used repository for generative AI models, is grappling with a significant security problem: researchers have revealed that thousands of malicious files have been uploaded to its platform. Security firms ProtectAI, HiddenLayer, and Wiz have found that hackers are using Hugging Face’s repository as a launchpad for concealed threats, including files embedded with malicious code designed to poison data, steal payment tokens, and compromise user credentials.
ProtectAI and cybersecurity partners uncover dangerous models
ProtectAI’s CEO, Ian Swanson, noted that the security landscape has evolved, with hackers now embedding harmful code directly inside AI models. When his team began scanning Hugging Face earlier this year, it identified more than 3,000 malicious files on the site. Some hackers even posed as reputable companies such as Meta and Visa, creating fake profiles to lure unsuspecting users into downloading malicious content. In one incident, a model impersonating the genomics startup 23andMe had been downloaded thousands of times before it was flagged and removed; it contained code designed to hunt for AWS credentials, which attackers could use to hijack cloud resources.
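Payloads like these commonly ride in pickle-serialized model files, a format that can run code the moment it is loaded. The benign sketch below illustrates the underlying mechanism; the class name and printed message are purely illustrative, standing in for the credential-hunting code real attackers reportedly used:

```python
import pickle

# Benign illustration of why loading an untrusted pickled model is risky:
# pickle lets any object define __reduce__, which hands the deserializer a
# callable to invoke during loading. Swap print for something like os.system
# and the file becomes a trojan. The class name here is purely illustrative.
class NotAModel:
    def __reduce__(self):
        # (callable, args) -- executed by pickle.loads(), not by the caller
        return (print, ("code executed during model load",))

payload = pickle.dumps(NotAModel())
pickle.loads(payload)  # prints the message: loading the file ran our code
```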
Hugging Face responds with new security measures
Hugging Face has responded by integrating ProtectAI’s scanning tool into its platform. The tool checks uploaded models for potentially harmful code and surfaces the results so users can see security risks before downloading. Hugging Face CTO Julien Chaumond noted that the platform has verified the profiles of major companies such as OpenAI and Nvidia since 2022, and has scanned uploaded model files for unsafe code since November 2021.
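Neither Hugging Face nor ProtectAI has published the scanner’s internals, but a minimal sketch of the kind of static check such a tool might perform is shown below: it walks a pickle file’s opcode stream with Python’s standard pickletools module and flags imports that have no business in a weights file. The blocklist is an assumption for illustration only, not ProtectAI’s actual rule set.

```python
import pickletools

# Illustrative blocklist -- an assumption, not any vendor's real rule set.
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "socket", "runpy"}

def scan_pickle(path: str) -> list[str]:
    """Flag pickle opcodes that import from suspicious modules."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # arg looks like "module qualname"; check the top-level module
            module = arg.split()[0].split(".")[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"byte {pos}: GLOBAL {arg}")
        elif opcode.name == "STACK_GLOBAL":
            # module/name are resolved from the stack at load time, so a
            # simple opcode walk cannot name them -- flag for manual review
            findings.append(f"byte {pos}: STACK_GLOBAL (inspect manually)")
    return findings

if __name__ == "__main__":
    import sys
    for hit in scan_pickle(sys.argv[1]):
        print(hit)
```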
Chaumond highlighted the platform’s efforts, stating, “We hope that our work and partnership with ProtectAI, and hopefully many more, will help better trust machine learning artifacts to make sharing and adoption easier.” As Hugging Face’s popularity increases, the risk of bad actors targeting its models grows, making enhanced security measures essential.
Security agencies warn businesses of AI model threats
The widespread presence of malicious files on Hugging Face has raised alarms beyond the AI community. In April, the U.S. Cybersecurity and Infrastructure Security Agency (CISA), along with its Canadian and British counterparts, issued a joint warning advising organizations to scan pre-trained models for malicious code and to isolate them from critical systems to limit potential damage. Hackers can inject rogue instructions into a model that hijack the host system once the model is deployed, and such tampering is difficult to detect and trace.
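On the consumer side, one practical complement to scanning, sketched below under the assumption of a PyTorch workflow and with placeholder file names, is to avoid deserializing arbitrary objects from untrusted checkpoints in the first place:

```python
import torch
from safetensors.torch import load_file  # pip install safetensors

# Option 1: prefer formats that cannot carry code. safetensors files hold
# raw tensors only, so there is no pickle stream for a payload to hide in.
weights = load_file("model.safetensors")  # placeholder path

# Option 2: if a checkpoint must be pickle-based, restrict unpickling.
# Since PyTorch 1.13, weights_only=True limits deserialization to tensors
# and primitive containers, so a __reduce__-style payload raises an error
# instead of executing.
state_dict = torch.load("model.bin", weights_only=True)  # placeholder path
```

Neither step replaces the agencies’ advice to isolate untrusted models from critical systems; they simply shrink the blast radius if a malicious file slips through.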
Growth and risks in Hugging Face’s expanding platform
Founded in 2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf, Hugging Face pivoted from a chatbot app to a machine-learning repository in 2018. Often described as the “GitHub for AI researchers,” it has become an indispensable platform in the AI industry, valued at $4.5 billion after raising $235 million in 2023. Yet rapid growth has also widened the attack surface, with Chaumond noting, “As our popularity grows, so does the number of potentially bad actors who may want to target the AI community.”
Ensuring safe access to AI tools
The presence of malicious AI models underscores the need for rigorous security measures as AI platforms continue to expand. As Hugging Face partners with ProtectAI and other security firms, developers and researchers should remain vigilant: scan files and run security checks before integrating any external AI model. These steps are critical to maintaining a safe, trustworthy AI development environment amid surging platform usage and heightened cyber threats.