Microsoft has taken legal action against a group that illegally accessed its Azure OpenAI service and violated its content policies. The group used stolen customer credentials and custom-built software to break into its systems and generate malicious content, the company said in a lawsuit filed in the US District Court for the Eastern District of Virginia. Details are in our news…
Microsoft sues a group that hacked AI services
Microsoft's investigation found that the attackers gained access to the Azure OpenAI service by stealing customers' API keys. These keys authenticate requests to the AI models Microsoft hosts, such as GPT-4 and DALL-E. With the stolen keys, the group bypassed the service's content filtering mechanisms to produce illegal and harmful content.
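For context, Azure OpenAI authenticates requests with a per-resource API key sent in a plain request header, which is why a leaked key is enough to impersonate the paying customer it belongs to. Here is a minimal sketch of such a call; the resource name, deployment name, and API version below are placeholders for illustration, not details from the case:

```python
import requests

# Placeholder values -- a real caller would use their own resource and deployment.
RESOURCE = "my-resource"   # hypothetical Azure OpenAI resource name
DEPLOYMENT = "gpt-4"       # hypothetical model deployment name
API_KEY = "<api-key>"      # the secret that was stolen in this case

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version=2024-02-01"
)

# The key travels in a request header: whoever holds it can call the API
# and is billed (and trusted) as the legitimate customer.
response = requests.post(
    url,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(response.json())
```

Because the key alone proves identity, anyone who exfiltrates it from a customer's code or configuration inherits that customer's full access, which is exactly what the lawsuit describes.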
The group also developed a software tool called de3u. Using the stolen API keys, the tool let users generate images with the DALL-E model without writing any code. It also circumvented content filters, making it easier to create malicious images.
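According to the complaint, de3u was essentially a point-and-click front end over this same API. Microsoft has not published the tool's internals, so purely as a rough illustration of what the stolen keys unlocked, a legitimate image-generation request to a DALL-E deployment on Azure OpenAI looks roughly like this (resource and deployment names are again placeholders):

```python
import requests

RESOURCE = "my-resource"   # hypothetical Azure OpenAI resource name
DEPLOYMENT = "dall-e-3"    # hypothetical image-model deployment name
API_KEY = "<api-key>"

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/images/generations?api-version=2024-02-01"
)

# Prompts are normally screened by Microsoft's content filters before an
# image is returned; the lawsuit alleges the group worked around those checks.
response = requests.post(
    url,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"prompt": "a watercolor painting of a lighthouse",
          "n": 1, "size": "1024x1024"},
)
print(response.json()["data"][0]["url"])
```

Wrapping a call like this in a graphical tool is what allowed de3u's users to produce images without any programming knowledge.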
Microsoft has taken down the GitHub repository hosting the attackers' software and announced "countermeasures" against the attacks. With court approval, the company also seized a website central to the group's operations, which it is now using to gather evidence and analyze how the criminals monetized the scheme.
In addition, Microsoft said it has begun rolling out new security measures for the Azure OpenAI service, though it did not elaborate on what they involve. According to the company, the group built a "hack-as-a-service" operation around the stolen API keys, allowing its users to take part in illegal content production without writing any code.
Do you think the measures Microsoft has taken are enough for AI security? You can share your thoughts in the comments section below…