Microsoft Suing Group for Exploiting AI Service


Microsoft has taken legal action against a group the company claims intentionally developed and used tools to bypass the safety guardrails of its cloud AI products.

In a complaint filed in December in the U.S. District Court for the Eastern District of Virginia, Microsoft alleges that a group of anonymous defendants, referred to as "Does," illicitly accessed its Azure OpenAI Service using stolen customer credentials. The defendants allegedly built bespoke software to breach Microsoft's secure systems, specifically targeting technologies powered by ChatGPT maker OpenAI.

Allegations of Misconduct

Microsoft accuses the defendants of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering laws through their unauthorized access to and use of Microsoft's software to generate "offensive" and "harmful" content. Although the specifics of the illicit content remain undisclosed, Microsoft seeks both injunctive relief and damages.

Discovery of Credential Theft

The lawsuit reveals that Microsoft detected unauthorized usage of Azure OpenAI Service credentials in July 2024. These credentials, specifically API keys, are used to authenticate users or applications. The unauthorized use prompted an investigation, revealing that the API keys belonged to paying customers and had been stolen. "Defendants have engaged in a pattern of systematic API key theft," the complaint states, suggesting a structured approach to accessing multiple customers' data without permission.
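To illustrate why stolen API keys are so damaging, here is a minimal sketch of how an Azure OpenAI Service request is authenticated. The endpoint, deployment name, and key below are placeholders, not real values; the helper function is hypothetical, written only to show that possession of the key is the sole credential the service checks, so anyone holding a stolen key can act as the paying customer it belongs to.

```python
def build_azure_openai_request(endpoint: str, deployment: str, api_key: str) -> dict:
    """Assemble the URL and headers for an Azure OpenAI image-generation call.

    Azure OpenAI authenticates API requests with an `api-key` header:
    whoever presents a valid key is billed and authorized as that customer.
    """
    url = (
        f"{endpoint}/openai/deployments/{deployment}"
        f"/images/generations?api-version=2024-02-01"
    )
    headers = {
        "api-key": api_key,  # sole credential: possession equals access
        "Content-Type": "application/json",
    }
    return {"url": url, "headers": headers}


# Placeholder values for illustration only.
req = build_azure_openai_request(
    "https://example-resource.openai.azure.com",  # hypothetical resource endpoint
    "dall-e-3",
    "PLACEHOLDER_KEY",
)
```

Because the key travels with every request and is not tied to the caller's machine or identity, a key lifted from a customer's code or logs works just as well for an attacker, which is what makes the systematic theft Microsoft describes viable.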

The De3u Tool and Its Capabilities

Microsoft's filing outlines that the defendants ran a "hacking-as-a-service" scheme built on stolen API keys. Central to this scheme was a tool named de3u, designed to use the stolen keys to generate images with the DALL-E model available through the Azure OpenAI Service. The tool also reportedly circumvented Microsoft's content filtering by preventing the service from revising prompts that might otherwise trigger safety measures.



A screenshot of the de3u tool from the Microsoft complaint.

Image Credits: Microsoft

The GitHub repository hosting the de3u project has since been removed, as Microsoft attempts to mitigate further misuse.

Microsoft's complaint alleges that the combination of de3u's features and the unlawful API access enabled the defendants to effectively reverse engineer ways around Microsoft's security protocols, gaining unauthorized access to protected systems and causing significant damage and financial loss.

Ongoing Actions and Mitigations

In a blog post, Microsoft disclosed that a court has allowed it to seize a website crucial to the defendants' operations, which will aid in evidence collection and disruption of further illicit activities. Furthermore, Microsoft has implemented unspecified "countermeasures" and enhanced the safety protocols within the Azure OpenAI Service to combat similar threats in the future.
