Microsoft Files Amended Complaint in Case Against Malicious Use of Its Generative AI Services

Summary
On February 27, 2025, Microsoft filed an amended complaint in its civil litigation in the Eastern District of Virginia involving alleged malicious use of its generative AI services. Microsoft filed the original complaint under seal in December 2024, and the court unsealed it in mid-January 2025. The amended complaint names certain developers of tools that Microsoft alleges were designed to bypass the guardrails of its AI services. The developers are individuals in Iran, the United Kingdom, Hong Kong, and Vietnam, and are alleged to be part of a global cybercrime network that Microsoft tracks as Storm-2139. The complaint was filed by Microsoft’s Digital Crimes Unit (DCU) as part of broader efforts to prevent the abuse of generative AI.
The complaint alleges that the defendants engaged in a malicious scheme to misuse Microsoft’s services “for improper and illegal purposes, including unlawful generation of images depicting misogyny, non-consensual intimate images of celebrities, and other sexually explicit content,” using its Azure OpenAI Service and DALL-E image generation technology. Specifically, the complaint alleges that the defendants used stolen API keys and circumvention tools to gain unauthorized access to Microsoft’s Azure-based implementation of OpenAI’s generative AI models, and then leveraged that access to bypass Microsoft’s safety measures and generate thousands of harmful images.
The complaint asserts claims for relief based on violations of the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act, false designation of origin under the Lanham Act, RICO violations predicated on wire fraud and access device fraud, common law trespass to chattels, and tortious interference.