
Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation

By Ravie Lakshmanan | Jan 11, 2025 | AI Security / Cybersecurity

Microsoft has revealed that it’s pursuing legal action against a “foreign-based threat-actor group” for operating a hacking-as-a-service infrastructure to intentionally get around the safety controls of its generative artificial intelligence (AI) services and produce offensive and harmful content.

The tech giant’s Digital Crimes Unit (DCU) said it has observed the threat actors “develop sophisticated software that exploited exposed customer credentials scraped from public websites,” and “sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services.”

The adversaries then used these services, such as Azure OpenAI Service, and monetized the access by selling it to other malicious actors, along with detailed instructions on how to use these custom tools to generate harmful content. Microsoft said it discovered the activity in July 2024.

The Windows maker said it has since revoked the threat-actor group’s access, implemented new countermeasures, and fortified its safeguards to prevent such activity from occurring in the future. It also said it obtained a court order to seize a website (“aitism[.]net”) that was central to the group’s criminal operation.


The popularity of AI tools like OpenAI's ChatGPT has also led threat actors to abuse them for malicious ends, ranging from producing prohibited content to malware development. Microsoft and OpenAI have repeatedly disclosed that nation-state groups from China, Iran, North Korea, and Russia are using their services for reconnaissance, translation, and disinformation campaigns.

Court documents show that at least three unknown individuals are behind the operation, leveraging stolen Azure API keys and customer Entra ID authentication information to breach Microsoft systems and create harmful images using DALL-E in violation of its acceptable use policy. Seven other parties are believed to have used the services and tools the group provided for similar purposes.

The manner in which the API keys are harvested is currently not known, but Microsoft said the defendants engaged in “systematic API key theft” from multiple customers, including several U.S. companies, some of which are located in Pennsylvania and New Jersey.

“Using stolen Microsoft API Keys that belonged to U.S.-based Microsoft customers, defendants created a hacking-as-a-service scheme – accessible via infrastructure like the ‘rentry.org/de3u’ and ‘aitism.net’ domains – specifically designed to abuse Microsoft’s Azure infrastructure and software,” the company said in a filing.

According to a now-removed GitHub repository, de3u is described as a “DALL-E 3 frontend with reverse proxy support.” The GitHub account in question was created on November 8, 2023.

It’s said the threat actors took steps to “cover their tracks, including by attempting to delete certain Rentry.org pages, the GitHub repository for the de3u tool, and portions of the reverse proxy infrastructure” following the seizure of “aitism[.]net.”

Microsoft noted that the threat actors used de3u and a bespoke reverse proxy service, called the oai reverse proxy, to make Azure OpenAI Service API calls using the stolen API keys in order to unlawfully generate thousands of harmful images using text prompts. It’s unclear what type of offensive imagery was created.

The oai reverse proxy service running on a server is designed to funnel communications from de3u user computers through a Cloudflare tunnel into the Azure OpenAI Service, and transmit the responses back to the user device.

“The de3u software allows users to issue Microsoft API calls to generate images using the DALL-E model through a simple user interface that leverages the Azure APIs to access the Azure OpenAI Service,” Redmond explained.


“Defendants’ de3u application communicates with Azure computers using undocumented Microsoft network APIs to send requests designed to mimic legitimate Azure OpenAPI Service API requests. These requests are authenticated using stolen API keys and other authenticating information.”
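To make clear why stolen keys alone were sufficient, the sketch below shows roughly what an API-key-authenticated image-generation request to Azure OpenAI Service looks like, based on Azure's publicly documented REST interface. The resource name, deployment name, and API version used here are placeholder assumptions that vary per customer; the relevant point is that the "api-key" header is the only credential such a call needs to carry.

```python
# Rough sketch of a documented Azure OpenAI image-generation call.
# Assumptions: resource name, deployment name, and api-version are
# hypothetical placeholders; real values depend on the customer's setup.
import requests

AZURE_RESOURCE = "example-resource"   # hypothetical Azure OpenAI resource name
DEPLOYMENT = "dall-e-3"               # hypothetical DALL-E 3 deployment name
API_VERSION = "2024-02-01"            # assumed API version; changes over time
API_KEY = "<redacted>"                # the kind of secret stolen in this case

url = (
    f"https://{AZURE_RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/images/generations?api-version={API_VERSION}"
)

response = requests.post(
    url,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"prompt": "a watercolor painting of a lighthouse", "n": 1, "size": "1024x1024"},
    timeout=60,
)
response.raise_for_status()
# Azure returns a URL pointing to the generated image
print(response.json()["data"][0]["url"])
```

Because the request carries no interactive user sign-in, anyone holding a valid key can issue calls that look like the customer's own traffic, which is what made relaying such requests through the de3u frontend and reverse proxy viable.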

It’s worth pointing out that the use of proxy services to illegally access LLM services was highlighted by Sysdig in May 2024 in connection with an LLMjacking attack campaign that used stolen cloud credentials to target AI offerings from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure, Mistral, and OpenAI, with the access then sold to other actors.

“Defendants have conducted the affairs of the Azure Abuse Enterprise through a coordinated and continuous pattern of illegal activity in order to achieve their common unlawful purposes,” Microsoft said.

“Defendants’ pattern of illegal activity is not limited to attacks on Microsoft. Evidence Microsoft has uncovered to date indicates that the Azure Abuse Enterprise has been targeting and victimizing other AI service providers.”
