
Researchers Weaponize ML Models With Ransomware



As if defenders of software supply chains didn’t have enough attack vectors to worry about, they now have a new one: machine learning models.

ML models are at the heart of technologies such as facial recognition and chatbots. Like open-source software repositories, the models are often downloaded and shared by developers and data scientists, so a compromised model could have a crushing impact on many organizations simultaneously.

Researchers at HiddenLayer, a machine learning security company, revealed in a blog post on Tuesday how an attacker could use a popular ML model to deploy ransomware.

The method described by the researchers is similar to how hackers use steganography to hide malicious payloads in images. In the case of the ML model, the malicious code is hidden in the model’s data.

According to the researchers, the steganography process is fairly generic and can be applied to most ML libraries. They added that the process need not be limited to embedding malicious code in the model and could also be used to exfiltrate data from an organization.
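In broad strokes, this resembles least-significant-bit (LSB) steganography. The sketch below is a minimal Python/NumPy illustration of that general idea; the function names are hypothetical and this is not HiddenLayer's tooling, which a real attack would pair with a loader and spread across a model's layers:

```python
# Minimal sketch of LSB steganography in model weights -- illustrative
# only, not HiddenLayer's actual code.
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the lowest bit of each float32 weight."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = weights.astype(np.float32).ravel().copy()
    raw = flat.view(np.uint32)                  # reinterpret floats as ints
    raw[: bits.size] = (raw[: bits.size] & ~np.uint32(1)) | bits
    return raw.view(np.float32).reshape(weights.shape)

def extract_bytes(weights: np.ndarray, n: int) -> bytes:
    """Recover `n` bytes from the lowest bit of each weight."""
    raw = weights.astype(np.float32).ravel().view(np.uint32)
    bits = (raw[: n * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

w = np.random.randn(4, 64).astype(np.float32)   # stand-in for a layer
stego = embed_bytes(w, b"payload")
assert extract_bytes(stego, 7) == b"payload"
```

Because only the lowest mantissa bit of each 32-bit float is touched, the model's accuracy is essentially unchanged, which is part of why the tampering is so hard to spot.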

Planting malware in a machine learning model allows it to bypass traditional anti-malware defenses. (Image courtesy of HiddenLayer)


Attacks can be operating system agnostic, too. The researchers explained that OS- and architecture-specific payloads can be embedded in the model and loaded dynamically at runtime, depending on the platform.
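As a rough illustration of what such platform-aware loading could look like, here is a hypothetical Python sketch; the dictionary layout and function name are assumptions for this article, not HiddenLayer's code:

```python
# Hypothetical sketch: select an embedded payload matching the victim's
# OS and CPU architecture at load time.
import platform

def pick_payload(blobs: dict) -> bytes:
    """Return the blob for this platform, or nothing if unsupported."""
    key = (platform.system(), platform.machine())  # e.g. ("Linux", "x86_64")
    return blobs.get(key, b"")

payloads = {
    ("Windows", "AMD64"):  b"<windows x64 payload>",
    ("Linux",   "x86_64"): b"<linux x64 payload>",
    ("Darwin",  "arm64"):  b"<macos arm64 payload>",
}
blob = pick_payload(payloads)
```

On an unsupported platform the lookup simply returns an empty blob, so the model keeps working normally and raises no suspicion.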

Flying Under the Radar

Embedding malware in an ML model offers some benefits to an adversary, observed Tom Bonner, senior director of adversarial threat research at Austin, Texas-based HiddenLayer.

“It allows them to fly under the radar,” Bonner told TechNewsWorld. “It’s not a technique that’s detected by current antivirus or EDR software.”

“It also opens new targets for them,” he said. “It’s a direct route into data scientist systems. It’s possible to subvert a machine learning model hosted on a public repository. Data scientists will pull it down and load it up, then become compromised.”

“These models are also downloaded to various machine-learning ops platforms, which can be pretty scary because they can have access to Amazon S3 buckets and steal training data,” he continued.

“Most of [the] machines running machine-learning models have big, fat GPUs in them, so bitcoin miners could be very effective on those systems, as well,” he added.

HiddenLayer demonstrated how a hijacked pre-trained ResNet model executed a ransomware sample the moment it was loaded into memory by PyTorch on its test machine.
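The execute-on-load behavior is possible because PyTorch's standard model files are pickle-based, and Python's pickle protocol lets an object dictate what gets called when it is deserialized. The benign sketch below demonstrates that underlying mechanism with plain pickle rather than torch.load; it only prints a message, where a real attack would invoke the embedded payload:

```python
# Why loading an untrusted model can run code: unpickling invokes
# whatever callable a crafted object's __reduce__ method returns.
# Benign demo -- an attacker would return something like
# (os.system, ("<launch ransomware>",)) instead of print.
import pickle

class Dropper:
    def __reduce__(self):
        return (print, ("code executed during unpickling",))

blob = pickle.dumps(Dropper())   # what a tampered model file contains
pickle.loads(blob)               # runs at load time -- no call needed
```

This is also why loading only raw weight tensors, rather than arbitrary pickled objects, mitigates this class of attack.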


First Mover Advantage

Threat actors often like to exploit unanticipated vulnerabilities in new technologies, noted Chris Clements, vice president of solutions architecture at Cerberus Sentinel, a cybersecurity consulting and penetration testing company in Scottsdale, Ariz.

“Attackers looking for a first-mover advantage in these frontiers can enjoy both less preparedness and less proactive protection when exploiting new technologies,” Clements told TechNewsWorld.

“This attack on machine learning models seems like it may be the next step in the cat-and-mouse game between attackers and defenders,” he said.

Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk remediation in Tel Aviv, Israel, pointed out that threat actors will leverage whatever vectors they can to execute their attacks.

“This is an unusual vector that could sneak past quite a few common tools if done carefully,” Parkin told TechNewsWorld.

Traditional anti-malware and endpoint detection and response solutions detect ransomware through pattern-based methods, such as virus signatures and the monitoring of key API, file, and registry requests on Windows for potentially malicious activity, explained Morey Haber, chief security officer at BeyondTrust, a maker of privileged account management and vulnerability management solutions in Carlsbad, Calif.

“If machine learning is applied to the delivery of malware like ransomware, then the traditional attack vectors and even detection methods can be altered to appear non-malicious,” Haber told TechNewsWorld.

Potential for Widespread Damage

Attacks on machine learning models are on the rise, noted Karen Crowley, director of product solutions at Deep Instinct, a deep-learning cybersecurity company in New York City.

“It isn’t significant yet, but the potential for widespread damage is there,” Crowley told TechNewsWorld.

“In the supply chain, if the data is poisoned so that when the models are trained, the system is poisoned as well, that model could be making decisions that reduce security instead of strengthening it,” she explained.

“In the cases of Log4j and SolarWinds, we saw the impact not just to the organization that owns the software, but to all of its users in that chain,” she said. “Once ML is introduced, that damage could multiply quickly.”

Casey Ellis, CTO and founder of Bugcrowd, which operates a crowdsourced bug bounty platform, noted that attacks on ML models could be part of a larger trend of attacks on software supply chains.

“In the same way that adversaries may attempt to compromise the supply chain of software applications to insert malicious code or vulnerabilities, they may also target the supply chain of machine learning models to insert malicious or biased data or algorithms,” Ellis told TechNewsWorld.

“This can have significant impacts on the reliability and integrity of AI systems and can be used to undermine trust in the technology,” he said.

Pablum for Script Kiddies

Threat actors may be showing an increased interest in machine learning models because they are more vulnerable than people thought.

“People have been aware that this was possible for some time, but they didn’t realize how easy it is,” Bonner said. “It’s quite trivial to string an attack together with a few simple scripts.”

“Now that people realize how easy it is, it’s in the realm of script kiddies to pull it off,” he added.

Clements agreed that the researchers have shown it doesn’t require hardcore ML/AI data science expertise to insert malicious commands into training data that can then be triggered by ML models at runtime.

However, he continued, it does require more sophistication than run-of-the-mill ransomware attacks that mainly rely on simple credential stuffing or phishing to launch.

“Right now, I think the specific attack vector’s popularity is likely to be low for the foreseeable future,” he said.

“Exploiting this requires an attacker compromising an upstream ML model project used by downstream developers, tricking the victim into downloading a pre-trained ML model with the malicious commands embedded from an unofficial source, or compromising the private dataset used by ML developers to insert the exploits,” he explained.

“In each of these scenarios,” he continued, “it seems like there would be much easier and more straightforward ways to compromise the target than inserting obfuscated exploits into training data.”



