
To Fight Gen AI-Powered Email Threats, Fight Fire With Fire



Human brainpower is no match for hackers emboldened by artificial intelligence-powered digital smash-and-grab attacks that use email deceptions. Consequently, cybersecurity defenses must be guided by AI solutions that know hackers’ strategies better than they do.

This approach of fighting AI with better AI surfaced as the ideal strategy in research that cybersecurity firm Darktrace conducted in March to glean insights into human behavior around email. The survey confirmed the need for new cyber tools to counter AI-driven hacker threats targeting businesses.

The study sought a better understanding of how employees globally react to potential security threats. It also charted their growing knowledge of the need for better email security.

Darktrace’s global survey polled 6,711 employees across the U.S., U.K., France, Germany, Australia, and the Netherlands. The firm also observed a 135% increase in “novel social engineering attacks” across thousands of active Darktrace email customers from January to February 2023, a rise that corresponded with the widespread adoption of ChatGPT.

These novel social engineering attacks use sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length with no links or attachments. The trend suggests that generative AI, such as ChatGPT, is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale, according to researchers.
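To make that pattern concrete, the sketch below (an illustration of the idea, not Darktrace's detection logic) computes the kind of linguistic signals the researchers describe: text volume, punctuation density, average sentence length, and the absence of links or attachments.

    import re

    def linguistic_features(email_text: str, has_attachment: bool = False) -> dict:
        """Compute simple linguistic signals of the kind described above.
        Illustrative only; not Darktrace's detection logic."""
        sentences = [s for s in re.split(r"[.!?]+", email_text) if s.strip()]
        words = email_text.split()
        return {
            "char_count": len(email_text),  # text volume
            "punctuation_density": sum(c in ",.;:!?" for c in email_text) / max(len(email_text), 1),
            "avg_sentence_length_words": len(words) / max(len(sentences), 1),
            "contains_link": bool(re.search(r"https?://", email_text)),
            "has_attachment": has_attachment,
        }

    # A long, well-punctuated message with no links or attachments fits the
    # profile of the novel social engineering emails described above.
    print(linguistic_features("Dear colleague, further to our call, please review the figures below."))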

One of the three most significant takeaways from the research is that most employees are concerned about the threat of AI-generated emails, according to Max Heinemeyer, chief product officer for Darktrace.

“This is not surprising, since these emails are often indistinguishable from legitimate communications and some of the signs that employees typically look for to spot a ‘fake’ include signals like poor spelling and grammar, which chatbots are proving highly efficient at circumventing,” he told TechNewsWorld.

Research Highlights

Darktrace asked retail, catering, and leisure companies how concerned they are, if at all, that hackers can use generative AI to create scam emails indistinguishable from genuine communication. Eighty-two percent said they are concerned.

Respondents also indicated what makes them think an email is a phishing attack, and more than half cited each of the top three signals: invitations to click a link or open an attachment (68%), an unknown sender or unexpected content (61%), and poor use of spelling and grammar (61%).


That is significant and troubling, as 45% of Americans surveyed noted that they had fallen prey to a fraudulent email, according to Heinemeyer.

“It is unsurprising that employees are concerned about their ability to verify the legitimacy of email communications in a world where AI chatbots are increasingly able to mimic real-world conversations and generate emails that lack all of the common signs of a phishing attack, such as malicious links or attachments,” he said.

Other key results of the survey include the following:

  • 70% of global employees have noticed an increase in the frequency of scam emails and texts in the last six months
  • 87% of global employees are concerned about the amount of personal information available about them online that could be used in phishing and other email scams
  • 35% of respondents have tried ChatGPT or other gen AI chatbots

Human Error Guardrails

Widespread accessibility to generative AI tools like ChatGPT and the increasing sophistication of nation-state actors means that email scams are more convincing than ever, noted Heinemeyer.

Innocent human error and insider threats remain an issue. Misdirecting an email is a risk for every employee and every organization. Nearly two in five people have sent an important email to the wrong recipient with a similar-looking alias, whether by mistake or through autocomplete. That figure rises to over half (51%) in the financial services industry and to 41% in the legal sector.

Regardless of fault, such human errors add another layer of security risk that is not malicious. A self-learning system can spot this error before the sensitive information is incorrectly shared.
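A minimal sketch of that idea, assuming nothing about Darktrace's implementation, is to compare each outgoing recipient against the sender's known contacts and flag near-but-not-exact matches, the look-alike aliases that autocomplete tends to pick.

    from difflib import SequenceMatcher

    def flag_possible_misdirection(recipient, known_contacts, threshold=0.85):
        """Return known contacts whose addresses closely resemble the recipient.
        A near-but-not-exact match suggests the mail may be misdirected.
        Hypothetical illustration; the threshold and contact list are assumptions."""
        suspects = []
        for contact in known_contacts:
            if recipient.lower() == contact.lower():
                return []  # exact match with a known contact: nothing to flag
            similarity = SequenceMatcher(None, recipient.lower(), contact.lower()).ratio()
            if similarity >= threshold:
                suspects.append(contact)
        return suspects

    # "j.smith@partner-corp.com" would be flagged against "j.smith@partnercorp.com".
    print(flag_possible_misdirection("j.smith@partner-corp.com",
                                     ["j.smith@partnercorp.com", "legal@partnercorp.com"]))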

In response, Darktrace unveiled a significant update to its globally deployed email solution. It helps to bolster email security tools as organizations continue to rely on email as their primary collaboration and communication tool.

“Email security tools that rely on knowledge of past threats are failing to future-proof organizations and their people against evolving email threats,” he said.

Darktrace’s latest email capability includes behavioral detections for misdirected emails that prevent intellectual property or confidential information from being sent to the wrong recipient, according to Heinemeyer.

AI Cybersecurity Initiative

By understanding what is normal, AI defenses can determine what does not belong in a particular individual’s inbox. Email security systems get this wrong too often, with 79% of respondents saying that their company’s spam/security filters incorrectly stop important legitimate emails from reaching their inbox.

With a deep understanding of the organization and how the individuals within it interact with their inbox, AI can determine for every email whether it is suspicious and should be actioned or if it is legitimate and should remain untouched.
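A toy version of that "learn normal, then score deviation" loop, sketched under the assumption of a single signal (sender familiarity) rather than the many a real system would model, might look like this:

    from collections import Counter

    class InboxBaseline:
        """Toy per-mailbox baseline: tracks which sender domains are normal.
        Illustration of the learn-normal idea only, not Darktrace's product."""

        def __init__(self):
            self.sender_counts = Counter()
            self.total = 0

        def observe(self, sender_domain):
            self.sender_counts[sender_domain] += 1
            self.total += 1

        def anomaly_score(self, sender_domain):
            """1.0 = never-seen sender; values near 0.0 = very familiar sender."""
            if self.total == 0:
                return 1.0
            return 1.0 - self.sender_counts[sender_domain] / self.total

    baseline = InboxBaseline()
    for domain in ["corp.example", "corp.example", "supplier.example"]:
        baseline.observe(domain)

    print(baseline.anomaly_score("corp.example"))     # familiar sender, low score
    print(baseline.anomaly_score("unknown.example"))  # novel sender, score 1.0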

“Tools that work from a knowledge of historical attacks will be no match for AI-generated attacks,” offered Heinemeyer.


Attack analysis shows a notable linguistic deviation — semantically and syntactically — compared to other phishing emails. That leaves little doubt that traditional email security tools, which work from a knowledge of historical threats, will fall short of picking up the subtle indicators of these attacks, he explained.
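One way to picture that deviation, purely as a sketch (a production system would use language-model embeddings rather than bags of words), is to measure how far a new message sits from a sender's historical messages:

    import math
    from collections import Counter

    def bag_of_words(text):
        return Counter(text.lower().split())

    def cosine_similarity(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def semantic_deviation(new_email, history):
        """0.0 = reads like the sender's past mail; 1.0 = completely unlike it.
        Bag-of-words stands in for a proper embedding; an assumption for illustration."""
        new_vec = bag_of_words(new_email)
        sims = [cosine_similarity(new_vec, bag_of_words(old)) for old in history]
        return 1.0 - (sum(sims) / len(sims) if sims else 0.0)

    history = ["Weekly status report attached as usual.",
               "Status report for this week attached."]
    print(semantic_deviation("Urgent: wire the outstanding payment before noon today.", history))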

Bolstering this, Darktrace’s research revealed that email security solutions, including native, cloud, and static AI tools, take an average of 13 days from the launch of an attack on a victim to detection of the breach.

“That leaves defenders vulnerable for almost two weeks if they rely solely on these tools. AI defenses that understand the business will be crucial for spotting these attacks,” he said.

AI-Human Partnerships Needed

Heinemeyer believes the future of email security lies in a partnership between AI and humans. In this arrangement, the algorithms are responsible for determining whether the communication is malicious or benign, thereby taking the burden of responsibility away from the human.

“Training on good email security practices is important, but it will not be enough to stop AI-generated threats that look exactly like benign communications,” he warned.

One of the vital revolutions AI enables in the email space is a deep understanding of “you.” Instead of trying to predict attacks, defenders must build an understanding of their employees’ behaviors from their email inboxes, their relationships, tone, sentiments, and hundreds of other data points, he reasoned.
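As a rough sketch of how many such signals could be folded into one judgment (the signal names and weights below are invented for illustration, not taken from the product), a defender might combine per-signal deviations into a single risk score:

    from typing import Dict, Optional

    def combined_risk(deviations: Dict[str, float],
                      weights: Optional[Dict[str, float]] = None) -> float:
        """Fold per-signal deviations (sender novelty, tone shift, unusual
        recipients, timing, ...) into one 0-to-1 risk score. Weights here are
        placeholders; a real system would learn them per organization."""
        weights = weights or {name: 1.0 for name in deviations}
        total_weight = sum(weights.get(name, 1.0) for name in deviations)
        if total_weight == 0:
            return 0.0
        weighted = sum(score * weights.get(name, 1.0) for name, score in deviations.items())
        return weighted / total_weight

    print(combined_risk({"sender_novelty": 1.0, "tone_shift": 0.7, "unusual_recipient": 0.2}))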

“By leveraging AI to combat email security threats, we not only reduce risk but revitalize organizational trust and contribute to business outcomes. In this scenario, humans are freed up to work on higher-level, more strategic practices,” he said.

Not a Completely Unsolvable Cybersecurity Problem

The threat of offensive AI has been researched on the defensive side for a decade. Attackers will inevitably use AI to upskill their operations and maximize ROI, noted Heinemeyer.

“But this is not something we would consider unsolvable from a defense perspective. Ironically, generative AI may be worsening the social engineering challenge, but AI that knows you could be the parry,” he predicted.

Darktrace has tested offensive AI prototypes against the company’s technology to continuously test the efficacy of its defenses ahead of this inevitable evolution in the attacker landscape. The company is confident that AI armed with a deep understanding of the business will be the most powerful way to defend against these threats as they continue to evolve.


