
A New Headache for SaaS Security Teams



The introduction of OpenAI's ChatGPT was a defining moment for the software industry, touching off a GenAI race with its November 2022 release. SaaS vendors are now rushing to upgrade tools with enhanced productivity capabilities that are driven by generative AI.

Among a wide range of uses, GenAI tools make it easier for developers to build software, assist sales teams in mundane email writing, help marketers produce unique content at low cost, and enable teams and creatives to brainstorm new ideas.

Recent significant GenAI product launches include Microsoft 365 Copilot, GitHub Copilot, and Salesforce Einstein GPT. Notably, these GenAI tools from leading SaaS providers are paid enhancements, a clear sign that no SaaS provider wants to miss out on cashing in on the GenAI transformation. Google will soon launch its Search Generative Experience (SGE) platform, which offers premium AI-generated summaries rather than a list of website links.

At this pace, it's only a matter of time before some form of AI capability becomes standard in SaaS applications.

Yet, this AI progress in the cloud-enabled landscape does not come without new risks and downsides for users. Indeed, the wide adoption of GenAI apps in the workplace is rapidly raising concerns about exposure to a new generation of cybersecurity threats.

Learn how to improve your SaaS security posture and mitigate AI risk

Reacting to the risks of GenAI

GenAI works by training models that generate new data mirroring the original, based on the information users share with the tools.

ChatGPT itself now warns users when they log on: "Don't share sensitive info" and "check your facts." When asked about the risks of GenAI, ChatGPT replies: "Data submitted to AI models like ChatGPT may be used for model training and improvement purposes, potentially exposing it to researchers or developers working on these models."

This exposure expands the attack surface of organizations that share internal information with cloud-based GenAI systems. New risks include the leakage of intellectual property, sensitive and confidential customer data, and PII, as well as the threat of cybercriminals using stolen information to create deepfakes for phishing scams and identity theft.

These concerns, as well as challenges to meet compliance and governance requirements, are triggering a GenAI application backlash, especially in industries and sectors that process confidential and sensitive data. According to a recent study by Cisco, more than one in four organizations have already banned the use of GenAI over privacy and data security risks.

The banking industry was among the first sectors to ban the use of GenAI tools in the workplace. Financial services leaders are hopeful about the benefits of using artificial intelligence to become more efficient and to help employees do their jobs, but 30% still ban the use of generative AI tools within their company, according to a survey conducted by Arizent.

Last month, the US House of Representatives imposed a ban on the use of Microsoft's Copilot on government-issued PCs to enhance cybersecurity measures. "The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," the House's Chief Administrative Officer Catherine Szpindor said, according to an Axios report. This ban follows the House's previous decision to restrict ChatGPT.

Dealing with a lack of oversight

Reactive GenAI bans aside, organizations are undoubtedly having trouble effectively controlling the use of GenAI as the applications penetrate the workplace without training, oversight or the knowledge of employers.

According to a recent study by Salesforce, more than half of GenAI adopters use unapproved tools at work. The research found that despite the benefits GenAI offers, a lack of clearly defined policies around its use may be putting businesses at risk.

The good news is that this might start to change now if employers follow new guidance from the US government to bolster AI governance.

In a statement issued earlier this month, Vice President Kamala Harris directed all federal agencies to designate a Chief AI Officer with the “experience, expertise, and authority to oversee all AI technologies … to make sure that AI is used responsibly.”

With the US government taking the lead in encouraging the responsible use of AI and dedicating resources to manage its risks, the next step is to find methods to safely manage the apps.

Regaining control of GenAI apps

The GenAI revolution, whose risks remain in the realm of unknown unknowns, comes at a time when the focus on perimeter protection is becoming increasingly outdated.

Threat actors today are increasingly focused on the weakest links within organizations, such as human identities, non-human identities, and misconfigurations in SaaS applications. Nation-state threat actors have recently used tactics such as brute-force password sprays and phishing to successfully deliver malware and ransomware, as well as carry out other malicious attacks on SaaS applications.

Complicating efforts to secure SaaS applications, the lines between work and personal life are now blurred when it comes to the use of devices in the hybrid work model. With the temptations that come with the power of GenAI, it will become impossible to stop employees from using the technology, whether sanctioned or not.

The rapid uptake of GenAI in the workforce should, therefore, be a wake-up call for organizations to reevaluate whether they have the security tools to handle the next generation of SaaS security threats.

To regain control and get visibility into SaaS GenAI apps or apps that have GenAI capabilities, organizations can turn to advanced zero-trust solutions such as SSPM (SaaS Security Posture Management) that can enable the use of AI while strictly monitoring its risks.

Getting a view of every connected AI-enabled app and measuring its security posture for risks that could undermine SaaS security will empower organizations to prevent, detect, and respond to new and evolving threats.
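As an illustration only, the sketch below shows what that first inventory step might look like in practice. It assumes a hypothetical JSON export of third-party OAuth app grants from an SSPM or SaaS admin console; the file name, field names, AI-related keywords, and scope markers are all assumptions for the example, not any specific vendor's schema. The goal is simply to flag connected apps that appear to be GenAI tools and hold broad access to business data.

```python
# Minimal sketch: flag AI-enabled third-party apps that hold broad data access.
# Assumes a hypothetical JSON export of OAuth app grants, e.g.:
#   [{"app_name": "Acme AI Assistant",
#     "scopes": ["https://www.googleapis.com/auth/drive"],
#     "users": 42}]
# The file name, field names, keywords, and scope markers are illustrative only.
import json

AI_KEYWORDS = ("ai", "gpt", "copilot", "assistant", "genai")
BROAD_SCOPE_MARKERS = ("drive", "mail", "calendar", "admin", "files.readwrite")

def looks_like_genai(app_name: str) -> bool:
    """Heuristic: does the app name suggest a GenAI tool?"""
    name = app_name.lower()
    return any(keyword in name for keyword in AI_KEYWORDS)

def has_broad_access(scopes: list[str]) -> bool:
    """Heuristic: does any granted scope read or write core business data?"""
    return any(marker in scope.lower() for scope in scopes for marker in BROAD_SCOPE_MARKERS)

def flag_risky_ai_apps(grants: list[dict]) -> list[dict]:
    """Return connected apps that look AI-related and hold broad data access."""
    return [
        g for g in grants
        if looks_like_genai(g.get("app_name", "")) and has_broad_access(g.get("scopes", []))
    ]

if __name__ == "__main__":
    with open("oauth_app_grants.json") as f:  # hypothetical SSPM/admin export
        grants = json.load(f)
    for app in flag_risky_ai_apps(grants):
        print(f"REVIEW: {app['app_name']} ({app.get('users', '?')} users) -> {app['scopes']}")
```

In a real deployment, an SSPM platform would replace the static export with a live API-driven inventory and correlate findings with user identities, scopes, and configuration drift; the point of the sketch is only that visibility starts with knowing which AI-enabled apps are connected and what data they can reach.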

Learn how to kickstart SaaS security for the GenAI age
