Employees are leaking data via GenAI tools, here's what companies should do

While celebrities like Scarlett Johansson and newspapers like The New York Times are challenging OpenAI, the poster child of the generative AI revolution, it seems that employees have already had their say. ChatGPT and similar productivity and innovation tools are growing in popularity: according to Glassdoor, half of employees use ChatGPT at work, and according to LayerX's “GenAI Data Exposure Risk Report,” 15% are pasting company and customer data into GenAI applications.

For organizations, the use of ChatGPT, Claude, Gemini and similar tools is a blessing: these tools make employees more productive, innovative and creative. But they can also turn into a wolf in sheep’s clothing, and many CISOs are concerned about the resulting risk of enterprise data loss. Fortunately, the industry is moving fast, and solutions already exist to prevent data loss through ChatGPT and all other GenAI tools while letting enterprises become the fastest and most productive versions of themselves.

GenAI: The Information Security Dilemma

With ChatGPT and all other GenAI tools, the sky’s the limit for what employees can accomplish for the business — from composing emails to designing complex products to solving intricate legal or accounting problems. And yet, organizations face a dilemma with generative AI applications. While the productivity benefits are straightforward, there are also risks of data loss.

Employees are excited about the potential of generative AI tools, but they are not always vigilant in how they use them. When employees use GenAI tools to process content or generate reports, they may also be sharing sensitive information such as product code, customer data, financial information, and internal communications.

Imagine a developer trying to fix bugs in their code. Instead of endlessly sifting through lines of code, they can paste it into ChatGPT and ask it to find the bug. ChatGPT saves them time, but it can also store proprietary source code. This code can then be used to train the model, meaning a competitor could find it via future prompts. Or, it can simply be stored on OpenAI’s servers, potentially leaked if security measures are breached.

Another scenario is a financial analyst entering the company’s numbers and asking for help with analysis or forecasting, or a salesperson or customer service representative entering sensitive customer data and asking for help composing personalized emails. In all of these examples, data that would otherwise be heavily protected by the enterprise is freely shared with external parties and can easily end up in the hands of malicious actors.

“I want to be a business enabler, but I also have to think about protecting my organization’s data,” said the Chief Information Security Officer (CISO) at a large company, who asked to remain anonymous. “ChatGPT is the new cool kid on the block, but I can’t control what data employees are sharing with it. Employees get frustrated, boardrooms get frustrated, but we have patents pending, sensitive code, we’re planning to go public in two years — that’s not information we can afford to risk.”

This CISO's concern is backed by data. A recent report from LayerX found that 4% of employees paste sensitive data into GenAI tools on a weekly basis, including internal company data, source code, PII, and customer data. When this data is typed or pasted into ChatGPT, it is essentially being exfiltrated by the employees themselves.

Without the right security solutions to manage this risk, organizations face a choice: productivity and innovation, or security? With GenAI becoming the fastest-adopted technology in history, organizations will soon be unable to say no to employees who want to accelerate and innovate with it. That would be like saying no to the cloud. Or email…

The new browser security solution

A new category of security vendors is on a mission to enable GenAI adoption while closing the security risks that come with it: browser security solutions. The idea is that employees interact with GenAI tools through the browser, or through extensions they download to their browser, so that is where the risk lies. By monitoring the data that employees type into a GenAI app, browser security solutions can alert employees to risky actions or, if necessary, block the pasting of sensitive information into GenAI tools in real time.
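As a rough sketch of how such a browser-layer control could work (this is a hypothetical illustration, not any vendor's actual implementation; the detection patterns and the warning message are invented for demonstration), an extension's content script might intercept paste events and block those that match sensitive-data patterns:

```javascript
// Illustrative sketch only -- patterns and thresholds are assumptions,
// not a real product's detection logic.

// Returns true if the text looks like it contains sensitive data.
function looksSensitive(text) {
  const patterns = [
    /\b\d{3}-\d{2}-\d{4}\b/,                              // US Social Security number
    /\b(?:\d[ -]?){13,16}\b/,                             // possible payment card number
    /-----BEGIN [A-Z ]*PRIVATE KEY-----/,                 // private key material
    /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/, // email address
  ];
  return patterns.some((re) => re.test(text));
}

// Wire up interception only when running inside a browser page.
if (typeof document !== "undefined") {
  document.addEventListener("paste", (event) => {
    const text = event.clipboardData?.getData("text") ?? "";
    if (looksSensitive(text)) {
      event.preventDefault(); // block the paste in real time
      alert("This paste appears to contain sensitive data and was blocked.");
    }
  });
}
```

In practice, real products combine far richer detectors (entropy checks, source-code classifiers, data labels) with per-site policies, but the flow is the same: inspect the input before it reaches the GenAI page.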

“As GenAI tools are so beloved by employees, security technology needs to be just as convenient and accessible,” said Or Eshed, CEO and co-founder of LayerX, an enterprise browser extension company. “Employees are not aware that their actions are risky, so security needs to ensure that their productivity is not blocked and that they are educated about any risky actions they take, so that they learn instead of holding grudges. Otherwise, security teams will have a hard time implementing GenAI data loss prevention and other security measures. But if they succeed, it’s a win-win-win.”

The technology behind this capability is based on detailed analysis of employee actions and browsing events, which are examined to detect sensitive information and potentially malicious activity. Rather than hindering the company’s progress, or leaving employees fretting that security is getting in the way of their productivity, the idea is to keep everyone happy and working while ensuring that no sensitive information is typed or pasted into GenAI tools. That means happier boards and shareholders, and of course, happier information security teams.
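One way to picture the alert-versus-block decision such tools make is a small policy table keyed by data category. The categories, heuristics, and actions below are invented assumptions for illustration, not any real product's logic:

```javascript
// Illustrative policy sketch -- category names, heuristics, and actions
// are assumptions for demonstration only.
const POLICY = {
  source_code: { severity: "high", action: "block" },
  customer_pii: { severity: "high", action: "block" },
  internal_doc: { severity: "medium", action: "warn" },
  generic_text: { severity: "low", action: "allow" },
};

// Classify a snippet into one of the categories above using crude heuristics.
function classify(text) {
  if (/-----BEGIN|function\s+\w+\s*\(|#include\s*</.test(text)) return "source_code";
  if (/\b\d{3}-\d{2}-\d{4}\b|@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/.test(text)) return "customer_pii";
  if (/\b(confidential|internal use only)\b/i.test(text)) return "internal_doc";
  return "generic_text";
}

// Decide what the browser-layer control should do with an attempted paste.
function decide(text) {
  const category = classify(text);
  return { category, ...POLICY[category] };
}
```

The "warn" path is what enables the education-over-blocking approach the vendors describe: low-severity events teach the employee in the moment, while only high-severity ones are stopped outright.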

History repeats itself

Every technological innovation has had its share of backlash. That’s the nature of people and businesses. But history shows that organizations that embraced innovation tended to outpace and outperform other players who tried to keep things the way they were.

This doesn’t require naiveté or an all-or-nothing approach. Rather, it requires a 360-degree view of innovation and a plan that covers all bases and addresses data loss risks. Fortunately, enterprises are not alone in this endeavor: they have the support of a new category of security vendors offering solutions to prevent data loss through GenAI.

VentureBeat's editorial and news team had no involvement in the creation of this content.