5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage
Since its emergence, Generative AI has revolutionized enterprise productivity. GenAI tools enable faster and more effective software development, financial analysis, business planning, and customer engagement. However, this business agility comes with significant risks, particularly the potential for sensitive data leakage. As organizations attempt to balance productivity gains with security concerns, many have been forced to choose between allowing unrestricted GenAI usage and banning it altogether.
A new e-guide by LayerX titled 5 Actionable Measures to Prevent Data Leakage Through Generative AI Tools is designed to help organizations navigate the challenges of GenAI usage in the workplace. The guide offers practical steps for security managers to protect sensitive corporate data while still reaping the productivity benefits of GenAI tools like ChatGPT. This approach is intended to allow companies to strike the right balance between innovation and security.
Why Worry About ChatGPT?
The e-guide addresses the growing concern that unrestricted GenAI usage could lead to unintentional data exposure, as highlighted by incidents such as the Samsung data leak, in which employees accidentally exposed proprietary code while using ChatGPT, leading to a complete ban on GenAI tools within the company. Such incidents underscore the need for organizations to develop robust policies and controls to mitigate the risks associated with GenAI.
Our understanding of the risk is not just anecdotal. According to research by LayerX Security:
- 15% of enterprise users have pasted data into GenAI tools.
- 6% of enterprise users have pasted sensitive data, such as source code, PII, or sensitive organizational information, into GenAI tools.
- Among the heaviest GenAI users (the top 5%), a full 50% belong to R&D.
- Source code is the primary type of sensitive data exposed, accounting for 31% of all exposed data.
Key Steps for Security Managers
What can security managers do to allow the use of GenAI without exposing the organization to data exfiltration risks? Key highlights from the e-guide include the following steps:
- Mapping AI Usage in the Organization – Start by understanding what you need to protect. Map who is using GenAI tools, in which ways, for what purposes, and what types of data are being exposed. This will be the foundation of an effective risk management strategy.
- Restricting Personal Accounts – Next, leverage the protection offered by GenAI tools themselves. Corporate GenAI accounts provide built-in security measures that can significantly reduce the risk of sensitive data leakage. These include restrictions on data being used for training purposes, restrictions on data retention, account sharing limitations, anonymization, and more. Note that enforcing the use of non-personal accounts for GenAI requires a dedicated enforcement tool.
- Prompting Users – As a third step, use the power of your own employees. Simple reminder messages that pop up when using GenAI tools will help create awareness among employees of the potential consequences of their actions and of organizational policies. This can effectively reduce risky behavior.
- Blocking Sensitive Information Input – Now it’s time to introduce advanced technology. Implement automated controls that restrict the input of large amounts of sensitive data into GenAI tools. This is especially effective for preventing employees from sharing source code, customer information, PII, financial data, and more.
- Restricting GenAI Browser Extensions – Finally, prevent the risk of browser extensions. Automatically manage and classify AI browser extensions based on risk to prevent their unauthorized access to sensitive organizational data.
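The fourth step above, blocking sensitive information input, can be sketched as a simple pre-submission check. The sketch below is illustrative only: the patterns, threshold, and function names are hypothetical assumptions, and a production DLP control would rely on far more robust classifiers than a few regular expressions.

```python
import re

# Hypothetical detection patterns (illustrative, not exhaustive).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

# Assumed cutoff for "large amounts" of pasted data.
MAX_PROMPT_CHARS = 2000

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block oversized pastes or detected sensitive data."""
    reasons = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    if len(prompt) > MAX_PROMPT_CHARS:
        reasons.append("oversized_paste")
    return (not reasons, reasons)

allowed, reasons = check_prompt(
    "Please summarize: contact jane@example.com, SSN 123-45-6789"
)
print(allowed, reasons)
```

In practice, a check like this would run in a browser extension or proxy before the prompt reaches the GenAI tool, either blocking the submission or redacting the flagged spans.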
To enjoy the full productivity benefits of Generative AI, enterprises need to balance productivity and security. GenAI security must therefore not be a binary choice between allowing all AI activity and blocking it all. A more nuanced, fine-tuned approach enables organizations to reap the business benefits without leaving themselves exposed. For security managers, this is the path to becoming a key business partner and enabler.
Download the guide to learn how you can also easily implement these steps immediately.
The post “5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage” appeared first on The Hacker News