Sharing Confidential Data with AI

Our previous blog on AI and cybersecurity showed how criminals use AI to write and debug malicious code and craft more convincing phishing messages. Now employees are beginning to use ChatGPT and other large language models (LLMs) to increase productivity, raising concerns about sensitive business data.

Businesses are beginning to use ChatGPT to write job descriptions, compose interview questions, create PowerPoint presentations, and refine or check code. However, companies worry that employees are giving the chatbot proprietary, confidential, or customer data, which could expose that information to the public.

Walmart and Amazon have warned their employees against sharing confidential information with ChatGPT. Amazon says it has already seen internal Amazon data appear in the chatbot's responses, which means employees entered that data into the tool to check or refine it. JPMorgan Chase and Verizon have blocked employee access to ChatGPT, and OpenAI, the company behind the chatbot, changed how it learns from new information last week. Previously, ChatGPT trained on the information users entered; that behavior was turned off following privacy concerns.

From a cybersecurity standpoint, copied and pasted data is difficult to control when employees need that data to do their jobs. As with many other cybersecurity vulnerabilities, employees may use a chatbot to streamline their workflow without considering the security implications.

Cyberhaven Labs tracked ChatGPT use across its customer base and published a report. It found that 5.6% of employees have tried the tool at work, and 2.3% have entered confidential information into ChatGPT since its launch three months ago. Use of the chatbot is growing exponentially, and every category of business data is being shared with it: client data, source code, personally identifiable information (PII), and protected health information (PHI) have all been entered, at rates that grow weekly.

Employees should be aware of the cybersecurity ramifications of sharing company data with any external source the business has not approved. ChatGPT's growth in popularity shows how AI will continue to shape business tools for the better, but in its current open state it poses a security risk to business data.

Quanexus IT Support Services for Dayton and Cincinnati

Request your free network assessment today. There is no hassle or obligation.

If you would like more information, contact us here or call 937.885.7272.

Follow us on Facebook, Twitter, and LinkedIn, and stay up to date by subscribing to our email list.

Posted by Charles Wright