Compliance

Sharing Confidential Data with AI

Our previous blog on AI and cybersecurity showed how criminals use AI to help them write and debug malicious code and create more convincing phishing prompts. Now, however, employees are beginning to use ChatGPT and other large language models (LLMs) to increase productivity, raising concerns about sensitive business data.

Businesses are beginning to use ChatGPT to write job descriptions, compose interview questions, create PowerPoint presentations, and refine or check code. However, companies are concerned that employees are giving the chatbot proprietary, secure, or customer data, which may open that information up to the public.

Walmart and Amazon have warned their employees against sharing confidential information with ChatGPT. Amazon has already said it has seen internal Amazon data appear in the chatbot's responses, which means employees entered that data into the tool to check or refine it. JPMorgan Chase and Verizon have blocked employee access to ChatGPT, and OpenAI, the company behind the chatbot, recently changed how it learns new information. Previously, ChatGPT trained on the information users entered; that behavior was turned off following privacy concerns.

From a cybersecurity standpoint, it’s challenging to control copied and pasted data if the employee needs the data to do their job. Like many other cybersecurity vulnerabilities, employees may use a chatbot tool to streamline their workflow without considering the security implications.

Cyberhaven Labs tracked the use of ChatGPT across their customer base and published a report. They found that 5.6% of employees have tried the tool at work, and 2.3% have entered confidential information into ChatGPT since its launch three months ago. Use of the chatbot is growing exponentially, and all categories of business data are being shared with it. Client data, source code, personally identifiable information (PII), and protected health information (PHI) have all been entered into the tool, at rates that grow weekly.

Employees should be aware of the cybersecurity ramifications of sharing company data with any external source not approved by the business. ChatGPT's growth in popularity shows how AI will continue to influence business tools for good, but in its current open state it poses a security risk for business data.

Quanexus IT Support Services for Dayton and Cincinnati

Request your free network assessment today. There is no hassle or obligation.

If you would like more information, contact us here or call 937.885.7272.

Follow us on Facebook, Twitter, and LinkedIn, and stay up to date by subscribing to our email list.

Posted by Charles Wright in Cybersecurity, Information Security, Recent Posts, Small Business

Insider Security Threats

A new report reveals that the growing use of cloud data makes insider security threats more difficult to detect and prevent. Insider security threats affect more than 34% of businesses and have increased by 47% over the past two years as many industries move to cloud storage.

Most insider security threats come from negligence. Only about one-third of insider threats come from malicious or disgruntled employees or contractors looking to do damage. The other two-thirds come from users who bypass security rules for convenience or who make human errors. These users may store confidential data on personal devices or share passwords to make their jobs easier. Negligent users may also hand data to a criminal in a phishing attack.

Malicious insider threats include former employees who steal data during their offboarding process or current employees working with third-party organizations seeking to harm the company.

Storing business data in the cloud introduces new insider security threats that may not have been an issue on physical servers. Many businesses are adding cloud storage without an understanding of segmentation, monitoring, and access controls.

Education is the first line of defense against insider security threats. Businesses should have clear guidelines on personal device use, including USB drives, and those policies should be communicated regularly to employees. A large percentage of insider data breaches occur from an employee trying to make their job easier, so it’s essential to communicate how confidential and privileged data should be used.

Next, users should only have access to the data they need to perform their job. The Principle of Least Privilege is just as important in cloud data management, and it is an aspect of security that is often overlooked in the transition to the cloud. Businesses can also implement tools that restrict the copying and transferring of data, so users can access the assets they need to do their job but cannot move them.
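The idea behind the Principle of Least Privilege can be illustrated with a minimal deny-by-default access check. This is only a sketch; the role names and permission strings below are hypothetical examples, not part of any specific product:

```python
# Minimal sketch of the Principle of Least Privilege: each role is granted
# only the permissions required for its job, and every other action is
# denied by default. Role names and permission strings are hypothetical.

ROLE_PERMISSIONS = {
    "billing_clerk": {"read:invoices"},
    "hr_manager": {"read:employee_records", "write:employee_records"},
    "auditor": {"read:invoices", "read:employee_records"},  # read-only access
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default; allow only permissions explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("billing_clerk", "read:invoices"))           # True
print(is_allowed("billing_clerk", "write:employee_records"))  # False
print(is_allowed("former_employee", "read:invoices"))         # False
```

The key design choice is the default: an unknown role or an ungranted action returns False, so forgetting to configure someone results in too little access rather than too much.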

Lastly, pay attention to third-party vendors. Vendors are often granted access to cloud data but may not have the same security policies in place as the original organization. Additionally, the method used to transfer data to the third party is another avenue for a breach.


Posted by Charles Wright in Back to Basics, Cybersecurity, Information Security, Recent Posts, Small Business

Ohio Cybersecurity Court Ruling

The Ohio Supreme Court reversed a lower court ruling and ruled against a local business in a ransomware attack case. EMOI Services, a medical billing company in Kettering, Ohio, was the victim of a ransomware attack in September 2019. The attacker encrypted the company's data and demanded a ransom of $35,000 for the encryption key. EMOI Services paid the ransom, updated their systems, and were able to get their services back online. After the incident, EMOI filed a claim against their business owner's insurance policy, which included a data compromise endorsement. The insurance company denied the claim, responding that the policy did not cover "extortion, blackmail, or ransom payments." Additionally, the insurance provider claimed the policy did not apply to the incident because there was no physical damage to equipment or media.

EMOI sued their insurance provider, Owners Insurance Co., claiming software was damaged in the attack and should be covered by the insurance policy. The Ohio trial court sided with the insurance company, finding EMOI was not entitled to coverage for the attack. EMOI appealed, and the appellate court sided with the medical billing company: the Ohio Second District Court of Appeals ruled that damage to software constituted damage to media and should be covered by the policy.

Owners Insurance Co. appealed to the Ohio Supreme Court, which reversed the appellate court and ruled in favor of the insurance company. The Ohio Supreme Court ruled, "Since software is an intangible item that cannot experience direct physical loss or direct physical damage, the endorsement does not apply in this case." Even though the electronic-equipment endorsement covered media such as disks or cards, the Ohio Supreme Court ruled the information stored on that media was not covered.

Comments from Jack Gerbs, CIO, Quanexus, Inc.

I have read the Ohio Supreme Court's ruling, and a few things stood out. First, the policy is defined as a business owner's insurance policy; I don't know if the language would be different in a dedicated cyber insurance policy. The court based its ruling on the fact that there was no physical damage. This case points out the importance of dealing with a company that understands this new cyber insurance market, and it is why we recommend having an attorney with experience in this area review cyber policies. As I have mentioned in previous newsletters and blogs, the cyber insurance market is growing very fast, and as you insure against these risks it is important to understand the language in your cyber insurance policy.


Posted by Charles Wright in Cybersecurity, Information Security, Small Business

AI and Cybersecurity

AI Cybersecurity Opportunities and Threats

Artificial intelligence (AI) is a growing resource utilized in cybersecurity to help detect and prioritize attacks. Watch our latest video blog to see how AI makes enterprise-level security tools more accessible in SIEM solutions. However, concerns are growing about how cybercriminals may also use AI in the near future.

ChatGPT is a new AI chatbot introduced by OpenAI in November of 2022. The chatbot went viral shortly after its release, and users have raised alarms that the software may be able to write malicious code. The chatbot is unique because of its ability to write and debug software in various programming languages. The chatbot can also explain complex topics, compose music, answer test questions, and write student essays. Unlike most chatbots, ChatGPT is conversational and remembers and builds on previous prompts in the same conversation.

Because of the new tool's ability to write and debug software, there is growing concern that criminals will use the chatbot to write malicious software and compose phishing campaigns. ChatGPT has security controls intended to keep it from writing malware on request. However, developers have successfully bypassed those controls and recently got the chatbot to write malware code. One concern is that less experienced attackers could use AI-generated code to launch malware attacks that would previously have been beyond their skill level.

Another concern is that chatbots could be used to compose more realistic and convincing phishing attacks. Most users have learned that poorly written emails with grammar and punctuation mistakes are likely malicious. AI presents an opportunity for criminals to create more effective phishing campaigns in any language. A chatbot could also be used to get around email filtering by varying each phishing email sent, instead of creating a single template and sending out thousands of identical malicious emails.

Artificial intelligence presents new opportunities and threats in the cybersecurity landscape. ChatGPT received greater attention in the cybersecurity community because of its immediate popularity and the capabilities developers and journalists have been able to demonstrate in just over a month of use.


Posted by Charles Wright in Cybersecurity, Information Security, Recent Posts, Small Business