It is hard to deny that the arrival of ChatGPT and other natural language AI models has changed the world. Employees are using it to write emails and documents, students are using it to help with homework, and parents are creating customized bedtime stories in seconds. It is a tool that can vastly speed up tasks and increase productivity if used correctly. However, the business world has been rather polarized on its adoption.
Some companies have fully embraced it because of the efficiency with which their employees can complete menial tasks, the speed at which they can research answers to complex questions, and the second opinion it offers when analyzing complex data sets. Other companies have chosen a different approach for myriad reasons, the primary one being fear of compromised company information. Yet because the tool is so easy to use and saves so much time, even employees who have been told they can’t use it are likely using it in secret. According to one study, 70% of employees using ChatGPT at work are not telling their bosses. Another study found that 10.8% of employees have used ChatGPT at work, 8.6% have pasted company data into it, and 4.7% have pasted confidential data. The real numbers are likely much higher.
The Risk to Private Information
Information fed into these publicly available products is stored so that the product can continuously draw upon it and provide more accurate answers. This also means that if private information is fed in, it can potentially be retrieved by anyone with the ability and knowledge to do so. Cyber criminals have built, and are constantly improving, techniques to mine private information from these products for exploitation. They are using that data to enhance phishing campaigns, social engineering attacks, and direct breaches, and future scams won’t carry the old red flags, such as telltale spelling and grammar mistakes, that we have come to recognize from cyber training or experience. So, what can you do about it?
How to Protect Your Company
- Accept that employees are using it and will likely continue to use it. According to business.com, only 17% of workers said their employers had communicated clear AI policies. Even if use is against policy, it is still worth training employees on the responsible use of AI tools. Your customers are not going to care that you forbade its use when their information is leaked by an untrained employee who went against your policy.
- Continue to train your employees using the most current, continuously updated cyber awareness training products. Many cyber training vendors are still teaching methods that were relevant 6-12 months ago, but criminals have already adapted. One study that examined the diminishing impact of phishing awareness training over time concluded that a four-month cycle may be the sweet spot for maintaining awareness. If your employees are only training once per year, you may be leaving yourself vulnerable.
- Absolutely never share private or proprietary information with ChatGPT or other public products. Enterprise products such as Microsoft Copilot, which link to your systems and state that your company information remains secure and private, are great alternatives for your employees. A lightweight technical guardrail can also back this rule up; see the sketch after this list.
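For companies that want a technical backstop in addition to policy, one option is to screen text for obviously sensitive patterns before it ever reaches a public AI tool. The sketch below is a minimal illustration in Python, not a substitute for a real data-loss-prevention product; the patterns, placeholder labels, and the "Project" codename rule are all illustrative assumptions rather than a vetted rule set.

```python
import re

# A minimal sketch of a pre-submission redaction filter. These patterns are
# illustrative assumptions, not a complete DLP rule set; a real deployment
# would rely on a vetted data-loss-prevention tool maintained by security.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    # Hypothetical rule for internal project code names your company flags.
    "PROJECT_CODENAME": re.compile(r"\bProject\s+[A-Z][a-z]+\b"),
}

def redact(text: str) -> str:
    """Replace each match of a sensitive pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    prompt = (
        "Summarize this note: contact jane.doe@example.com about "
        "Project Falcon; her SSN is 123-45-6789."
    )
    print(redact(prompt))
    # -> Summarize this note: contact [REDACTED EMAIL] about
    #    [REDACTED PROJECT_CODENAME]; her SSN is [REDACTED SSN].
```

In practice, a filter like this would sit in a browser extension, proxy, or internal chat wrapper, and the pattern list would be managed by your security team rather than hard-coded.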
ChatGPT is here, and it is likely here to stay. Many companies are already using it to gain an edge over their competitors, and whether you are for or against it, training your employees on safe AI practices could save you a great deal of trouble down the road.