Things to consider before inviting AI into your organization

By Brent Zomerlei, BS in Computer Science

Artificial Intelligence (AI) tools for business are becoming commonplace. AI is poised to be the next big Information Technology (IT) game changer; considering the possibilities, it may ultimately have a larger impact than mobile computing and devices like the iPhone. AI, specifically generative AI, dominates IT advertising and marketing, and easy access to low-cost systems is generating strong interest among everyday technology users. Here are some practical considerations for using AI systems safely in your company.

What can you do with generative AI tools today in your organization? Chatbots built on generative AI can provide simple customer support and answer frequently asked questions. Generative AI can read meeting transcripts and produce summaries, generate marketing content, draft emails, and assist with data entry tasks. Language translation and writing programming code are additional areas where AI can be a competent assistant.

Generative AI tools like ChatGPT or Google’s Bard belong to a sub-group called large language models (LLMs). Put simply, an LLM is a system that can read a question, understand its context, and formulate an answer based on the enormous body of books, articles, and websites it was trained on.

The key thing to understand about large language model AI systems is that the developers will use your questions, also known as prompts, to improve the performance of their models. You should have no expectation of privacy when using these public, cloud-based tools, and you and your employees must keep this in mind before using them. You can learn the specifics of a tool’s privacy policy by examining its end-user license agreement (EULA).

Employees in your company may already be using AI tools without explicit permission from management. When using ChatGPT or similar tools for company work, users must be careful not to expose company secrets and confidential data. According to data from Cyberhaven, as of June 2023, 11 percent of employees had used ChatGPT and 9 percent had pasted company information into it; nearly 5 percent of that data was estimated to be confidential.1

There have also been examples of bugs in these systems exposing user data to other users. On March 21, 2023, ChatGPT was temporarily shut down to fix a problem that linked prior conversations to the wrong users, potentially exposing confidential data to the wrong person.2 In that incident, even the basic expectation of privacy between accounts was broken. Do not share information that you would not share in a public forum.

On April 6, 2023, Samsung discovered that employees had pasted source code into ChatGPT to debug it and had uploaded transcripts of meetings containing confidential data, only a few weeks after Samsung lifted a ban on ChatGPT. Samsung subsequently enacted procedures limiting how much data could be sent, to about 750 words.

Using an LLM within your business environment carries risk; however, there are numerous ways to mitigate it.

Blanket Ban. Employers can establish policies and implement procedures to prevent employees from accessing these sites or downloading software on company assets.

Access Controls. If your company has implemented robust protocols to limit access to sensitive data, consider expanding these controls to include generative AI systems.

Enterprise License. Companies that choose to use these tools can seek special license agreements that limit what the vendor can do with the inputs it receives. Companies can have all inputs excluded from training entirely, or allow their data to be used to improve the model only for the company itself, excluding all other parties.

Offline Systems. Various LLM systems can be downloaded and run on local computers that never connect back to the Internet. This option provides the most protection for your company data; however, it may not be viable for smaller companies due to the high technical requirements for installation and maintenance.
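To make this concrete, here is a minimal sketch of querying a locally stored model with the open-source Hugging Face transformers library, assuming the model files were downloaded in advance and copied to the offline machine. The model path and prompt are placeholders, not a recommendation of any particular model:

```python
# Minimal sketch: querying a locally stored model so prompts never leave the machine.
# Assumes the model and tokenizer files were downloaded ahead of time (e.g., on a
# staging machine) and copied to this offline host. The path is illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="./models/local-llm",  # local directory, not a download from the hub
)

response = generator(
    "Summarize the key risks of sharing confidential data with public AI tools.",
    max_new_tokens=200,
)
print(response[0]["generated_text"])
```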

Sensitive Data. If your company has PCI or HIPAA security considerations, or handles protected health information (PHI), you need to be extremely cautious about how you handle inputs into an LLM. You either need to completely scrub the source data to obfuscate PHI, or you need to use an offline or private model. Additionally, you should seek a HIPAA Business Associate Agreement with the vendor to protect any PHI.
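As a rough illustration of what "scrubbing" can look like, the Python sketch below redacts a few obvious identifier patterns. Real HIPAA de-identification covers many more identifier categories (names, dates, medical record numbers, and so on) and requires dedicated tooling plus compliance review; treat this as a demonstration, not a compliant solution:

```python
import re

# Illustrative only: pattern-based redaction of a few obvious identifiers.
# Real HIPAA de-identification covers 18 identifier categories and needs
# dedicated tooling and compliance review; this is not a compliant solution.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

# Note: the patient's name is NOT caught by these simple patterns,
# which is exactly why pattern matching alone is insufficient.
print(scrub("Patient Jane Doe, SSN 123-45-6789, phone 555-867-5309."))
```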

Employee Awareness. Educating your employees and raising their awareness of trade secrets and sensitive information is key to preventing leaks into public view. Companies should consider adding specific language to their existing Acceptable Use Policy and updating employee handbooks to reflect the use of these tools.

AI tools are improving at phenomenal rates. They can artfully develop written and graphical content, but caution should be exercised when using content generated by large language models.

Systems like ChatGPT can produce output that is not 100 percent accurate. There is a phenomenon with these systems called AI hallucination: the AI generates incorrect information but presents it as fact, and might even cite a made-up source.3 This can happen when the model does not understand the prompt, or when its training data does not contain the required information. There are tactics you can employ to reduce hallucinations, such as rephrasing prompts and framing them to limit the possible outcomes.
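For example, one common framing tactic is to restrict the model to source material you supply and give it an explicit way to say it does not know. The wording below is illustrative only:

```python
# Illustrative prompt framing: constrain the task to supplied source text and
# give the model an explicit "way out" so it is less tempted to invent an answer.
policy_text = "...paste the relevant policy excerpt here..."

vague_prompt = "Tell me about our Q3 compliance obligations."

framed_prompt = (
    "Using only the policy text below, list our Q3 compliance obligations "
    "as bullet points. If the text does not contain the answer, reply "
    "exactly: 'The provided text does not contain this information.'\n\n"
    f"Policy text:\n{policy_text}"
)
```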

Always review the output of an LLM, especially if it will be reused elsewhere in your company. Treat AI output as a rough draft: verify every fact it asserts and rewrite it in your own voice and style.

Generative AI systems have the potential to be a strong tool within business environments. If your company does not have a policy related to using tools like ChatGPT or Bard, create one and educate your users. It is important that your employees understand how to protect sensitive data, whether it is company secrets or regulated data.


1Cole, C. (2023, June 18). 11% of data employees paste into ChatGPT is confidential. Cyberhaven. https://www.cyberhaven.com/blog/4-2-of-workers-have-pasted-company-data-into-chatgpt/.
2Mihalcik, C. (2023, March 24). ChatGPT Bug Exposed Some Subscribers’ Payment Info. CNET. https://www.cnet.com/tech/services-and-software/chatgpt-bug-exposed-some-subscribers-payment-info/.
3Brodkin, J. (2023, May 31). Federal judge: No AI in my courtroom unless a human verifies its accuracy. Ars Technica. https://arstechnica.com/tech-policy/2023/05/federal-judge-no-ai-in-my-courtroom-unless-a-human-verifies-its-accuracy/.

The Human Element of IT Security

By Brent Zomerlei

Information Technology (IT) security is a never-ending race to keep pace with whatever the “Bad Guys” are trying to exploit next. Your IT team needs to be vigilant for all kinds of threats. Some of the most common include:

  • Malware – Software intended to cause havoc or harm systems, such as viruses.
  • Ransomware – Also known as “crypto locker” software, this is malware that intentionally locks an organization’s files, making them useless until you pay an extortion fee to unlock them. It can be weeks or months from the initial infection to the date the files lock, which can render even your backups useless because they contain the infection too. (Note: We advise that you never pay a ransom to unlock your files, as this just encourages more attacks globally.)
  • Faked (or “Spoofed”) Websites/Emails – Financial fraud such as bogus invoices in emails made to appear internal, asking Accounting to pay a false vendor; or emails and websites designed to fool you into thinking a legitimate action is needed so they can gather your information (a.k.a., “phishing”).

Most of us are familiar with the standard ways to protect an organization from these attacks. The strongest protection layers technologies such as firewalls, anti-virus solutions, and newer classes of comprehensive endpoint threat analysis. It is wise to include DNS filtering as well as email filter rules in this set of protection tools.

However, relying on technology alone will not completely protect you; it is not enough to stop every exploit. This is because the “Bad Guys” know how to manipulate human behavior and get people to do things they might not normally do. Therefore, any mitigation strategy must account for the greatest vulnerability:

The Human Element.

It starts with creating skeptics at your workplace. Train your users to spot these scams and to distrust all unsolicited incoming messaging, regardless of the method. Scammers will use email (phishing), text messages, voice calls (vishing), and even social media for their exploits; no communication method is safe. Here are common traits that could indicate a scam (a toy scoring sketch follows the list):

  • They arrive unexpectedly, such as a sudden request for payment to a company or for a service you are not familiar with.
  • They ask the receiver to do something unusual or outside of normal procedure.
  • They create a sense of urgency, often threatening penalties unless you act immediately.
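As a toy illustration only (real filtering belongs in your mail gateway and in trained people, not a script), the three traits above can be expressed as a crude red-flag score:

```python
# Toy illustration of the three red flags above as a crude score.
# Real email filtering belongs in your mail gateway, not in a script like this.
URGENCY_WORDS = {"immediately", "urgent", "penalty", "final notice", "act now"}

def red_flag_score(sender_known: bool, unusual_request: bool, body: str) -> int:
    """Count how many of the three scam traits a message exhibits."""
    score = 0
    if not sender_known:   # arrives unexpectedly / unfamiliar sender
        score += 1
    if unusual_request:    # asks for something outside normal procedure
        score += 1
    body_lower = body.lower()
    if any(word in body_lower for word in URGENCY_WORDS):
        score += 1         # pressure to act immediately
    return score

print(red_flag_score(False, True, "Pay this invoice immediately to avoid a penalty."))  # -> 3
```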

While I have been discussing this in the context of your organization, these traits also apply to scams directed at individuals in their homes. Your training regimen should make sure people apply these skills in all aspects of their lives, not just at work.

For training, I recommend using a service. Many companies offer user training, including some innovative firms that first evaluate your users and then provide training adapted to the results. The best practice is both to evaluate your users regularly and to offer recurring training. If your organization performs third-party security audits (as is required of HIPAA-covered entities and companies that must comply with SOC or SOX), you should also ask the vendor to test your users with phishing and/or vishing attempts.

The last thing I want to discuss is how to best protect your firm if, despite all the efforts above, your company becomes a victim of one of these scams.

Number One: Invest in cyber and fraud insurance to protect your company. Most companies are aware of this insurance and have an active policy. However, given the huge rise in ransomware and crypto-locker-style attacks, premiums are rising fast to account for the new threats. It would be wise to review your coverage with your accounting or finance department.

Number Two: Make sure your Disaster Recovery (DR) plan is up to the job. Are your backups immutable, meaning they cannot be changed after they are saved? Having true offline and offsite tape storage is one way to achieve this, but it can also be done with backup software and appliances that use AWS or Azure storage for cloud backup. Testing and validating the restoration process must be something your IT department practices, and this validation goes beyond the occasional simple file recovery. Has your IT team tested its ability to perform a “bare metal” restore of major systems? If not, ask them to!
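As one concrete example, on AWS immutability can be enforced with S3 Object Lock, which must be enabled at bucket creation. The boto3 sketch below assumes AWS credentials are already configured; the bucket name and file path are placeholders:

```python
import boto3
from datetime import datetime, timedelta, timezone

# Sketch: enforcing backup immutability with S3 Object Lock.
# Bucket name and file path are placeholders; assumes AWS credentials
# are configured in the environment.
s3 = boto3.client("s3")

# Object Lock must be enabled when the bucket is created
# (shown for us-east-1; other regions also need a LocationConstraint).
s3.create_bucket(
    Bucket="example-backup-bucket",
    ObjectLockEnabledForBucket=True,
)

# Write a backup object that cannot be altered or deleted for 30 days,
# even by the account root user (COMPLIANCE mode).
s3.put_object(
    Bucket="example-backup-bucket",
    Key="backups/2023-09-01-full.tar.gz",
    Body=open("2023-09-01-full.tar.gz", "rb"),  # local backup archive (placeholder)
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```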

Number Three: Embed a communications plan in your Disaster Recovery/Business Continuity Plan. It is important to plan how to communicate any breach internally as well as externally; depending on your industry, there may be disclosure requirements that need to be followed. Engage your legal and communications departments during planning and testing sessions.

Disaster Recovery and Business Continuity Planning is a subject your IT team must be able to articulate to your organization’s leadership, along with a documented process, tested annually, for recovering and continuing normal business operations. Ideally, they will have a playbook or some other documented process to follow in case of a major incident. While the IT team cannot account for every contingency, you want to minimize the need to problem-solve during the incident.

By approaching IT security as a collection of systems and understanding how scammers exploit “The Human Element,” you can build resilient and recoverable systems to protect your organization.