
How to Create a Generative AI Policy: Addressing Confidentiality, Accuracy, and Legal Risks

Published on September 23, 2024

ChatGPT is fun.

But there are trade-offs, particularly when it comes to confidentiality, privacy and cybersecurity. As an individual, you have to assess those risks for yourself. As a business, you have to account for your employees, their awareness of those risks and your company’s risk tolerance.

Basically, you need an approach, a process and a policy. You might already have data classification and risk assessment policies, but those alone won’t be enough, because Generative AI also introduces vendor risk.

First, let’s make an important distinction between the consumer tools released by OpenAI, Microsoft, Google and soon many others, and the APIs these same companies are releasing. I won’t get too technical, but using the APIs is just not as easy as using the consumer tools, such as ChatGPT, Bard, Bing Chat and DALL-E. So, you could have multiple policies!

Second, the terms of use differ depending on whether you use the consumer tool or the API. For example, the terms of use for ChatGPT state that OpenAI (the company behind ChatGPT) may use your inputs for training purposes. The terms for the APIs, however, say that OpenAI does not use your inputs for training.

This distinction makes it much more challenging to write a comprehensive policy because it requires so much explanation.

As we all know, people don’t like to read long policies.

Finally, you cannot look internally without looking externally. Are your vendors using Generative AI? In this guide, I will walk through how you should frame your Generative AI policy. For each policy point below, I will offer corresponding questions you should be asking your vendors.

  1. The Confidentiality Problem. If you want to write a policy on Generative AI, you have to address the elephant in the room first. Do you want to ban all use? Or are you going to allow use but ban employees from inputting any confidential and/or proprietary information? Is there any room for exceptions? Simply banning employees from inputting confidential or proprietary information may be easiest, both to administer and to understand. Start with your data classification policy, which likely defines three or four tiers of data, such as “public,” “internal use only,” “confidential” and “restricted.” If so, it should be fairly easy to cross-reference that policy in this ban. For example: “employees may not input any confidential, restricted or internal use only data into Generative AI tools. See our Data Classification policy.” Tip: don’t forget to also cross-reference your Acceptable Use policy.

For your vendors, you need to ask whether they have a Generative AI policy and, if so, whether it prevents employees from inputting confidential or proprietary information. Are there any exceptions to the rule?
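If you do allow use with a confidentiality carve-out, the written rule can be backed by a technical control. Below is a minimal, hypothetical sketch of such a control: a function that screens a prompt for your data classification markers before it leaves your network. The marker patterns and function names are illustrative assumptions, not a reference to any real product.

```python
import re

# Hypothetical markers drawn from a data classification policy.
# These patterns are illustrative assumptions, not a real DLP ruleset.
CLASSIFICATION_MARKERS = [
    r"\bconfidential\b",
    r"\brestricted\b",
    r"\binternal use only\b",
    r"\b\d{3}-\d{2}-\d{4}\b",  # e.g., a US Social Security number
]

def prompt_is_safe(prompt: str) -> bool:
    """Return True if the prompt contains no classification markers.

    A production control would be far more robust (named-entity detection,
    document fingerprinting); this only shows where the policy hook sits.
    """
    return not any(re.search(p, prompt, re.IGNORECASE) for p in CLASSIFICATION_MARKERS)

if __name__ == "__main__":
    print(prompt_is_safe("Summarize this public press release."))        # True
    print(prompt_is_safe("Review this CONFIDENTIAL merger agreement."))  # False
```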

  2. The Accuracy Problem. If you have read my posts on LinkedIn, then you know I often talk about “garbage in, garbage out” with respect to training A.I. I usually like to spin this in a more positive direction and use the phrase “good data in, good data out” instead, because we use A.I. at ClearOPS. When it comes to large language models, however, I think GIGO is more appropriate, because OpenAI and other companies took the approach of sweeping wide and then filtering out the bad stuff. At ClearOPS, we took the opposite approach, which was to be very selective in what we collected. I personally think this is a better way, but that’s my bias. Their approach means that the output is not always accurate. The A.I. combines information it scraped in unexpected, sometimes laughable, sometimes hurtful, ways. Plus, the tool exacerbates the disinformation already on the internet because it “learns” to make things up.

So your policy needs to address this as well. Assuming you tell your employees they may use Generative AI for non-confidential inputs, you also have to tell them that any output must be checked and verified for accuracy. This requirement may be particularly important if the output is code.

For your vendors, you need to ask how they verify the output of any Generative AI used to create code that goes into production. You likely already ask them about their code review process and whether they follow the OWASP Top 10. If not, now is a good time to add those questions to your vendor questionnaire.
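To make “verify the output” concrete for code, a policy can require that AI-generated code pass the same gates as any human-written code before it is merged. Here is a minimal sketch, assuming a Python codebase with pytest and pyflakes installed; the script and the particular checks are illustrative, not a prescribed toolchain.

```python
import subprocess
import sys

def verify_generated_code(path: str) -> bool:
    """Run the existing test suite and a static analyzer before
    AI-generated code in `path` is allowed into the main branch."""
    checks = [
        ["python", "-m", "pytest", "--quiet"],  # existing unit tests
        ["python", "-m", "pyflakes", path],     # basic static analysis
    ]
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"Check failed: {' '.join(cmd)}\n{result.stdout}{result.stderr}")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if verify_generated_code(sys.argv[1]) else 1)
```

The point is not these specific tools; it is that the policy names a verification step a reviewer can actually run.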

  3. The Legality Problem. The U.S. Copyright Office has, so far, determined that A.I.-generated output is not copyrightable. So, if you are allowing the use of Generative AI, your policy should include a provision on this important topic. There are a couple of examples I can give of why it matters. If the company goes through a transaction, such as an M&A deal or a financing, and due diligence reveals that your code is not copyrightable, you may suffer a loss to the value of the business, or worse (assuming the code is an important part of the consideration). Another problem is that someone might sue your company for copyright infringement if the output you used reproduced their work. Again, this is only a problem if you decide to allow the use of Generative AI in your policy, but if you do, you may want to consider prohibiting the use of output for commercial purposes.

For your vendors, you have already asked whether they have a Generative AI policy, which is the most important question for assessing this risk. I think the biggest risk your vendors pose here arises when you have asked them to do work for you that uses Generative AI and you need ownership of those deliverables. That is likely better handled as a contract requirement than as a vendor due diligence question.

In putting together your policy, there are opportunities for examples, FAQs and checks and balances. For example, if you permit the use of Generative AI in your business, perhaps you have a carve-out where confidential information may be used as input if the corresponding Generative AI license terms state that input data is not used for training. Or maybe you determine that, for example, you trust Microsoft and you have read their terms, so they are the exception to the rule (I am not expressing an opinion here).
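One way to make such carve-outs auditable is to keep an explicit register of the providers you have reviewed and what their terms permit. A minimal sketch; the tool names and fields are placeholders based on your own reading of each provider’s terms, not endorsements or legal conclusions.

```python
# Hypothetical register of Generative AI tools reviewed under the policy.
# Entries and flags are placeholders, not endorsements or legal conclusions.
APPROVED_TOOLS = {
    "example-consumer-chatbot": {"trains_on_inputs": True,  "confidential_ok": False},
    "example-enterprise-api":   {"trains_on_inputs": False, "confidential_ok": True},
}

def may_input_confidential(tool: str) -> bool:
    """Allow confidential input only for tools whose reviewed terms permit it."""
    entry = APPROVED_TOOLS.get(tool)
    return bool(entry and entry["confidential_ok"] and not entry["trains_on_inputs"])
```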

A tangent to this confidentiality problem is the security issue. OpenAI relies on open source code, and a bug in an open source library it uses already caused a security incident, in which some ChatGPT users saw the titles of other users’ chat histories. Therefore, you may want to ask your vendors which Generative AI providers are permitted. Note that most companies rely on open source software these days, so while that incident was bad, it is not uncommon. It is the response that matters, and OpenAI did show that it has an incident response process, which is likely already a question on your security questionnaire. Of course, adding questions about the use of open source and how the code is checked makes a lot of sense given this incident.

Maybe they used ChatGPT ;)

Finally, all policies have a consequences provision. Typically, it allows the company to warn or even fire the employee. I would submit that that is harsh in this context and encourage you to think outside the box. For example, if an employee inputs confidential information into ChatGPT, let them know about OpenAI’s opt-out process so they can fix it. Or, if you want to check how much of an output was generated by the AI for purposes of copyright protection, check out GPTZero.me. I always think empowering employees is better than scaring them.
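For the GPTZero check, here is a minimal sketch of what calling a detection API could look like. The endpoint path, header name and response field below are assumptions based on GPTZero’s public documentation at the time of writing; verify them against the current docs before relying on this.

```python
import json
import urllib.request

# Assumed endpoint -- confirm against GPTZero's current API documentation.
API_URL = "https://api.gptzero.me/v2/predict/text"

def ai_generated_probability(text: str, api_key: str) -> float:
    """Ask the detection service how likely `text` is to be AI-generated."""
    payload = json.dumps({"document": text}).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json", "x-api-key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed response field: probability the document is entirely AI-generated.
    return body["documents"][0]["completely_generated_prob"]
```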

My parting words: as with any new technology, keep these policies short, concise, easy to read and not so scary that they stymie innovation.

I, for one, love ChatGPT and we are leveraging the technology at ClearOPS. Wouldn’t it make security questionnaires and vendor management, dare I say, fun?

You’re the best,

Caroline

P.S. ClearOPS is a tech startup on a mission to bridge the gap between privacy and security, empowering all businesses to build responsibly. We are intent on building an ethical A.I. company. If you support our mission, please subscribe and share.

