How to Start an AI Governance Program

Published on
September 23, 2024

Introduction:

I'm sorry to tell you this, but you are starting your AI Governance program wrong. This is not just a catchy opening. It is actually true. Let's dive into it.

Congratulations! You Get to Handle AI Governance:

Let me guess: you used to handle only legal, privacy, or cybersecurity for your organization. Then ChatGPT happened, and the company looked around and pointed at you to take on the new challenge of "AI Governance." You may be ecstatic about this, or you may be counting the days until the company hires someone to do it full-time instead of piling it onto your already heavy workload. Either way, while you step back to build out a program with a process and a stack of policies, guess what your employees are doing?

Yup, you guessed it, they are using GenAI tools, like ChatGPT or Perplexity.ai.

It's Worse than You Think:

While your company was looking around for someone to take charge of AI Governance, your employees were using tools that were already company-approved and have since added GenAI capabilities.

What tools does your marketing department use? How about sales? I am going to guess they use a lot of them: a sales CRM, a marketing hub, and so on. Most of those providers have launched GenAI features in the last year. So while you are still talking to management about their concerns with adopting GenAI, your employees are already using it, and they are using it with the company's permission.

And those vendors aren't rolling out those capabilities in the most data-safe way. Did you see the controversy when LinkedIn updated its privacy settings so that users had to opt out of letting LinkedIn use their data for model training?

What's the Big Deal Anyway:

So why is it a problem to use data for model training, and why should your business care that LinkedIn changed permissions for its users? The problem with using data for training, re-training, or fine-tuning is that the model does not forget its training data. You can feed it more information, and you can instruct it not to regurgitate the training data word for word, but unlike a human, it does not forget.

Samsung learned this the hard way when its engineers uploaded code into the free version of ChatGPT, only to realize their proprietary source code could now be used to train the model and potentially surface for anyone else using ChatGPT. Yikes.

Imagine an employee posts something on LinkedIn that violates your social media policy. You can ask LinkedIn to take the post down, but with the new setting, LinkedIn may have already fed that post, along with the employee's name, into its model. You can't be sure the take-down will do anything to protect your brand: the content is now captured in a very large dataset, possibly forever.

Conclusion:

For these reasons, I recommend starting your AI Governance program with vendor management. I will cover how to build that vendor management process in the next article, but please, run, don't walk, to start your due diligence on your current vendors' use of AI.


Take Control of Your AI Today: Contact Us!

Don't lose control of your proprietary data because you failed to implement governance.