Lucy Huang, Product Manager at FullStory, gave this presentation at the Product-Led Festival.
I’m Lucy Huang, and I'm a Product Manager at FullStory, your go-to spot for understanding your digital experience.
On a personal note, my career has been focused a lot on health and safety, privacy, integrity, risk, you name it. Basically, I spend a lot of my time thinking about all the things that could go wrong, prioritizing what to work on first, and shaping the policies and procedures within organizations to manage that risk.
Here's a primer on what we're going to cover today: managing risk in AI technologies. All opinions are my own, except where I pull in headlines to highlight what's going on in the industry. It's a very fast-changing space.
It's probably been hard not to notice all the tremendous advances in AI recently, especially machine learning and generative models.
Today, we'll talk about those advances in machine learning, and what AI governance frameworks you can apply to manage your risk, user privacy, and ethics.
So to kick it off, here are two anti-goals that I don't want you to come away with from this.
Number one, I'm not here to fearmonger. But that doesn't mean there isn’t a very real risk that we’re responsible for as the people shaping products and messaging them to the market and our customers.
Secondly, I'm not a machine learning engineer or a lawyer. But I am here to talk about the risks of AI, and that goes to show that you too can start to contribute to your organization's policies and procedures governing AI.
- The emergence of generative models
- Why generative AI is like riding in a car
- The risks and challenges associated with generative AI
- The AI risk management framework
- So how will these actions translate?
- Prioritize trust in product management for AI
The emergence of generative models
We'll start with a little bit of history. In 2022, we saw a tremendous growth spurt in generative models, and along with that, surprisingly open distribution and access.
As a high-level primer on generative models: these are different from the discriminative models that were more widely used in data science previously.
Discriminative models are a class of supervised machine learning models that make predictions by estimating conditional probability. We won't get into the math too much, but the TL;DR is that they can't generate new samples. It's more of an ‘if this, then that’ logic, used for classification tasks where we use x features to classify an example into a particular class y.
One example is email spam: a simple yes-or-no label produced by the email inspector you're building.
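To make that ‘if this, then that’ classification idea concrete, here's a minimal sketch of a discriminative spam classifier. It's a plain-Python naive Bayes model, and all the training emails and words are made up for illustration: it estimates P(word | class) from labeled samples and predicts whichever class scores higher.

```python
from collections import Counter
import math

# Toy labeled training set (all words are made-up illustration data):
# each item is (list of words in the email, is_spam).
training = [
    (["win", "free", "prize"], True),
    (["free", "money", "now"], True),
    (["meeting", "agenda", "notes"], False),
    (["project", "status", "meeting"], False),
]

def train(data):
    """Count word occurrences per class to estimate P(word | class)."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for words, label in data:
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def is_spam(words, counts, totals):
    """Naive Bayes: score each class by log P(class) + sum of log P(word | class),
    with add-one smoothing, and return whether the spam class scores higher."""
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        score = math.log(0.5)  # uniform prior: half the training set is spam
        for w in words:
            p = (counts[label][w] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return scores[True] > scores[False]

counts, totals = train(training)
print(is_spam(["free", "prize", "now"], counts, totals))  # True: spam-like words
print(is_spam(["meeting", "notes"], counts, totals))      # False: work-like words
```

Note that this model can only assign labels to emails it's given; it has no way to produce a new email, which is exactly the limitation generative models remove.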
Now we've moved into the era of generative models, which are a class of algorithms that make predictions by modeling the joint distribution. There are more steps involved here, combining the probability of the class with the estimated distributions. But again, the TL;DR: they take input training samples and learn a model that represents their distribution.
Taking that email spam example again, generative models can be used over time to generate emails that fool the email inspector. The twist is that, over time, the generative model can gradually learn to fool a discriminator, like the yes-or-no spam inspector we've talked about.
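That fooling dynamic can be sketched numerically. This is a toy, not a real GAN, and every number in it is invented for illustration: genuine ‘emails’ are represented as scores clustered near 10, a fixed discriminator flags anything scoring 8 or below as fake, and a generator that starts near 0 shifts its output toward the real samples until its fakes routinely pass the check.

```python
import random

random.seed(0)

REAL_MEAN = 10.0   # genuine "emails" cluster around this score (invented)
gen_mean = 0.0     # the generator starts by producing obvious fakes
LR = 0.5           # learning rate for the generator's updates

def discriminator(x):
    """A fixed yes/no spam inspector: anything scoring above 8 looks genuine."""
    return x > 8.0

fooled_early, fooled_late = 0, 0
for step in range(100):
    fake = gen_mean + random.uniform(-1, 1)
    real = REAL_MEAN + random.uniform(-1, 1)
    # Training signal: nudge the generator's distribution toward the real data.
    gen_mean += LR * (real - fake)
    if discriminator(fake):  # did this fake pass as genuine?
        if step < 10:
            fooled_early += 1
        elif step >= 90:
            fooled_late += 1

# Early on the discriminator catches almost every fake; by the end,
# the generator's samples routinely pass as genuine.
print(fooled_early, fooled_late)
```

In a real GAN the generator never sees the real data directly (it learns from the discriminator's gradients), but the endpoint is the same: a generator whose outputs the discriminator can no longer reliably reject.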
And that's what we're seeing in the most recent advancements. As one specific flavor of generative models, we have large language models (LLMs) that use deep neural networks, such as ChatGPT.
We also have text-to-image models such as DALL·E that combine computer vision and natural language processing. We've even seen text-to-video projects come out of Meta, which takes things a step further than text-to-image.
There's a lot of really interesting technology out there that I’d urge you to try out.
Why generative AI is like riding in a car
Now we'll go into one of the initial risks. One of the risks I'm going to talk about is copyright.
Earlier, I mentioned that the distribution of these technologies was surprisingly open. We'll take the analogy of cars first of all because I'm assuming that everyone has driven or ridden in a car at some point in their life.
Everyone has to get a driver's license to make sure that they're qualified to drive. You have to understand the policies and procedures of the road. There are also different types of licenses to show that you have knowledge of a specific vehicle.
In addition, we have seatbelts and speed limits to protect ourselves and others from harm. There's also signage on the road, so that provides notice and transparency.
And with the democratization of generative AI, we're actually giving these cars to a wider audience than ever before. But here, the driver's test is optional.
Take ChatGPT, for example. How many folks have tested out the open beta there? And if you're familiar with Midjourney, another text-to-image service, it's actually available via a Discord server bot with millions of users.
Personally, I'm all for the wider spread use of AI and access by different audiences, but we need to recognize that there are guidelines required. Where are the seat belts and speed limits? And who's volunteering to use them for generative AI?
There isn't a clear set of guidelines today for what generative AI should be used for, how it should be used, and how that use can be measured. And honestly, this isn't much of a surprise, given that the US is one of the largest countries without significant federal data privacy laws.
To take it back a little bit, most organizations found that the onset of GDPR actually helped them build a clearer, more distinct structure organized around managing consumer transparency and privacy.
As frustrating as it probably is for us to see all those cookie banners today, we've still raised the tide for all ships and humans on them with that set of standards.
A Deloitte survey found that 44% of consumers felt that organizations cared more about their privacy after GDPR came into force. And even now, Europe is leading the way with the proposed AI Act, the first regulatory framework for AI governance.
So I think today we're seeing that folks are being given cars without a seatbelt and being told to drive and explore generative AI. With this great power comes great responsibility, and you and your organization should include that within your AI and product strategy.
The risks and challenges associated with generative AI
Now we'll go into the copyright piece that I touched on a little bit earlier.
Here is a headline from the New York Times:
‘An AI-Generated Picture Won an Art Prize. Artists aren't happy.’
So here, a digital artist entered an art contest in Colorado in the digital arts category, took first place, and won $300. They used Midjourney, the Discord-based text-to-image service I mentioned before.