Chirag Dayani, Senior Product Manager at Microsoft, gave this presentation at the Product-Led Summit.
My name is Chirag Dayani, and I'm a Senior Product Manager at Microsoft. I work in AI and ML innovation in the identity and access management space, which is part of the Microsoft Security division.
I've worked in the industry for over nine years. I started my career as a software engineer, transitioned to consulting, and finally moved into product management. Along the way, I've worked with companies like Deloitte, Accenture, ServiceNow, and now Microsoft.
Today, we'll be talking about how we can master AI and ML in product-led growth. We've been hearing more and more about data, metrics, and how we can trust the data, so we'll cover those topics as well.
- AI is everywhere
- AI vs. machine learning vs. deep learning
- The application of AI in self-driving cars
- Generative AI
- The importance of responsible AI
- What’s it like being a PM?
- The required skill set for an AI PM
- Is ML the right solution to my problem?
- The ML product lifecycle
- Key takeaways
AI is everywhere
Every day, day in and day out, we're using AI in some way or another. It could be a recommendation engine on Netflix, Spotify, or YouTube, or a voice assistant platform like Google Home or Amazon Alexa. Or if you're ordering something from Amazon, your order is being fulfilled by Amazon's robots, which actually use image recognition to pick your items.
In some situations, you might not even know that you're using AI, like when you’re doing a transaction with your bank and you get a notification that says, ‘Hey, did you just make this transaction?’ That's where AI’s doing the work on the back end using a fraud detection algorithm.
And how can we forget about chatbots? We all talk to customer support about issues with our bank accounts or our e-commerce orders. We use chatbots day in and day out, which are also based on a supervised machine learning algorithm on the back end.
One of my favorite examples is self-driving cars. They use AI very heavily, incorporating cameras and computer vision to work out where the car can drive with less effort from the driver, and to eventually reach the point where you can choose a fully self-driving option.
AI vs. machine learning vs. deep learning
I’ll now talk about the definitions of AI and ML so that we know exactly what these mean.
AI is the ability of machines to mimic human intelligence, and we leverage the historical data we already have to improve how we predict the future.
Machine learning and deep learning are subsets of AI. Machine learning is specifically about creating mechanisms that automatically learn and improve from experience. Deep learning applies machine learning with more complex algorithms and training models, typically deep neural networks.
The application of AI in self-driving cars
Now, I'll walk through an application of AI in self-driving cars. I've chosen the example of Mitsubishi Electric Car Labs, which set out to demonstrate the applicability of AI in self-driving cars.
They're trying to create a system that goes beyond ordinary car navigation: it uses data to give you instructions almost as if a co-passenger were sitting next to you.
If you look at the first image below, we're extracting a lot of data from the image using image recognition. There's a car, a tree, an approaching traffic signal, a building, and cars waiting at an intersection, so the system detects that an intersection is coming up.
If you look at the second image below, it tells you which direction the car is going in. We're using some physics concepts, like vectors, to understand the speed and direction of the car moving in front of you, so you can get that guided experience. For example, 'Take a left turn behind the black car,' or, 'Take a left turn where there's a billboard on a building on the left side.'
It may also caution you. For example, ‘While taking a left turn, watch out for the incoming bus on the other side of the road.’
In the third image below, you can see how we're labeling the data: everything we collect is given a label describing how to detect that object.
It detects that there's a person walking and a bike coming from the other side. It can even give specific details about objects, for example, that it's a blue bike. So you can go very deep by looking at the image and trying to understand what it means. This is really about training your model on a large corpus of data so that the machine learning gives the right outputs.
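To make the labeling idea concrete, here's a minimal sketch of what a single annotated frame in such a training corpus might look like. The field names and bounding-box format are illustrative assumptions, not Mitsubishi Electric's actual schema.

```python
# A hypothetical annotation for one camera frame. Box format is assumed to be
# pixel coordinates [x, y, width, height]; none of this is the real schema.
labeled_frame = {
    "image": "intersection_0421.jpg",
    "objects": [
        {"label": "person",         "attributes": {"action": "walking"}, "bbox": [412, 220, 38, 96]},
        {"label": "bike",           "attributes": {"color": "blue"},     "bbox": [530, 240, 60, 40]},
        {"label": "traffic_signal", "attributes": {"state": "green"},    "bbox": [300, 80, 20, 48]},
        {"label": "car",            "attributes": {"color": "black"},    "bbox": [150, 260, 120, 80]},
    ],
}
```

Thousands of frames annotated like this form the large corpus the model trains on: the labels are the ground truth the detector learns to reproduce.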
Generative AI
Gartner has defined a hype cycle for AI and groups the field into four main categories: data-centric AI, model-centric AI, application-centric AI, and human-centric AI.
In this cycle, they place us at the peak of inflated expectations, with generative AI and responsible AI as the categories currently at the very top of the hype curve.
Generative AI has become very popular since OpenAI released ChatGPT. It processes huge amounts of data on the back end, and it's now powered by GPT-4, which launched earlier this year.
ChatGPT uses the capabilities of large language models, which take chatbots to the next level. With a traditional chatbot, you're using supervised machine learning: you train on a labeled corpus of data and get curated responses on top of it.
With large language models, on the other hand, you get contextual summaries and responses, and models like GPT-4 can work across text and images.
So it's using huge amounts of data, with all of that processing happening on the back end. It generates a prediction of what you're going to ask and draws on how other users may have asked the same question, and on information gathered from across the internet.
ChatGPT is a great example of this: it reached one million users in just five days. Its entire concept was built around AI, whereas other big tech competitors built traditional products without AI at their core, and it took them years to reach the same milestone.
The importance of responsible AI
AI needs to be trusted. Humans want AI to have the right information and to present it in a way they can understand.
BCG and MIT recently published research showing that AI systems can give false responses, which is why we need to build some kind of responsible AI framework.
A responsible AI framework is a six-point framework that prompts us to ask: Are we treating the data fairly and considering every aspect of it? Are we getting inclusive information from different sources? Are we handling the data reliably and safely so that when we capture it, we produce the right output?
Are we using it in a way that's secure and private, so you don't have to worry about data leakage? And is the data understandable?
There's a huge issue with data being fed into a system and not producing the right output: bad data leads to bad output.
And can users rely on the data? Is it accountable or not?
BCG and MIT found that by using this responsible AI framework, companies can reduce AI system failures over time; 28% of companies saw failures decrease. So when you're building your AI systems, treat responsible AI as a core set of capabilities to consider.
You've probably seen a lot of press about ChatGPT over the last few weeks and months. Frankly, it's creating news all over the place. And people are trying to understand its validity, and whether they can trust its output, by having ChatGPT take competitive exams.
ChatGPT does well on these, but there are some nuances. It's not only giving people information to help them do things better; it's also impacting how humans build their own knowledge.
Universities and schools are concerned that students may not end up getting the right education, or that they might not even rely on their own thoughts and knowledge and will just go straight to ChatGPT for answers. So government agencies and schools are now implementing the responsible AI framework and seeing how they can restrict AI for certain categories of users.
What’s it like being a PM?
Now, let's switch gears and talk about what it’s like being a product manager or PM. Being a PM is hard, but it’s a great job.
We have different definitions of a PM. Some people say it's the CEO of a product, and some people say it's something completely different.
PMs work with cross-functional teams: sales, marketing, UX designers, software engineers, legal, customer support, and more. They have to get buy-in from those different teams and align everyone in one place.
But add more complexity on top of that, with more data and more research going on, more people, plus ML and data engineers, and that's when the magic starts.
Below is a comparison of how traditional product development differs from machine learning product development. You'll see how the success criteria differ: in ML products, we're looking at performance metrics, accuracy, precision, and recall. Are you providing the right coverage? Are you getting accurate information?
With a regular product, you're looking at the usage and adoption frameworks.
In development, you're also working with data scientists and machine learning engineers in addition to the regular software developers. And during development, there may be situations where the data itself blocks your progress.
So it can be a very exploratory, non-incremental space: you capture the data, analyze it with the data scientists to find the patterns, and then work with the engineers to productize it.
There may be some challenges in how you want to use that data or scale it further because there are some edge cases that always pop up, and you need to be prepared for it.
And then the quality depends not only on the software code but also on the data, which is the key aspect of an ML solution.
And if you combine all of this, you'll see how hard it is to estimate a dev timeline there.
The required skill set for an AI PM
So now we know what a PM does: everything from market research to setting the product vision, strategy, and roadmap. But in addition to that, an AI product manager needs some extra skills.

For example, getting the problem definition right. With these data-driven solutions, we often get lost debating whether we have the right solution, without checking whether we're solving the right problem in the first place.
So we need to be very customer-centric. We need the ability to read the data, understand it, and see whether it makes sense. Can we draw meaningful inferences from that data or not?
And because this involves a lot of data, and we're analyzing patterns in a particular customer's data, new findings may emerge when you scale further. You need to understand whether you can manage the data at that scale.
In some situations, the ML approach might not work out at all. So do you have a fallback plan? Can you deliver some value with regular software development work rather than no value? Those kinds of skills are really necessary for an AI PM.
Is ML the right solution to my problem?
So do we need an AI/ML solution or not?
I want you all to think about some evaluation criteria. Do I actually need AI/ML, or am I just using it for the sake of it because I've been talking to a lot of customers, and they’re very passionate about getting AI and ML in their product capabilities?
We shouldn’t do AI just for the sake of it. It should be meaningful, and it should be value-added because it's a huge investment.
So first, I'd ask, "Do I have the right team and the capabilities to work on this?" And then, "Do I have the right data, and will it be continuously available?" Because bad data means bad output, good data means good output, and continuous data is needed to keep training those models over time.
Am I going to get incremental value from this ML solution? Am I going to get X-times gains over the regular solution or not? And does the solution require personalization? Will it impact a user's journey in a way that the data feels personal to them, rather than generic information I'm showing everyone?
And are my users okay with not knowing the details of how the feature functions? Oftentimes, ML solutions are based on complex mathematical and statistical models which may be hard to explain to a user. And we can only do so much to explain that in layman's terms so that customers start trusting our solution.
Trust me, this is the hardest part. When we were deploying an ML solution, a lot of customers wanted to validate every single aspect of the data, cross-checking whether the ML was getting to the right answer. Once they start relying on it, they might even forget that this piece exists and that the system is doing the work on its own. It's very important to know whether they're okay with not knowing everything.
And will I have a feedback system that’ll keep the model fresh? You may have seen thumbs up and thumbs down buttons on ChatGPT or similar solutions, where if you can see you're getting the wrong responses, you can provide feedback and say, ‘Hey, this isn’t working well.’
If there are too many thumbs down, it's at an alarming stage, and at that point, the engineers working on this need to start thinking about what went wrong. Is it something around the data? Or are there external factors that led to this? So accurate error handling is needed.
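As a rough illustration of that 'alarming stage', here's a minimal sketch of a feedback monitor that flags the model when the thumbs-down share of recent feedback crosses a threshold. The window size and threshold are arbitrary assumptions, not values from any specific product.

```python
from collections import deque

class FeedbackMonitor:
    """Tracks recent thumbs-up/down feedback and flags when quality degrades."""

    def __init__(self, window_size=500, max_down_ratio=0.30):
        # 1 = thumbs up, 0 = thumbs down; only the most recent window is kept.
        self.recent = deque(maxlen=window_size)
        self.max_down_ratio = max_down_ratio

    def record(self, thumbs_up: bool) -> None:
        self.recent.append(1 if thumbs_up else 0)

    def needs_attention(self) -> bool:
        """True when the thumbs-down share of recent feedback crosses the threshold."""
        if not self.recent:
            return False
        down_ratio = 1 - sum(self.recent) / len(self.recent)
        return down_ratio > self.max_down_ratio

# Usage: record every piece of user feedback and alert the team on degradation.
monitor = FeedbackMonitor()
monitor.record(thumbs_up=False)
if monitor.needs_attention():
    print("Too many thumbs-down responses - investigate the data or external factors")
```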
I'd advise you to think through these questions, and only start building an ML solution if 80% of your answers are 'yes', because it's a heavy investment of time and effort. There are situations where building the entire machine learning solution end to end may take half a year or more.
The ML product lifecycle
So now that we know what AI is, how we can build it, where we are in the hype cycle, and when we can build an ML solution after doing that evaluation, we want to think about the lifecycle of an ML solution.
Remember that customers don't care about your solution or what data you have. They only care about the problems they have. They want a solution to the problem, which you can accomplish by using this data and creating it in a way that they can have a seamless user experience. And remember to make the value clear and easy to understand for them.
Problem definition
The first step in the lifecycle is problem definition. Even in the world of machine learning, understanding the users and the pain points is incredibly important.
In the world of machine learning, the other thing you need to think about is whether ML is the right fit for the problem you have. Are we using ML just for the sake of it? Or does it genuinely offer a solution here?
There may be cases where ML naturally fits the solution, and other situations where ML isn't the right fit. You're weighing time versus money versus experience.
Lastly, it’s about how the user experience will change. For example, if I'm building a self-driving car, I don't want to change the experience entirely. I don't want to change the basic concepts of the accelerator and brakes, I just want to enhance the user's experience in a way that the machine can do the work on its own and humans can just moderate it to the point they can trust the system.
Success metrics
Next come the success metrics. Defining metrics is core to any product, not just machine learning products. PMs should think about which metrics are the right ones to work on and what we're optimizing for.
Over-engineering may lead to failures at some point. But for a machine learning solution, we should think about metrics like precision and recall. Precision is about accuracy: of the recommendations we're giving, how many are actually right? Recall is about coverage: am I covering the entire problem space and surfacing all the relevant results?
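For concreteness, here's a minimal sketch of how those two numbers are computed from a set of predictions; the labels at the bottom are made-up example data.

```python
def precision_and_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = relevant, 0 = not).

    Precision: of everything we surfaced, how much was actually right?
    Recall:    of everything that was actually right, how much did we surface?
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Made-up example: 4 items were truly relevant, the model flagged 4 items,
# and only 2 of those flags were correct.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0]
print(precision_and_recall(y_true, y_pred))  # (0.5, 0.5)
```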
This is very different from a traditional product model where you think about usage and adoption mode. So, in this case, you need to think about the trade-offs and whether you want to start with precision or recall.
It may be difficult to define those metrics for an ML solution because there are too many unknowns. But having that starting point makes a difference: if you make those trade-offs ahead of time, you can define the product goal more clearly, and the data scientists and ML engineers can work towards it more efficiently.
So, getting buy-in from leadership may be very important in this kind of scenario.
Data preparation
I can’t emphasize enough how important the data is in this space. It’s gold to the solution.
I usually use a four-point framework when I'm working with data. The first point is 'discover': find the right data so you can identify the right patterns for the data scientists to work on.
Again, be very customer-centric. In this lifecycle, we're doing so much in terms of analyzing the data, gathering the data, and then working more towards it, so don't lose track by not focusing on the customers. Look at what problems we’re solving for them, and work backward.
And then prove. Prove the validity of your data and your model through customer validations. Go to the customers, get their consent to use their data, and tell them you'll come back with a solution that may improve their experience. Then see how they respond, because everyone wants to improve their experience and get to the next level.
Then expand the data so that you can build models that generalize to a broader audience. There may be situations where the quality of the data is bad or the source of the data is missing.
But in this situation, it's very specific in terms of which organization is looking at it and which problem you’re solving, so you need to think about how you can use minimal data to create maximum value there.
As a PM, you may have to drive this process from end to end, understand the user needs, and identify the right data to use for the solution.
Train models
And then, finally, you're coming to the meat of the process, where you're training the model to meet your product goals.
You have a data scientist working on the data, the right patterns emerge from it, and they can recommend the right solution for you to work on. Then use all the scrappy ways you can to validate that data with customers, using Power BI or a similar tool, before even moving to the next phase of productizing it.
Think about some edge-case scenarios. As a PM, you're talking with customers, and you may come across some edge scenarios that you should let your data scientists or engineers know about so that you can brainstorm those scenarios together and they can train the model accordingly.
And then set the right user expectations. Setting expectations for the user is so important because they want to know what data they’re looking at and what information’s behind it.
Deploy and manage the model
And last is deploying and managing the model. Once you've trained the model for a specific use case, you want to deploy it and scale it further. There's a saying that you spend only 1% of your time on the machine learning itself and 99% on plumbing, meaning most of the work is piping the data, getting the right model in place, and then scaling it further.
But also remember to monitor the solution end to end: if there are any issues on the data side, have triggers set up in your system so you can handle those errors easily, give customers the right experience, and maintain that trusted solution.
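As an illustration of that kind of trigger, here's a minimal sketch of a data-quality gate that could run on each incoming batch before it reaches the model. The field names and the 5% threshold are assumptions for the example, not any particular product's checks.

```python
def check_incoming_batch(records, required_fields=("user_id", "timestamp", "value"),
                         max_missing_ratio=0.05):
    """Flag a batch of incoming records when too many values are missing.

    Returns a list of human-readable alerts; an empty list means the batch looks healthy.
    """
    if not records:
        return ["No records received - the upstream data source may be down"]

    alerts = []
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) is None)
        ratio = missing / len(records)
        if ratio > max_missing_ratio:
            alerts.append(f"{ratio:.0%} of records are missing '{field}'")
    return alerts

# Usage: run on every batch and page the team (or pause retraining) on alerts.
batch = [
    {"user_id": 1, "timestamp": "2023-05-01T10:00:00", "value": 42},
    {"user_id": 2, "timestamp": None, "value": 17},
]
for alert in check_incoming_batch(batch):
    print("ALERT:", alert)
```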
Also, be very transparent about it. You've probably seen that ChatGPT, and even Google Bard, show how they're processing your prompt, and they carry a disclaimer that the information may not always be accurate.

Pardon my obsession with cars, but now that we know how to build an ML model, this is how you can transition from a regular car to a self-driving car: using computer vision, image recognition, and deep learning capabilities to transform your product to the next stage.
Key takeaways
Being an AI PM is about being very data- and experiment-driven, and there's no one-size-fits-all. New things may come up in every situation, so think about how you want to tackle them.
Secondly, be mindful before using any ML solution. Do you actually need one? Evaluate whether ML is the right solution for the particular problem, or whether there are simpler models or mathematical approaches you can build on top of the data you already have.
Be very customer-centric, because you may get lost along the way as you work through a problem. You may start with customers, but as you're evaluating data and dealing with all the challenges, you can lose track of what the customers actually asked for. Staying grounded can be very difficult.
Build a model for a specific use case, and then add incremental value on top of it. That means building a specific model, then seeing how you can reuse it, applying that data science model, or small tweaks to it, to more features.
That's powerful and impactful because you still see incremental gains in each feature, but with far less effort: the one-time effort of building the model from scratch leads to two or three major wins in that space. And that's how you can enable AI and ML for product-led growth.
