Hundreds of you attended our very first AI for PMs Summit and oh boy did we get a plethora of AI insights and machine learning… well… learnings from a whole host of experts.
We heard from product pros who unpacked everything there is to know about AI product management. From building AI products to using ML to solve user problems, the jam-packed day had it all.
We’ve narrowed down the insight-laden event into some bite-size takeaways, but if you want to relive the whole thing, grab yourself a PLA membership.
Takeaway #1: Managing the ML lifecycle is critical for AI PMs
A big theme throughout the Summit was that product managers are ultimately responsible for building products that fully leverage AI, not data scientists or engineers. But building an AI product isn’t the same as using AI to build a product.
An AI product manager’s primary focus is to build, deliver, and manage AI products for end-users. Alessandro Festa, Senior Product Manager at SUSE, explored the stages of AI products and the responsibilities of PMs - in particular, the importance of managing the ML lifecycle.
The ML lifecycle can be broken down into four steps:

- Scope
- Experiment
- Train
- Predict
Alessandro briefly explores each of these steps here and delves into where the PM should be involved and how.
“The PM for AI has to remember that while as a PM they will have to manage the product lifecycle (discover, validate, develop, launch, measure) they will have to match the process with a different lifecycle, the one related to Machine Learning (scope, experiment, train, predict).
“This is an iterative process with different timing, especially in the first phases. We may assume that 80% of the entire AI project is or should be dedicated to the data preparation and this is within the scope and experimentation phase. The PM for AI will have to manage a highly technical team balancing their development pace with the stakeholder expectations.
“Often the initial pace of the project is slower than expected and, if not correctly addressed, could lead to high frustration in the stakeholders. Finally, the PM for AI has to remember that as iterative as it is, the ML lifecycle of the "product" will still have ups and downs that they have to predict and manage accordingly.”
Alessandro Festa, Senior Product Manager at SUSE
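As a rough illustration of the train and predict phases in that lifecycle, here’s a toy, pure-Python “model” - the nearest-centroid classifier and the dataset are hypothetical stand-ins for a real ML pipeline, not anything from the talk:

```python
# Toy illustration of the "train" and "predict" phases of the ML
# lifecycle. The classifier (nearest centroid) and the data are
# hypothetical stand-ins for a real ML pipeline.

def train(samples):
    """'Train' by computing the mean feature value per label."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    """Predict the label whose centroid is closest to the value."""
    return min(model, key=lambda label: abs(model[label] - value))

# The scope and experiment phases would decide what data to collect;
# here we just hand-make a toy dataset of (feature, label) pairs.
data = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
model = train(data)
print(predict(model, 1.5))  # -> low
print(predict(model, 8.5))  # -> high
```

The point of the sketch is the shape of the loop, not the model: scoping and experimenting happen before any training code runs, which is why so much of the project timeline sits in those early phases.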
Takeaway #2: Bias needs to be addressed when building AI products
The discussions around the responsible use of AI and the responsible development of AI products have really picked up, but we have yet to see many of these principles and frameworks put into operation in production systems.
Many of the improvements in AI solutions to critical problems are thanks, in part, to the massive amounts of data now available. With the widespread application of AI, we are now seeing products with greater scale and impact, stronger personalization, and the ability to do things faster and more accurately. This is why it’s now so crucial to exercise awareness and responsibility when building AI products.
A lot of this comes down to addressing the biases in AI. Bruke Kifle, AI PM at Microsoft, outlined these specific biases in his talk:
- Pre-existing bias - Bias reflecting that of the organization responsible for determining requirements, as well as development and deployment.
- Technical bias - Bias that’s emerging as a result of technical constraints and/or decisions.
- Emergent bias - Bias emerging as a result of AI/ML learning on its own in the wild. The addition of new users and new types or sources of data can result in biases emerging in previously unforeseen ways.
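One way to make technical or emergent bias concrete is a simple check on model outcomes across user groups. Here’s a minimal sketch of a demographic-parity check in Python - the group data, the “approval” framing, and the 0.8 threshold (the common “four-fifths rule”) are illustrative assumptions, not something from Bruke’s talk:

```python
# Compare a model's positive-outcome rate across two user groups
# (demographic parity). Group data and the 0.8 threshold (the
# "four-fifths rule") are illustrative assumptions.

def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def parity_ratio(group_a, group_b):
    """Ratio of the lower positive rate to the higher one (1.0 = parity)."""
    ra, rb = positive_rate(group_a), positive_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# 1 = model approved, 0 = model denied (toy data)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = parity_ratio(group_a, group_b)
print(f"parity ratio: {ratio:.2f}")  # well below 0.8, worth investigating
```

A check like this won’t tell you *why* the gap exists - pre-existing, technical, and emergent bias can all produce it - but it flags where to start looking.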
Identifying the biases that exist in technology, and how they play a role in sensitive use cases, is hugely important going forward. AI product managers need to consider the broader impact some AI-powered decision or recommendation systems can have on people’s lives. Product managers need to think about this impact at the individual, organizational, and societal levels.
For example, what types of physical harm or safety concerns are there? What are the impacts on financial performance? And what role do social media platforms play in destabilizing economies and governments?
“The biases we’ve defined and how they play a role in use cases at all levels is really a great framework to have as a PM when driving the design and development of products.”
Bruke Kifle, AI PM at Microsoft
The responsible AI principles to follow include:
- Transparency and explainability.
- Reliability and safety.
Takeaway #3: Machine learning solves the right complex user problems when you understand the data
Machine learning (ML) can be used to analyze data and drive better decision-making and outcomes. When it comes to solving user problems, it’s important to understand that data and the specific problems that need solving.
AI and ML shouldn’t be incorporated into solving user problems for the sake of it. Instead, PMs need to focus on the problems that would be difficult to solve with, for example, traditional programming. ML is a learning system that can adapt to trends as new data comes in, so PMs need to be able to distinguish between automation problems and learning problems.
ML models are being used to parse through large amounts of complex data and feedback to identify sentiment - like how users are feeling about emerging products, topics, and themes - helping to highlight actionable insights.
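To show the shape of that task (feedback text in, sentiment label out), here’s a deliberately naive, dictionary-based scorer - a toy stand-in for the trained models real teams would use, with a keyword list invented for illustration:

```python
# Deliberately naive, dictionary-based sentiment scorer. A real system
# would use a trained model; the keyword lists here are illustrative.

POSITIVE = {"love", "great", "helpful", "fast"}
NEGATIVE = {"slow", "confusing", "broken", "hate"}

def sentiment(feedback):
    """Label feedback text by counting positive vs. negative keywords."""
    words = set(feedback.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Love the new dashboard, really helpful"))  # -> positive
print(sentiment("Search is slow and confusing"))            # -> negative
```

Even this toy version makes the data dependency obvious: the output is only as good as the vocabulary (or, for a real model, the training data) behind it.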
Good data is, of course, key to getting the most accurate results, which includes having clean, labeled, and anonymized data - and having enough of it.
“Data is needed at every step of the ML pipeline. When you’re doing data collection you want to define all that data and which sources it’s going to come from. You might want to work with your IT admins and data science teams to figure this out. And then when it comes to data pre-processing, make sure it’s clean and formatted properly, and include data transformation as part of this discussion too.”
Megha Rastogi, Group Product Manager at Okta
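The pre-processing step Megha describes can be sketched in a few lines - dropping incomplete records and anonymizing identifiers before data reaches a model. The field names, salt, and cleaning rules below are illustrative assumptions, not her team’s actual pipeline:

```python
import hashlib

# Sketch of a pre-processing step: drop incomplete records and
# anonymize identifiers before data reaches the model. Field names,
# the salt, and the cleaning rules are illustrative assumptions.

SALT = "replace-with-a-secret-salt"

def anonymize(user_id):
    """One-way hash so records can be joined without exposing the raw ID."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:12]

def preprocess(records):
    clean = []
    for r in records:
        if not r.get("user_id") or not r.get("feedback"):
            continue  # drop incomplete rows
        clean.append({
            "user": anonymize(r["user_id"]),
            "feedback": r["feedback"].strip().lower(),  # normalize text
        })
    return clean

raw = [
    {"user_id": "u123", "feedback": "  Great onboarding flow  "},
    {"user_id": "", "feedback": "missing id"},   # dropped
    {"user_id": "u456", "feedback": None},       # dropped
]
print(preprocess(raw))
```

Hashing with a salt (rather than storing raw IDs) is one common way to keep records joinable for analysis while keeping identities out of the training data.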
Takeaway #4: You need to know how AI/ML fits into your user’s lives
When it comes to prototyping AI and machine learning models, it’s all too easy to overlook the user’s perspective and focus only on the hard technological problems. People hold certain expectations about deterministic technologies (e.g. those that follow simple rules and pathways) - expectations that no longer apply when they deal with humans, and now with systems that include machine learning.
As with all work that product managers do, we need to understand why we are building this thing. Are we building the right thing? We do this by understanding how a technology will fit into people’s lives alongside other technologies.
Empathy was a huge point of discussion during the Summit, and Empathy Mapping for the Machine is a technique that can be utilized to help people understand what they expect these systems to do. Through roleplay, we can understand what would make the most sense for humans to interact with systems like this.
When prototyping for AI/ML, you must understand how machine learning actually fits into the user’s life. How is it part of a larger system? And how does it fit into the ecosystem of all the different solutions they need to interact with?
“I’ve found that using techniques to get at expectations between humans can be used for the times that machines intermediate those humans. In the end, it is still humans interacting with other humans through extra, machine-mediated, steps.”
Chris Butler, Assistant Vice President, Global Head of Product Operations at Cognizant
Takeaway #5: Getting started in machine learning means seeing where it fits in
ML fits well in places where there are huge amounts of data to sift through and insights to draw - far more than anyone could go over manually, data point by data point. That makes it an ideal place to bring ML into the process and work together with data scientists to discover machine learning opportunities.
Product managers should focus on the requirements, the user journey, and establishing product-market fit, making sure it’s something users will like. It’s important to understand the problem and the user before you start thinking about the technology.
Once you have the requirements down along with the user journey, you can go to the data scientists and they can dictate the technology that fits. This was a big theme across discussions throughout the event: the idea of knowing the requirements first and fitting the technology to those requirements.
As for the level of knowledge a PM needs to get started in machine learning: they don’t need to know as much as a data scientist. But it’s important to have a data-driven mindset and some basic knowledge of ML.
We’ve only scratched the surface of all the insights, content, and conversation that went down at the AI for PMs Summit - but thankfully you can catch up on all the action, plus grab yourself a bunch more OnDemand footage, templates, frameworks, and exclusive content. It’s all ready and waiting for you in our membership plans.