What does it really mean to build products with artificial intelligence (AI)? How much AI knowledge should a product manager have? And is it actually better to build or buy an AI platform?

The experts weighed in on the above questions and more in a panel discussion moderated by Alessandro Festa, then Senior Product Manager at SUSE, now Senior Product Manager at SmartCow.

Check out what they had to say and get the actionable insights you need on building innovative products with AI…

The panelists were:

  • Tanu Chellam, VP of Product at Seldon
  • Zia K. Mohammad, then Senior Product Manager at Chime, now Senior Product Manager at Amazon
  • Jaime Espinosa, Product Leader at Cortex, Twitter
  • Bikalpa Neupane, then Technical Product Manager at IBM Watson, now Director of Advanced Technologies & Experimentation at Takeda

What does it mean to build products with AI?


Whether we’re consumers or producers of AI in our products, we're building tools for machine learning. I've adopted AI into my products in the past, and as of early this year, I’m on the vendor side of things.

Seldon is a company that builds MLOps tools. So if you’re on your machine learning journey and you’re applying it at your workplace, Seldon helps you with infrastructures such as serving, monitoring, and explainability.

We also have to consume and use AI ourselves as we build AI products. I'm constantly putting myself in my customers’ shoes and thinking, how would I use it? And if we can use the product internally, that makes it better.


Echoing Tanu’s first point, I think back to when I was working in multilingual NLP. Building AI products really meant building with responsibility: when you're building NLP software, you need to make sure you're conveying the right meaning, and doing that without bias. It really empowers individuals and gives them a sense of responsibility.


After 15 years of experience as a data scientist, developer, and product manager (PM), I’d say it all comes down to ROI. If somehow the machine learning (ML) increases the value of your product or makes it so that investments into AI warrant the returns you're going to get, by all means, go for it.

It’s often best to look for alternatives first and grow your ML gradually, but once you have an ML-enabled or ML-powered product, you need to figure out how to quantify the investment and impact, and how to calculate the ROI. Keeping an eye on that is one of the principal things PMs bring to the table in this space.
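The quantification Jaime describes can come down to simple arithmetic. A minimal sketch in Python, with entirely hypothetical figures, just to make the bookkeeping concrete:

```python
def ml_feature_roi(added_revenue, cost_savings, build_cost, run_cost):
    """ROI as a fraction: (total gain - total investment) / total investment."""
    gain = added_revenue + cost_savings
    investment = build_cost + run_cost
    return (gain - investment) / investment

# Hypothetical: $120k added revenue + $30k savings vs. $80k to build + $20k to run
roi = ml_feature_roi(120_000, 30_000, 80_000, 20_000)
print(f"ROI: {roi:.0%}")  # → ROI: 50%
```

The hard part in practice is not the formula but attributing revenue and savings to the ML feature specifically, which is exactly the PM's job here.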


Piggybacking on what Jaime just mentioned, it's all about ROI. As a PM, you look at value from different perspectives, and ROI is one of the indicators. You also look at software and maintenance costs, because one of the key attributes of building AI products is whether you're open to experimenting.

The AI model-building exercise doesn’t run in parallel with general software development, because you play with a lot of data, and a machine learning model comes with its own trade-offs; you have to do a lot of data refreshes and experimentation.

So alongside ROI, I would emphasize an experimental culture as part of building products with AI capabilities infused in them.

What do you think about the concept of building a product for AI vs using AI as part of a toolset as a product manager?


I'm a big fan of an emerging field called product science, in which you use statistical methods and ML to gain insights about your customers and markets, so you can maximize the impact you're going to make. Here, AI and ML strategies are applied to product development itself, and I believe that extending market analysis into product design will be a goldmine.


At Seldon, we really focus on the serving, monitoring, and explainability parts of the ML lifecycle. And what we're trying to do is see if we can do something we're calling ML for MLOps.

This is how we help customers iterate back to the training process: what can we do, either prescriptively or predictively, to help them close the loop? Once they've trained a model, how do we help them retrain? What do they need to change as their model or data changes? This is really exciting for me personally because not many players in the field do this.
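The "close the loop" idea — watching a deployed model's live data and prompting retraining when it no longer matches what the model was trained on — can be illustrated with a toy drift check. This is not Seldon's implementation, just a minimal sketch: compare the mean of a live feature against the training distribution and flag retraining when it shifts too far.

```python
import statistics

def should_retrain(train_values, live_values, tolerance=0.5):
    """Crude drift test: flag retraining when the live mean moves more than
    `tolerance` training standard deviations away from the training mean."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    drift = abs(statistics.mean(live_values) - mu) / sigma
    return drift > tolerance

train = [1.0, 2.0, 3.0, 4.0, 5.0]              # mean 3.0, stdev ~1.58
print(should_retrain(train, [2.9, 3.1, 3.0]))  # → False (live data looks familiar)
print(should_retrain(train, [5.0, 5.2, 4.8]))  # → True (mean has shifted to ~5)
```

Production monitoring tools use far richer statistics (per-feature distributional tests, prediction drift, outlier detection), but the shape of the loop — monitor, compare, trigger retraining — is the same.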

We're also collaborating across the industry with other companies, some of which may be competitors, but this is a field where we have to work together to improve. So that’s what I think about when it comes to improving products internally, as a vendor, from an ML perspective.

How much knowledge should a product manager have around AI? And how do you handle any knowledge gaps?


I don't actually have a data science or machine learning background at all. I come from cybersecurity, design tech, and other fields that adopt ML rather than create ML. As such, I work with incredibly talented leaders in the ML space and rely on them pretty heavily to help me to understand the space.

PMs typically think about the vision for the future, but this space is so research-driven that I fill the gap by focusing on the customer and building more customer empathy.

I spend time talking to customers and understanding pain points so that solutions and brainstorming can be done collaboratively with the tech team and myself, rather than coming up with a blueprint for where we're going from just a product division perspective. I rely heavily on my engineering team, and props to them for taking me along on the journey.


Having a science background does help sometimes, but for the most part, you can give up on the idea that you're going to know everything within ML. Knowing enough to have a conversation with engineers and data scientists is really what's important so that they can bring you in as part of the process.

You don't need to go and get a data science degree to be an ML PM, but it is necessary to have enough breadth within the field to ask the right questions.


I’ve had academic training in informatics in the human-AI space and spend most of my time working on the fundamentals of human-computer interaction. This covers a lot of the PM toolkit, such as interviews, surveys, focus groups, and design thinking methodologies grounded in human-centered principles.

Having the AI background to build a model and judge whether it's performant enough is important. What do our users care about? Do they really want an accurate model, or are they looking for trade-offs?

A technical background helps you make the right call and the right assessment when it comes to machine learning, model building, and deployment. But sometimes it works against you as a PM, because your job is to focus on the vision and the roadmap.

It’s important not to work too much in isolation with PSDs, data scientists, and machine learning experts, as there still needs to be someone heading the product direction.

I sometimes like to take myself out of that zone and go out and talk to clients and customers. I schedule biweekly calls with some of our large enterprise clients and really try to get the customer voice into the machine learning equation.

At IBM Watson, I spent a lot of time working with the engineering team to understand and deploy machine learning and NLP models. But I also collaborated with IBM Research on academic papers and on experiment design. It really comes down to the depth and breadth of the role itself in the organization you fit into.


I think it comes down to a willingness to learn as well. Being a PM means that you should have a genuine curiosity about your product. And if that product happens to be AI, it should be your prerogative to go ahead and learn more about it. Talk to your customers as well as your internal teams to do that.


I just want to add an exception in there. If your customers are data scientists or they are deeply technical and they're trying to explain their pain points to you, you need enough of a technical background to understand their language and what they’re trying to tell you.

What are your thoughts on developing AI as a platform? And is it better to build or buy?


In terms of developing AI as a platform, I think it's pretty hard to do because each company's needs are different. And that's what the second part of the question gets at: the stage of maturity, not just in terms of ML, but also the size and depth of the company.

From my perspective, it depends on how high-tech your product is and how much data you can source from it. For example, if it’s HR software that’s becoming increasingly technical, using AI to sort through resumes and CVs, then the stage of the company doesn't matter, because you’ll have lots of data to parse through.

However, if it's a platform for building a car, you have to be a bit more mature and have sold many cars before you have data that you can train your models on. So it depends more on the types of products that you're selling rather than the stage of the company.

In regards to build versus buy, I would say as a vendor company it's a good idea to see what’s out there before you decide to build. Building is really difficult and is a problem for many of us in this field.

Rather than building everything and reinventing the wheel in-house, ask yourself: ‘should I use Gmail to do email?’ Yes, probably, rather than building your own. So I would definitely look outside to buy before you decide that nothing meets your needs and you have to build it in-house.


This is something I’ve spent a lot of time researching as part of product science. Regarding the size of the organization, in my experience it doesn’t matter much; it's the team that really matters.

In terms of how mature your products are, I think that's very important. I previously released products thinking they were definitely something that people needed because they were optimization products, and they failed because I didn’t take into account the level of maturity that my customers' products were at. So the maturity of the stage and how much you've solidified the solution you have are both important.

With build versus buy, I've noticed that folks who build are struggling to keep up with changes within the market. You can’t keep up with academia and industry while you're trying to build your own solution. There are lots of opportunities to build the connective tissue of things that you buy, and it’s within that connective tissue where there's a lot of value.

It’s how you put it all together, but building the individual pieces themselves could prove challenging to maintain.


The teams that work on AI do have to evaluate build versus buy, and I think it really comes down to the question: what is it that you're focusing on? As a PM, you have to make decisions about the scope of your organization and what you’re going to focus on.

If your company's core business is AI, and it has a research organization closely aligned with academia and the resources to do it, then maybe building makes sense. Otherwise, look at vendors and use your PM skills to evaluate them and build the connective tissue between them. That calculus changes depending on whether you're a consumer or a producer of AI.


You can definitely build your own machine learning models for an NLP solution, but the question is, can you sustain them? One piece of advice I give out is to build AI products only if they truly matter to your business.

It's easy to define a problem, collect a dataset, and build a solution, but at the end of the day, you might realize that the solution doesn’t work, or that people don't care about it; it isn't doing your business justice. So how do you decide whether to buy or build your own NLP solution?

With NLP, I had to decide whether to have a full-stack team that could comprehend the linguistic challenges involved. Human language is hard and ambiguous, and so many things can influence your variables when you build a machine learning model. It's also not easy to keep up with the latest and greatest quality metrics out there.

If you look at some of the greatest NLP solutions out there, from vendors such as Microsoft, Google, Amazon, and IBM, they have state-of-the-art machine learning models and transformers that you can easily borrow.

You might have to sacrifice some customizability, and you really have to look for that domain fit. But when you use a third-party vendor solution, you also don't have to think as much about data infrastructure and cost. Running a model and a POC is a fairly trivial task, but scaling and deployment come with a whole set of challenges.

When you work on machine learning models in the NLP space, you also have to pay particular attention to things like explainability, the black-box problem, and the fairness and bias components, in conjunction with GDPR, PII, and third-party regulations that vary across industry verticals.
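The explainability concern above doesn't always require heavy tooling to reason about. One common, model-agnostic idea is permutation importance: shuffle one feature's values and measure how much accuracy drops. A minimal sketch, using a hypothetical rule-based "model" purely for illustration:

```python
import random

def accuracy(predict, rows, labels):
    """Fraction of rows where the model's prediction matches the label."""
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(predict, rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows.
    A bigger drop suggests the model leans more heavily on that feature."""
    base = accuracy(predict, rows, labels)
    col = [r[feature_idx] for r in rows]
    random.Random(seed).shuffle(col)
    shuffled = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                for r, v in zip(rows, col)]
    return base - accuracy(predict, shuffled, labels)

# Hypothetical model that only ever looks at feature 0
predict = lambda r: int(r[0] > 0.5)
rows = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
labels = [0, 1, 0, 1]
print(permutation_importance(predict, rows, labels, 1))  # → 0.0 (feature 1 unused)
```

Real black-box models need richer techniques (SHAP, LIME, counterfactuals), but even this crude check makes a useful conversation starter between a PM and a data science team about what a model actually depends on.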

There are a lot of moving parts, so if you can, go out there, find one of the open-source or enterprise solutions, and see if it works for you. Then make a decision on whether it’s worth the ROI.


I agree with that last point. When it comes to regulations, compliance, and the certifications people need to get, I think that's sometimes overlooked. I remember working with B2B companies that wanted to grab an open-source dataset and build a tool straight away, without having thought about the regulations they needed to meet to do all of these things.