If you haven’t already - check out part one here.
Building AI products
Here's a quick look back at the eight steps to building AI products. These steps form the backbone of the process, guiding your idea from inception to real-world impact.
- Identify the problem
There are no alternatives to good old-fashioned user research
- Get the right data set
Machine learning needs data — lots of it!
- Fake it first
Building a machine learning model is expensive. Try to get success signals early on.
- Weigh the cost of getting it wrong
A wrong prediction can have consequences ranging from mild annoyance to the user to losing a customer forever.
- Build a safety net
Think about mitigating catastrophes — what happens if machine learning gives a wrong prediction?
- Build a feedback loop
This helps gauge customer satisfaction and generates data for improving your machine learning model: on errors, technical glitches, and prediction accuracy
- Get creative
ML is a creative process, and product managers can add value
This article covers steps four through six. Let’s get right to it.
4. Weigh the cost of machine learning going wrong
Machine learning is not error-free, and developing without guardrails can have serious consequences. Consider, for example, the malicious Twitter bot, or the Google Photos AI gone wrong.
Of course, we cannot conclude that all AI will go rogue. At the same time, we need to acknowledge that getting it wrong with AI has a cost. Think, for example, of an order management service that automatically detects whether a customer wants to cancel their order.
The cost here is that the model might wrongly interpret the user's intent as a cancellation, with financial consequences for both the user and the company.
Machine learning relies to a large extent on probabilities. This means that there is a chance that the model gives the wrong output. Product managers are responsible for anticipating and recognizing the consequences of a wrong prediction.
One great way to anticipate consequences like the ones above is to test, test, and test some more. Try to cover all scenarios that the AI might encounter. Understand what makes up the probabilities computed by the model. Think about the desired precision of the model (keep in mind: the more precise your model is, the fewer cases it can cover).
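That precision-versus-coverage trade-off can be sketched with a toy threshold experiment. The scores and labels below are made up for illustration, not from a real model:

```python
# Toy illustration of the precision vs. coverage trade-off.
# Scores and labels are hypothetical, not from a real model.
def precision_and_coverage(scores, labels, threshold):
    """Precision and coverage when the model only acts above a confidence threshold."""
    acted = [(s, y) for s, y in zip(scores, labels) if s >= threshold]
    if not acted:
        return 0.0, 0.0
    precision = sum(y for _, y in acted) / len(acted)  # how often acting was right
    coverage = len(acted) / len(scores)                # share of cases it acted on
    return precision, coverage

scores = [0.95, 0.90, 0.85, 0.70, 0.65, 0.55, 0.45, 0.30]  # model confidence
labels = [1,    1,    1,    1,    0,    1,    0,    0]      # 1 = truly a cancellation

print(precision_and_coverage(scores, labels, 0.5))  # acts on more cases, lower precision
print(precision_and_coverage(scores, labels, 0.8))  # acts on fewer cases, higher precision
```

Raising the threshold buys precision at the cost of coverage, which is exactly the tension a product manager has to arbitrate.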
Talk to your data scientists about what can potentially go wrong. Ask tough questions - it's the data scientist’s job to build a fairly accurate model, and it's the product manager’s job to understand the consequences of such a model.
In general, the cost of getting it wrong varies by use case. If you are building a recommender system to suggest similar products, where you previously had nothing, the worst outcome might be a low conversion on the recommendations you offer. One must then think about how to improve the model, but the consequences of getting it wrong are not catastrophic.
If AI is automating manual flows, a good estimate of the cost is how often the AI gets it wrong compared to a human. If, for example, a human correctly identifies an email as spam 99% of the time, an AI spam detector is worth the investment at 99.1% precision.
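A back-of-the-envelope expected-cost calculation makes this comparison concrete. The volume and per-error cost figures here are invented for illustration:

```python
# Back-of-the-envelope comparison: is the AI worth it versus a human?
# The monthly volume and per-error cost figures are hypothetical.
def expected_error_cost(accuracy, monthly_volume, cost_per_error):
    """Expected monthly cost of wrong decisions at a given accuracy."""
    return (1 - accuracy) * monthly_volume * cost_per_error

human_cost = expected_error_cost(0.990, 10_000, 2.0)  # human: right 99% of the time
ai_cost = expected_error_cost(0.991, 10_000, 2.0)     # AI: 99.1% precision

print(human_cost, ai_cost)  # the AI edges out the human on error cost alone
```

This only compares error costs; in practice the AI's labor savings would also weigh in its favor.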
It is important to understand these consequences in order to mitigate them and to train the model better for future scenarios.
5. Build a safety net
Once all of the consequences of a wrong prediction have been identified, relevant safety nets must be built to mitigate them. Safety nets can be intrinsic or extrinsic.
Intrinsic safety nets
These recognize the impossibilities that are fundamental to the nature of the product. For example, a user cannot cancel an order if no order was placed in the first place. So that email you got is definitely not about canceling an order, and the model has misclassified it.
It’s a good idea to have a human agent look into this case. A useful activity is to map out the user journey for your product and identify the states that the user can go through. This helps weed out impossible predictions. Intrinsic safety nets are invisible to the user.
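An intrinsic safety net from the order-cancellation example above can be sketched as a simple state check. The state and intent names here are hypothetical:

```python
# Hedged sketch of an intrinsic safety net: reject predictions that are
# impossible given the user's state. State and intent names are hypothetical.
VALID_INTENTS_BY_STATE = {
    "no_order":      {"place_order", "ask_question"},
    "order_placed":  {"cancel_order", "modify_order", "ask_question"},
    "order_shipped": {"track_order", "return_order", "ask_question"},
}

def apply_safety_net(predicted_intent, user_state):
    """Pass the prediction through only if it is possible in this state;
    otherwise route the case to a human agent."""
    if predicted_intent in VALID_INTENTS_BY_STATE.get(user_state, set()):
        return predicted_intent
    return "escalate_to_human"

print(apply_safety_net("cancel_order", "order_placed"))  # cancel_order
print(apply_safety_net("cancel_order", "no_order"))      # escalate_to_human
```

The mapping of states to valid intents falls straight out of the user-journey exercise, and the check runs entirely behind the scenes, so the net stays invisible to the user.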
Extrinsic safety nets
Extrinsic safety nets are visible to the user. They can take the form of confirming user intent or double-checking the potential outcome.
LinkedIn, for example, has a model to detect the intent of a message and suggest replies to its users. It does not, however, assume a reply and automatically send it. Instead, it asks the user to pick from a list of potential replies.
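A suggested-replies flow in this spirit can be sketched as follows. The function, reply texts, scores, and thresholds are all hypothetical, not LinkedIn's actual implementation:

```python
# Hedged sketch of an extrinsic safety net: surface suggestions, never auto-act.
# Reply texts, scores, and thresholds are hypothetical.
def suggest_replies(scored_replies, k=3, min_score=0.5):
    """Return up to k high-confidence reply candidates for the user to pick from.
    Nothing is ever sent automatically; the user confirms the final choice."""
    eligible = [(reply, score) for reply, score in scored_replies if score >= min_score]
    eligible.sort(key=lambda pair: pair[1], reverse=True)
    return [reply for reply, _ in eligible[:k]]

candidates = [("Thanks!", 0.90), ("Sounds good", 0.80),
              ("No", 0.30), ("Sure", 0.70), ("Maybe later", 0.65)]
print(suggest_replies(candidates))  # ['Thanks!', 'Sounds good', 'Sure']
```

Keeping the final action in the user's hands caps the cost of a wrong prediction at a slightly cluttered suggestion list.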
Extrinsic safety nets for users are not a new concept. Think about every time Windows 95 popped up a confirmation dialog before a destructive action.
That system did not use AI, but it did take into account that erroneous actions can have consequences. Safety nets are ingrained in all products, and AI is no exception.
6. Build a feedback loop
Setting up safety nets also helps gather much-needed feedback for the model.
Feedback loops help measure the impact of a model and add to the general understanding of usability. In an AI system, feedback is also how a model learns and improves. Feedback is an important data collection mechanism: it yields labelled datasets that can be plugged directly into the learning mechanism.
For Amazon’s recommendation module, the feedback loop is quite simple: does the guest click on a recommendation, and is the recommended item bought?
AirBnB uses a more direct approach for feedback collection.
Netflix uses a hybrid: it learns from how many of the recommendations you click and view, and it also uses a thumbs-up mechanism to explicitly log preferences.
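Implicit signals like clicks and purchases and explicit signals like thumbs can be folded into a single labeling step. The event schema below is invented for illustration:

```python
# Hedged sketch: turning feedback events into labelled training rows.
# The event schema (thumbs/clicked/purchased) is hypothetical.
def label_from_feedback(event):
    """Map a feedback event to a training label, or None if ambiguous."""
    if event.get("thumbs") == "up":    # explicit positive (Netflix-style)
        return 1
    if event.get("thumbs") == "down":  # explicit negative
        return 0
    if event.get("purchased"):         # strong implicit positive (Amazon-style)
        return 1
    if event.get("clicked") is False:  # ignored recommendation
        return 0
    return None                        # clicked but not purchased: unclear

events = [
    {"item": "a", "thumbs": "up"},
    {"item": "b", "clicked": False},
    {"item": "c", "clicked": True, "purchased": True},
    {"item": "d", "clicked": True, "purchased": False},
]
dataset = [(e["item"], lbl) for e in events
           if (lbl := label_from_feedback(e)) is not None]
print(dataset)  # [('a', 1), ('b', 0), ('c', 1)]
```

Ambiguous events are dropped rather than guessed at, which keeps the resulting labelled dataset clean enough to feed back into training.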
It must be noted that safety nets can often double as feedback channels for refining the model. Safety nets, by their nature, catch cases outside the model's predictions. Whenever possible, they should be used to label data and build a stronger training dataset.
I may make AI sound scary, but responsible development with an understanding of the consequences is essential to any product, whether or not AI is involved.
Check out part three.