My name’s Dan MacKenzie, I’m Head of Product at Altruistiq, and I’m going to explore why we even build AI products. When do we want to build them? And critically, when don't we want to build them?

I’ll be going through some specific examples of areas where AI has been applied well, and areas where it hasn't.

Let me begin by saying I'm not "anti-AI products" at all. I'm actually a huge proponent of some of the really exciting advances in this space, and I'm excited to have had the chance to apply technologies from this broad toolkit to some genuinely interesting problems.

However, I've also seen plenty of cases where AI, and I'm using that word quite broadly, wasn't necessarily the best tool for the job. In this article, I’ll be sharing some of the learnings from those instances.

I'll be covering:

  • Why are we even talking about this?
  • Needless AI: AI for AI’s sake
  • Why do companies try to make their product ‘AI’?
  • Stepping stones to AI products
  • So, what do we do about it?

Why are we even talking about this?

First, you probably want to know why I'm even going into this topic and what my experiences are with it. I’ll start with some brief background history, and use that to touch on a couple of different product areas we've worked on, and their relation to AI as a toolkit and a general subject area.

I spent the early part of my career working in Formula 1, doing a lot of simulation, parametric optimization, performance modeling, and related topics. It was a fascinating environment because, as a team, we produced vast amounts of data, and the big challenge was modeling that data and working out how to deliver insight from it.

This is actually a really great example of a team that was very data literate and that, at the time, didn't use AI. The analysis and analytics relied heavily on traditional, non-AI-based techniques.

I understand from some colleagues there that this has changed over the last few years as the technology has improved and developed. But I think it's a really nice example of how a team or company can make good decisions about when it's appropriate to bring in certain techniques and tools. It showed that you can get a really long way without trying to “AI everything”.

There are reasons behind this which I also want to touch on. It was super important to understand exactly where our results were coming from. We could verify and validate those results, and explain to others in the business why you should use this tire or this wing, in terms of parameters we could independently inspect. That's very different from a “black box model” that simply says "do this", which is what you saw in a lot of contemporary approaches at the time.

Another product I worked on in the digital product space focused on routing large ships and ferries to reduce their fuel usage. This was a really interesting way to look at human interaction with AI systems: how do people respond to a computer-generated system telling them what to do, when they don't necessarily understand why?

Understanding that layer of interaction is another element we really need to look at when deciding whether to build an AI product. Even if you can model, predict, and output the ideal recommendation perfectly, there's an additional element to the product: are your users going to be happy, or even willing to come around to the idea of being told what to do by a computer they may or may not trust?

So that's a couple of early products we worked on, and we then moved on to a couple of other ones, which had similar but slightly different issues. One was really looking at how we optimize energy usage and bidding in the energy space.

So you can optimize purchasing over the course of a year, for instance. This is kind of fascinating from a data perspective. But from a user perspective, our users got really nervous every time the system suggested any kind of short-term sacrifice for long-term gain.

Now, this is a very common strategic play in any kind of purchasing or trading market; anyone who trades stocks or commodities will be very familiar with it. And in many cases, the same people who said they were very concerned about the model's recommendations were actually very comfortable and confident making that kind of play when it was their own decision.

However, they felt uncomfortable having that kind of play taken out of their hands. A large part of what we needed to understand was the psychological element: how do we help people trust the model and understand what it's doing? And when it inevitably makes a mistake, how can they explain it and fix it so it does better next time?

Then I moved on to AI-driven staff scheduling. This is actually a surprisingly complex analytical problem to solve. If you're trying to schedule a bunch of different staff members into a rota, the search space very quickly becomes far too big to contemplate any kind of brute-force solution.

There’s an interesting contrast with the work we did in F1. There, we sometimes used AI techniques to solve problems that were very difficult to solve analytically because of the very large search space. But because F1 was very competitive and well funded, the usual solution was just to buy more compute and run it 24/7. We'd constrain the parameters where we could, but effectively we'd just search very large spaces to find the global optimum. If that takes many hours of CPU time, that's fine, because the benefits are worth the trade-off.

In contrast, if you run a coffee shop, or even a chain of coffee shops, you can't justify that cost just to recompute your weekly staff schedule every time a new employee joins or takes a holiday. So we looked at other techniques to see if we could be a bit cleverer. This may actually be quite a good use case for AI: a place where the cost-benefit of the more exhaustive analytical approach doesn't stack up.
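To make the search-space point concrete, here's a minimal sketch (the staff names, shifts, and scoring rule are all invented for illustration): brute force enumerates every possible rota, which explodes combinatorially, while a simple greedy heuristic reaches an equally balanced rota in a handful of steps.

```python
from itertools import product

# Hypothetical toy rota problem: assign one of `staff` to each shift.
staff = ["Asha", "Ben", "Cleo"]
shifts = ["Mon-AM", "Mon-PM", "Tue-AM", "Tue-PM", "Wed-AM", "Wed-PM"]

def imbalance(rota):
    """Score a rota by how unevenly shifts are spread (lower is better)."""
    counts = [rota.count(p) for p in staff]
    return max(counts) - min(counts)

# Brute force: every possible assignment. Even this toy has 3^6 = 729
# rotas; 20 staff over 50 shifts would be 20^50, which is hopeless.
brute_best = min(product(staff, repeat=len(shifts)), key=imbalance)

# Greedy heuristic: give the next shift to whoever has the fewest so far.
hours = {p: 0 for p in staff}
greedy = []
for shift in shifts:
    pick = min(staff, key=hours.get)
    greedy.append(pick)
    hours[pick] += 1

print(imbalance(brute_best), imbalance(greedy))  # both reach 0 here
```

The greedy rota matches the brute-force optimum on this toy, and its cost grows linearly with the number of shifts rather than exponentially, which is exactly the trade the coffee shop wants.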

I also want to mention my current role, at a climate tech startup. This space is increasingly dominated by data, and by drawing insight from that data for our customers. Rather than just presenting the data, we have to understand it in a scalable way and work out how to deliver that insight, so we're looking at all sorts of approaches further along the AI side of the spectrum.

Needless AI: AI for AI’s sake

Enough about me. I also want to go into some of what I consider needless AI. This idea of AI for AI’s sake is something that we see often, and there are three examples I want to run through.

Before I do, let’s look at the areas where you’re likely to see this needless AI:

  • Someone did an online course in AI and wanted to try it out for the sake of learning (novelty results).
  • The business had a bunch of data and wanted to “get some use out of it” so got someone to come in and “do AI.”
  • Tooling became so simple to use that non-technical users could implement their own AI solutions without thinking through why.

Some of the examples I'm going to explore here are quite humorous, and some are quite trivial. But even so, speaking from a product manager's perspective, we need to be careful, because they impact the customer experience in ways we may not fully understand.

I'll give you some examples of that as we walk through these three specific cases. But from a PM perspective, any change that we make to our product should be deliberate, should be considered, and should be something that is improving our user and customer experience, rather than something that's just kind of interesting for us to chuck in there.

AI-driven pizza?

So there's a couple of examples that have popped up over the past few years concerning this. The first one was a pizza restaurant that was using an AI toolkit to generate new pizzas for customers.

So I guess the question that I would ask as a customer, or as a user in this space - is why?

Why do I want to go into a restaurant and have an AI throw together a bunch of ingredients into a pizza that is likely to be absolutely awful, because it has no awareness of what a human might or might not want on a pizza?

I think it came down to someone who did an online course or degree in AI, learned all these different techniques, and happened to own a pizza restaurant. So they decided to apply those techniques, have an AI-enabled pizza restaurant, and serve AI-driven pizzas.

But what problem is this solving exactly?

I think the problem it's solving is that someone wanted to do an AI side project and decided to put it into a real product they're selling to customers. Of course, it's amazing to be interested in the AI space, and it's great to do side projects. But don't shoehorn them into a real product in a way that compromises what you're actually trying to deliver. Adding AI to a pizza restaurant does nothing except create a worse customer experience.

Convoluted craft beer subscription box

The second example I found was a craft beer subscription box. Usually, with this kind of business model, you pay a set amount each month and they send you a box with a variety of different things you might want to try. Sometimes they ask: do you like dark beer, do you like certain flavors? And they tailor the box to your tastes based on that.

There was an example of one of these that tried to incorporate an AI toolkit into it. So rather than just asking what kind of beer you like, this would send you a questionnaire and a survey that asked things like your age, your occupation, your food preferences, your postcode, and a variety of other things that, to me as a user, actually don't really correlate with what kind of beer someone likes.

They then built a recommendation engine and sent users a box based on what similar users, matched on these extra questions, liked. Now, put yourself in the business's position: you've got a bunch of order data, and you want to get some use out of it.

So my hypothesis is they hired someone and asked them to “do some AI with this” or “do some recommendations.” Or they noticed that Netflix and Amazon have all sorts of recommendation engines, know these are built on AI, and decided this was a really exciting space to get into. But was it useful? Did it provide a good user experience? In my case, it didn't.
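For what it's worth, the mechanical core of a "similar users liked this" recommender is quite simple; the hard product question is whether the inputs actually predict taste. Here's a minimal user-to-user sketch with invented ratings data:

```python
import math

# Hypothetical ratings: user -> {beer: score out of 5}. A minimal
# "similar users" recommender, sketched with made-up data.
ratings = {
    "alice": {"stout": 5, "ipa": 1, "lager": 2},
    "bob":   {"stout": 4, "ipa": 2, "porter": 5},
    "carol": {"ipa": 5, "lager": 4, "pilsner": 4},
}

def cosine(u, v):
    """Cosine similarity, with the dot product taken over shared beers."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[b] * v[b] for b in shared)
    return dot / (math.sqrt(sum(s * s for s in u.values())) *
                  math.sqrt(sum(s * s for s in v.values())))

def recommend(user):
    """Suggest beers the most similar other user rated highly but `user` hasn't tried."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: cosine(ratings[user], ratings[u]))
    return [b for b, s in ratings[nearest].items()
            if b not in ratings[user] and s >= 4]

print(recommend("alice"))  # → ['porter'], via alice's nearest neighbor, bob
```

Note that nothing here needs your postcode or occupation; the signal comes from the ratings themselves, which is rather the point.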

AI-generated hotel reviews

There's a final one I just want to mention because I think it's hilarious: a hotel that used GPT-2 or GPT-3, one of those text generation models that are incredibly sophisticated nowadays, to create its hotel reviews…

As you can imagine, there were some hilarious results, and you should definitely Google them because they're really fun to read. But we should also consider how much more insidious that type of application could be.

In this case, the reason behind it was that the tooling became so simple to use that people without any real technical awareness or background could just implement a solution, without having to think through why, or go through the learning process you'd normally need.

This is a trend that isn't going anywhere. There are more and more toolkits that are increasingly easy to use for non-data literate or non-AI literate people. It's something that as product managers, we should really be aware of.

Why do companies try to make their product ‘AI’?

The other question is: why do companies try to make their products ‘AI’? There are a few quotes we often hear here:

  • “It’s good branding.”
  • “It’ll learn by itself, so we don’t need more devs.”
  • “Everyone else is doing it.”
  • “Seems like it will be flexible to all of our users’ needs.”

We can see this increasing proliferation of AI products, and there are a few different reasons behind these decisions, both from a perception perspective and from an actual engineering perspective, along with different drivers within the company.

It’s good branding

So the first one we see often is a branding and messaging question: it's seen as really good branding to be AI-focused or AI-forward. This applies on the customer side, the talent side, and to a large extent the investment side of the business.

The challenge here is that, in many cases, this empirically kind of works. If you're an AI-driven company, you probably will get more applications from people who want to join you. So companies see a recognizable boost in these areas and focus more heavily on that aspect.

Now, none of this is about whether this is a good reason or a bad reason, I just want to dissect what those reasons are.

It’ll learn by itself, so we don’t need more devs

The next one we often hear is: “oh, well, if our code learns by itself, then we won't need as many devs in the future, so we can save costs.”

Anyone who's ever worked with this technology will likely be laughing at this sentiment and thinking - obviously, that's not true. But you can understand why that's an attractive proposition, or an attractive aspiration, for someone who's just seeing all these companies who are saying they’re saving loads of money through AI systems. Systems that are automatically processing records, as opposed to thousands of people classifying records manually.

They naturally think it sounds amazing, and think about how much money it can save.

Everyone else is doing it

The other thing we see is companies watching their competitors and going: well, if everyone else is doing it, then we should probably hedge on that front and do it too. In some cases, you get more companies devoting resources to talking about how they're using AI than to actually investing in building AI products.

This idea that "everyone else is doing it, so you should too" holds to a certain extent, but often no one's really doing it and everyone's just talking about doing it.

Seems like it will be flexible to all of our users’ needs

The other kind of excuse is that it's a flexible solution, so we don't have to really define our user needs in advance. This thinking comes from the idea that we can just get the computer to solve the problem for us. It's an attractive proposition, but it doesn't usually work in reality.

Again, none of these are necessarily bad business reasons to use AI. But we should assess them from a product and engineering perspective as well, before allocating significant resources to a given AI strategy.

Stepping stones to AI products

Next, I want to go through the road to AI and how we can progress along that journey.

There are a few different areas to focus on:

Manual processing & traditional algorithms

The first is manual processing, which is where a lot of businesses start. I actually had an interesting client a few years ago who told me that, for their product, they'd trained a series of "organic neural nets" to do X, Y, and Z tasks.

What he actually meant was: "I've shown one of my call center employees how to do this, and someone once told me the human brain is a neural net." I find that quite an entertaining characterization of how to implement AI. But there are a variety of ways you can actually scale well with manual processing, and call centers are a good example of this.

They can grow to support quite large businesses and scale fairly well. You can also use a fairly traditional algorithm. This isn't trying to do anything fancy; it's basically following a flowchart. If a series of nested "if" statements can define your problem and the route to a solution pretty well, this can be quite a good approach.
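As a sketch of what "just following a flowchart" looks like in code (the domain, fields, and thresholds here are all hypothetical):

```python
# A "traditional algorithm" in the sense above: a flowchart written as
# nested ifs. No learning, no model; every branch is explicit and auditable.
def route_support_ticket(ticket):
    if ticket["is_outage"]:
        return "on-call engineer"
    if ticket["customer_tier"] == "enterprise":
        if "billing" in ticket["tags"]:
            return "accounts team"
        return "priority queue"
    if ticket["age_days"] > 3:
        return "escalation queue"
    return "standard queue"

print(route_support_ticket(
    {"is_outage": False, "customer_tier": "enterprise",
     "tags": ["billing"], "age_days": 1}))  # accounts team
```

The appeal is the same one from the F1 story earlier: you can point at exactly the branch that produced a decision and explain it to anyone in the business.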

There was an interesting story in the tech news a few years ago about companies being caught using what was effectively a Mechanical Turk approach while passing it off as an AI solution. A lot of the tasks that fit into these first two steps, like document classification or scanning, are actually really hard to do with AI. But hiring a bunch of people who can do them fairly quickly, with decent tooling, actually scales surprisingly well.

Heuristic-based decisions & human-led heuristic improvement

The next aspect we can look at is heuristic-based decisions. A heuristic is basically a rule of thumb, and we can program computers to use rules of thumb: the edge cases may break down analytically, but we have pretty good rules of thumb for the main body of the space we're looking at.

From there, and it depends to some extent on where you draw the line as to what counts as AI, once you're making heuristic-based decisions, you can look at how to improve those heuristics.

Some of that heuristic improvement can be human-led, and some can be machine-led, with the machine learning from itself. There's a very broad spectrum of approaches in between.
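One way to picture that spectrum: a rule of thumb encoded as a number, which a human can edit directly, or which the system can nudge automatically from observed outcomes. Everything here (the 25:1 staffing ratio, the update factors) is invented for illustration:

```python
# A heuristic plus two improvement paths: human-led (edit the ratio
# directly) or machine-led (nudge it from what actually happened).
class StaffingHeuristic:
    def __init__(self, customers_per_staff=25.0):
        self.customers_per_staff = customers_per_staff  # the rule of thumb

    def staff_needed(self, expected_customers):
        return max(1, round(expected_customers / self.customers_per_staff))

    def learn_from_outcome(self, expected_customers, staff_used, was_understaffed):
        """Machine-led improvement: adjust the ratio from observed outcomes."""
        observed = expected_customers / staff_used
        if was_understaffed:
            # We needed more staff than the rule suggested: lower the ratio.
            self.customers_per_staff *= 0.8
        else:
            # Drift slowly toward the ratio that actually worked.
            self.customers_per_staff += 0.1 * (observed - self.customers_per_staff)

h = StaffingHeuristic()
print(h.staff_needed(100))  # 4 under the initial 25:1 rule of thumb
h.learn_from_outcome(100, 4, was_understaffed=True)
print(h.staff_needed(100))  # ratio drops to 20:1, so now 5
```

The same class supports both ends of the spectrum: a manager overriding `customers_per_staff` by hand is human-led improvement, while `learn_from_outcome` is a (very crude) machine-led one.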

Self-iterating learning cycles

Then we can start to look at semi-supervised and unsupervised learning approaches. Here is where we can define our problem well enough to use techniques such as reinforcement learning and generative adversarial networks, which allow the computer to iterate with itself, at the speed a computer can iterate rather than the speed at which a human can help it. This is an interesting spectrum, and we get to decide where we want to sit on it.
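As a toy illustration of a self-iterating loop, here's an epsilon-greedy bandit. It's nowhere near the scale of the systems described above, but it shows the shape of machine-led iteration: the system improves its own estimates with no human in the loop. The payout rates are made up:

```python
import random

random.seed(0)

# Three options with hidden payout rates; the loop discovers the best
# one purely by acting, observing rewards, and updating its own estimates.
true_rates = {"A": 0.3, "B": 0.7, "C": 0.5}
estimates = {arm: 0.0 for arm in true_rates}
pulls = {arm: 0 for arm in true_rates}

for _ in range(5000):
    if random.random() < 0.1:                    # explore 10% of the time
        arm = random.choice(list(true_rates))
    else:                                        # otherwise exploit the best guess
        arm = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_rates[arm] else 0.0
    pulls[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # running mean

best = max(estimates, key=estimates.get)
print(best)
```

After a few thousand iterations the loop settles on the genuinely best option, and notice what's absent: nobody told it which arm was best, and nobody reviewed any intermediate decision.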

The key takeaway is that there are great examples of really solid, workable projects and products at all of these levels. It's not that the bottom of the list is better than the top; they're different approaches that need to be thought about independently.

So, what do we do about it?

I just wanted to end with a few recommendations and learnings that I've taken out of the last few years of working in this space.

  • One of them is to really understand, and be conscious of, the specific benefits and flaws of the tool we're using. This isn't just true of AI; with everything, really understand that it's a toolkit, and it has pros and cons.
  • The next is understanding that it's not a panacea: it won't solve all your problems, and there's a lot of really hard work to be done in this space.
  • The third is to think of a Venn diagram. One circle is the areas of a product that don't work well with traditional techniques; the other is the areas that do work well with, and are good candidates for, AI-driven approaches. Being in just one of those circles isn't a good reason to use AI. You want to find the areas of your product in the intersection of both circles; those are the ones where you can say "this is a good strategy we should employ."
  • The other recommendation concerns how we prioritize and engage the human users of the tools we're creating. It can't just be "here's a black box, it's going to do some stuff, good luck." We need to prioritize building explainability and understanding into the product. We can do that in a variety of ways: transparent scoring systems, look-ahead forecasts, progress dashboards, and so on. That's something we really need to think about carefully when we're building these kinds of products.
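As one sketch of what a "transparent scoring system" might look like, here's a linear score whose per-feature contributions can be surfaced to the user instead of a single opaque number. The features and weights are invented for illustration:

```python
# Transparent scoring: alongside the total, return each feature's
# signed contribution so the user can see *why* the score is what it is.
WEIGHTS = {"demand_forecast": 0.5, "staff_availability": 0.3, "cost": -0.2}

def score_with_explanation(features):
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"demand_forecast": 0.8, "staff_availability": 0.9, "cost": 0.5})
print(round(total, 2))  # 0.57
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")  # largest drivers first
```

The breakdown is what turns "the model says 0.57" into "demand is the main driver, cost pulls it down slightly", which is the kind of explanation users can actually argue with and correct.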

Thank you.