The concept of Artificial Intelligence has been around for over 70 years, yet it’s only burst onto the product scene in recent years. Long confined to the hulking killer robots of the big screen, AI still carries an aura of the unknown.
Far from dystopian futures, AI now plays a role in all our lives. From the Google Home you bought your mom for Christmas to the recommendations served up after each Netflix binge, AI has infiltrated our daily routines.
This sparks two big questions: why now, and what next?

Why now?
The answer to why AI and Machine Learning are only now being incorporated into products is two-pronged.
Firstly, both AI and ML require an unreasonable amount of data to prove worthwhile.
Data is being collected at a faster rate than ever, with most estimates indicating that 90% of all the data that currently exists was collected in the past two years.
According to Nodegraph, there are currently 4.6 billion people online, 5.1 billion mobile phone users, and 2 billion online shoppers.
That means a lot of data’s being collected from a lot of people.
To be more precise, in 2019 PwC estimated there were 4.4 zettabytes of data in the world. A zettabyte is the equivalent of 1,099,511,627,776 gigabytes.
The IDC predicts that by 2025 there will be 175 zettabytes of data in the world. That’s 192,414,534,860,800 gigabytes.

For context, that would take 1.8 billion years to download. It’s almost incomprehensible just how much data that is.
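If you want to sanity-check those figures yourself, here’s a quick back-of-the-envelope sketch in Python. The gigabyte conversion above uses binary prefixes (1 ZB ≈ 2^40 GB), and the ~25 Mbps download speed is purely our assumption for illustration, not something from the IDC report.

```python
# Rough sanity check of the data-volume figures above.
# The ~25 Mbps broadband speed is our assumption, not IDC's.

GB_PER_ZB_BINARY = 2 ** 40      # 1,099,511,627,776 -- the binary-prefix conversion used above
ZB_IN_BYTES = 10 ** 21          # standard (decimal) definition of a zettabyte
total_zb = 175                  # IDC's forecast for 2025

print(f"{total_zb} ZB ~= {total_zb * GB_PER_ZB_BINARY:,} GB")   # 192,414,534,860,800 GB

# How long would 175 ZB take to download at ~25 Mbps?
bits = total_zb * ZB_IN_BYTES * 8
seconds = bits / 25e6
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years / 1e9:.1f} billion years to download")          # roughly 1.8 billion years
```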
If your brain doesn’t hurt too much already, we’ll carry on. Not only is there now an abundance of data for ML and AI to process, there’s now also the computing power to do so.
The first general-purpose electronic computer, ENIAC, could get through around 5,000 additions a second back in 1946. Sounds impressive, right?
IBM’s Watson, the computer that competed on Jeopardy! in 2011 (and won), could crunch the equivalent of a million books per second. According to TOP500, which ranks the most powerful computers in the world, China’s Tianhe-2A can reach 61.4 petaflops.
A petaflop is a unit of measurement that equates to a quadrillion floating-point operations per second. And the Tianhe-2A can do 61.4 of those!
That machine currently ranks fourth among the most powerful supercomputers in the world.
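To put those two machines side by side, here’s a rough comparison. Treating ENIAC’s additions and Tianhe-2A’s floating-point operations as directly comparable is a simplification, but it gives a sense of the scale.

```python
# How long would ENIAC need to match one second of Tianhe-2A's work?
# Treating additions per second and floating-point operations per second
# as interchangeable is a simplification, used here purely for scale.

eniac_ops_per_sec = 5_000
tianhe_flops = 61.4e15          # 61.4 petaflops

seconds = tianhe_flops / eniac_ops_per_sec
years = seconds / (60 * 60 * 24 * 365)
print(f"ENIAC would need ~{years:,.0f} years")   # roughly 390,000 years
```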
To briefly summarize the answer to “why now?”: basically, it’s because we can!
What’s next?
With great power, comes great responsibility.
That’s right, collecting and handling this data has brought with it some incredible advances, but also some big issues, or as we like to call it, The Jurassic Park Dilemma.
Put as beautifully as it could ever be, Jeff Goldblum’s accusation that “your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should” has rung true for those dealing with data.

From political problems encapsulated by the Cambridge Analytica scandal, to Facebook’s covert collection of data being ruled illegal, the world has woken up to a new reality.
Every decision you make online leaves a footprint.
Every time you pick up your phone, turn on the TV, or make a contactless purchase in a store, you’re being harvested.
It’s impossible to make it sound anything other than sinister, but companies just love you so much they really do want to get to know you - every single thing about you, in fact.
AI already had a bad enough reputation as the big bad robot that was going to take over the world. Now it also has to deal with consumers feeling their privacy has been violated. And it’s not only the robotic element that raises trust issues; as with anything made by humans, there’s also the human element.
Several cases have emerged of data sets containing inherent racial biases, with the algorithms in AI products built on them reflecting those prejudices. Try a quick Google image search for “healthy skin”, for example: the results are predominantly images of light-skinned women.
Clearly algorithms can generate results for flawed reasons, primarily because ML models can reflect the inherent, often unconscious biases of the humans behind them.
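To make that concrete, here’s a deliberately over-simplified sketch. The data is entirely made up, and the “model” is just naive sampling rather than a real image system, but it shows how a skew in the training set flows straight through to the results users see.

```python
# Toy illustration: a system that simply returns typical examples from its
# training data will reproduce whatever skew existed in that data.
import random
from collections import Counter

random.seed(0)

# Hypothetical training set: 90% of the "healthy skin" examples feature light skin.
training_images = ["light-skinned"] * 90 + ["dark-skinned"] * 10

def top_results(pool, k=10):
    """Naive retrieval: sample results from what the system has seen."""
    return random.sample(pool, k)

print(Counter(top_results(training_images)))   # the skew shows up directly in the results
```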
As AI gets more embedded in everything, from how data is collected to how data sets are defined, the issue of trust is… well… a really big issue!
AI leaders know consumers need to understand why they should give information to an app, and whether the benefit of doing so outweighs the perceived cost. This has led to changes in the laws governing data collection, such as the EU’s GDPR, which came into force in 2018.
For product managers, it has led to a trust problem. Building trust with consumers in the age of AI rests on transparency.
AI… explain yourself
It’s certainly clear that consumers need to know how AI systems arrive at their conclusions and recommendations in order to trust those decisions. That makes it essential for PMs working with AI to focus on clear, concise explanations of how artificial intelligence is used within their products.
If you’d like to get a greater understanding of AI, and you’re seeking all the info and skills you need to boost your Product career, check out our AI Product Manager Accelerator Program. The AIPMA course is designed from the ground up to equip you with the modern tools that are crucial for any PM of the AI era.
When an AI system can better explain why it’s recommending something, consumers will gain more insight into its decision-making. This will lead to greater trust, along with better customer retention and overall satisfaction.
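What might that look like in practice? Here’s a minimal sketch, with a made-up catalogue and a deliberately simple overlap-based recommender, showing the idea of surfacing the “why” alongside the “what”.

```python
# Minimal sketch of an "explainable" recommendation: return not just the item,
# but the attributes it shares with what the user already liked.
# The catalogue and its attributes are entirely made up for illustration.

catalogue = {
    "Show A": {"sci-fi", "slow-burn", "ensemble cast"},
    "Show B": {"sci-fi", "comedy"},
    "Show C": {"documentary", "nature"},
}

def recommend_with_reason(liked_title):
    liked_attrs = catalogue[liked_title]
    # Score every other title by how many attributes it shares with the liked one.
    scored = [
        (title, attrs & liked_attrs)
        for title, attrs in catalogue.items()
        if title != liked_title
    ]
    title, shared = max(scored, key=lambda pair: len(pair[1]))
    return f"Recommended {title} because, like {liked_title}, it is: {', '.join(sorted(shared))}"

print(recommend_with_reason("Show A"))
# e.g. "Recommended Show B because, like Show A, it is: sci-fi"
```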
We all want to know what an AI system is doing when it interacts with us, right?
What info is it collecting? Why does it want to know our location? (A bit creepy.) And what are the benefits of sharing this info anyway?
Granted, there’s likely always going to be a tradeoff between utility and privacy, but creating more transparency around how information is used, while giving consumers more control and choice, is a key challenge.
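One concrete way product teams approach that control-and-choice challenge is to make each data use an explicit, per-purpose opt-in rather than a blanket permission. A hypothetical sketch follows; the purposes, wording, and defaults are ours, not taken from any particular product.

```python
# Hypothetical per-purpose consent model: each piece of data the product wants
# is declared with the reason it's collected and an explicit, user-controlled opt-in.
from dataclasses import dataclass

@dataclass
class DataPurpose:
    data_collected: str
    why_we_want_it: str            # the plain-language explanation shown to the user
    user_opted_in: bool = False    # nothing is collected without an explicit "yes"

consents = [
    DataPurpose("location", "to show delivery options near you"),
    DataPurpose("viewing history", "to personalise recommendations"),
    DataPurpose("contacts", "to help you find friends on the platform"),
]

def allowed(purpose_name: str) -> bool:
    return any(c.data_collected == purpose_name and c.user_opted_in for c in consents)

consents[0].user_opted_in = True                  # the user switches location sharing on
print(allowed("location"), allowed("contacts"))   # True False
```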
When it comes to data, it appears further progress must be made on making data more available while keeping privacy and potential risk at the forefront. The role of AI decision-making in products will need to be constantly evaluated.
AI Product Managers of course need to be focused on the customer, but they also need to be focused on the ethics of AI and the development of transparency. The challenges will be asking the right questions, gathering the right data, collaborating within cross-functional teams, and listening to and learning from each other in the right way.
Ever-increasing expectations
The role of the product manager is fluid enough as it is. And as expectations keep increasing, it’s essential that product managers continue to develop a fundamental understanding of how to correctly leverage AI and ML.
True, as both expectations and regulations change over the coming years, uncertainty will undoubtedly still surround many AI initiatives, which is why the ability to keep learning and make incremental improvements is critical going forward.
But hey, challenges are meant to be conquered.
Issues of trust and restrictions on how data is stored and collected can certainly slow the implementation of AI and Machine Learning in product management. But the need for product managers to embrace the technology, develop alternative solutions, and lead the product charge to overcome these challenges will only continue to grow.
All to get to a place where AI is used to really benefit consumers and not just the bottom line.
Look out for part 2, where we’ll be looking at great examples of how PMs are overcoming the challenges and embracing the best AI has to offer!
