Rahul Agarwal, Product Manager at Hasura, gave this presentation at the Product-Led Festival, where he spoke about the role of ethics in product-led growth.

Read the highlights below, or watch Rahul’s full presentation on-demand.

Hi, everyone. I'm excited to talk to you about the role of ethics in product-led growth today.

In this session, I'll follow a typical structure for thinking about ethics: What does ethics mean in the modern world? And how can we be both effective and ethical while implementing our product-led growth strategies?

But before we start, I want to address why I'm qualified to talk on this topic.

The simple answer is that I've worked on technology platforms throughout my career. I've worked on e-commerce platforms and B2C on-demand platforms in India and Southeast Asia, and for the past four or five years, I've worked on B2B platform-as-a-service software, including API management platforms at Boomi. Currently, I'm working on the GraphQL Engine at Hasura.

From my experience working with these growing startups and delivering product-led growth, I'll share some of the lessons I've learned along the way about how to think about ethics.

Ethical issues behind commercial technology

So let's look into what’s happening in the market. I think this will help us ground ourselves in understanding why we need to talk about ethics in the first place.

Product-led growth is everywhere, and companies such as DoorDash, Amazon, and Facebook are great examples of companies executing product-led growth. They began as small startups with a very specific use case for a specific set of users. They executed this very well and listened to the users, and finally became these large companies that all of you use today.

But consequences and ethics have been pushed to the side throughout their product-led growth journeys.

We see problems with Facebook around data privacy and addiction for teenagers, especially on Instagram.

We see how bad working conditions are at Amazon, and how the promise of one-day delivery is good for consumers, but not good for the employees on the other end of the spectrum.

We have DoorDash, which is an excellent product-led growth company. I’d say they were the poster child of product-led growth for a while. But for a period of time, they were keeping tips for themselves and not giving them to their drivers.

Uber, one of our favorite companies, broke laws in dozens of countries and misled people about the benefits of the gig worker model for drivers. Yes, you're getting a cab in five minutes, but imagine the implications on the driver's side.

So we have a spectrum of companies here. But you can see there's a problem behind using technology to deliver solutions to consumers and businesses.

Ethical blind spots in AI

The emergence of AI and machine learning is almost pervasive in everything we touch and consume on the internet, in our homes, and in our daily lives.

Let's think about the state of AI right now. There are many blind spots with AI, and if the majority of the world in the coming years uses pre-built models, this poses a huge risk.

Some of the blind spots in AI include:

  • The lack of transparency in AI systems.
  • The fairness of the algorithms backing the entire machine learning experience.
  • The dependence on humans for artificial intelligence to be effective.
  • Models tailored only to humans, even though our lives are surrounded by other animals as well.

To see this in practice, let's look at a beloved tech company, Google. Below is a controversial result from Google's open-source BERT model, where a very typical question is asked about where a man works and where a woman works. You get some very questionable gender stereotypes.

So if this is the state of AI, and this is where all of our products are going to use AI behind the scenes, there’s a problem. And this is why we need to start thinking about ethics as product managers, software engineers, and leaders in the technology world.

The problem of accountability

Very simply put, accountability is missing. People who speak up are not tolerated in big tech anymore. And it's not a one-off thing. It happened last year when an AI researcher was fired after carrying out critical work at Google. She went on to join Hugging Face, where she works on fairness and ethics in AI.

But that's not an isolated instance. We have other Google engineers quitting the company over their treatment of the AI researcher, and very recently, another researcher left Google to save AI’s future.

So you can see that there’s an inherent problem with how we’re using technology, and accountability is also missing.

Defining ethics

This brings me to the 'what.' Up to now, we've been talking about the 'why,' and I think we're on the same page about why we need to think about ethics in the first place.

This is the traditional meaning of ethics that I borrowed from Britannica.

The important things to focus on in this definition are what is considered morally good and bad, and what is considered morally right and wrong. Ethics defines a set of principles against which you judge actions on the spectrum from good to bad and from right to wrong.