Rahul Agarwal, Product Manager at Hasura, gave this presentation at the Product-Led Festival.

Read the highlights below, or watch Rahul’s full presentation on-demand.

Hi, everyone. I'm excited to talk to you about the role of ethics in product-led growth today.

In this session, I'll follow the typical structure for thinking about ethics: what does ethics mean in the modern world, and how can we be both effective and ethical while implementing our product-led growth strategies?

But before we start, I want to address why I’m qualified to talk on this topic and how I came to it.

The simple answer is that I’ve worked on technology platforms throughout my career. I’ve worked on e-commerce platforms and B2C on-demand platforms in India and Southeast Asia, and for the past four or five years, I’ve worked on B2B platform-as-a-service software, including API management platforms at Boomi. Now, I’m working on the GraphQL Engine at Hasura.

From my experience working with these growing startups and delivering product-led growth, I’ll share some of the lessons I've learned along the way about how to think about ethics.

Ethical issues behind commercial technology

So let's look into what’s happening in the market. I think this will help us ground ourselves in understanding why we need to talk about ethics in the first place.

Product-led growth is everywhere, and companies such as DoorDash, Amazon, and Facebook are great examples of companies executing product-led growth. They began as small startups with a very specific use case for a specific set of users. They executed this very well and listened to the users, and finally became these large companies that all of you use today.

But throughout their product-led growth journeys, ethics and its consequences have been pushed to the side.

We see problems with Facebook around data privacy and addiction for teenagers, especially on Instagram.

We see how bad working conditions are at Amazon, and how the promise of one-day delivery is good for consumers, but not good for the employees on the other end of the spectrum.

We have DoorDash, which is an excellent product-led growth company. I’d say they were the poster child of product-led growth for a while. But for a period of time, they were keeping tips for themselves and not giving them to their drivers.

Uber, one of our favorite companies, broke laws in dozens of countries and misled people about the benefits the gig worker model offers drivers. Yes, you're getting a cab in five minutes, but imagine the implications on the driver's side.

So we have a spectrum of companies here. But you can see there's a problem behind using technology to deliver solutions to consumers and businesses.

Ethical blind spots in AI

The emergence of AI and machine learning is almost pervasive in everything we touch and consume on the internet, in our homes, and in our daily lives.

Let's think about the state of AI right now. There are many blind spots with AI, and if the majority of the world in the coming years uses pre-built models, this poses a huge risk.

Some of the blind spots in AI include:

  • The lack of transparency of AI systems.
  • The fairness of the algorithms backing the entire machine learning experience.
  • The dependence on humans for artificial intelligence to be effective.
  • Models tailored only to humans, even though our lives are surrounded by other animals as well.

And to see this in practice, let’s look at a beloved tech company, Google. Below is a controversial result from Google’s open-source BERT model: when you ask a very typical fill-in-the-blank question about where a man works versus where a woman works, you get some very questionable gender stereotypes.

So if this is the state of AI, and this is where all of our products are going to use AI behind the scenes, there’s a problem. And this is why we need to start thinking about ethics as product managers, software engineers, and leaders in the technology world.

The problem of accountability

Very simply put, accountability is missing. People who speak up are no longer tolerated in big tech, and it's not a one-off thing. It happened last year, when an AI researcher was fired after carrying out research critical of Google. She went on to join Hugging Face, where she works on ethical fairness and applying AI with a fair lens.

But that's not an isolated instance. We have other Google engineers quitting the company over their treatment of the AI researcher, and very recently, another researcher left Google to save AI’s future.

So you can see that there’s an inherent problem with how we’re using technology, and accountability is also missing.

Defining ethics

This brings me to the ‘what.’ So far, we’ve been talking about the ‘why,’ and I think we're on the same page about why we need to think about ethics in the first place.

This is the traditional meaning of ethics that I borrowed from Britannica.

The important things to focus on in this definition are what is considered morally good and bad, and what’s considered morally right and wrong. Ethics defines a set of principles by which you judge things on the spectrum from good to bad and from right to wrong.

So what do we do now? We know why we need to think about ethics and what the definition of ethics is, but how should we go about being more ethical as people who build technology products? If product-led growth is here to stay, how do we navigate this new world?

3 focus areas for establishing an ethical company

Here’s a framework to think about. I’ll go into all of these pillars in-depth and give you a mental model on how to be ethical by keeping these three things in mind: customer, usage, and employees.

Customer

Let's first look at the customer as the number one pillar of how to be ethical. The main point here is that being a product-led company helps you adhere to ethics.

Usage matters most, and that principle is what makes product-led growth itself ethical. It’s tough to hold on to the idea that usage matters most and that we’re building technology products for the user.

The essence of product-led growth is all about making the user happy and helping the user function better in their lives.

Being the user in the room is tough because engineering wants to build what they like, and leaders have their own opinions. How can you be data-oriented when building something new if you don't have data? So usually, when a feature is getting developed, either you just agree with what the leaders are saying, or you go along with what engineering wants to build.

But as product leaders, we really need to be the user in the room. And just being the user in the room and focusing on usage is ethical enough to start the journey.

Second is the revenue side. Building shelfware that no one uses isn’t ethical, because you’re bringing talented people together, in a room or remotely, and making them work for hours on something that never gets used, just because sales and marketing dictated what to build and no one listened to users in the process.

So these are the two sides of the customer pillar. Being the user in the room really helps you be ethical. And yes, take sales and marketing as a data point, but creating shelfware, something that isn’t creating value for society, is unethical in itself.

If you’re implementing product-led growth and you’re following good principles around user empathy and user journeys, then I think you're on the right path.

We can’t jump straight to the usage pillar, say we’re doing a good job there, and decide the rest doesn't matter. That's why I've presented this framework sequentially: you go from customer to usage, and then you finally make sure that you take care of the employees as well.

Factoring modern slavery into the definition of ethics

Before we look into the other two layers, I want to provide some more research on another aspect that isn’t brought to attention a lot. And in going into that aspect, I think the usage part will become very clear.

This particular research is that slavery as an industry is still dominant throughout the world. Below is a Global Modern Slavery Index map. The darker countries on this map have more problems with modern slavery than the lighter countries. But you can see that there isn’t really any white on the map.

So I'm saying that there’s more information now by default, and everyone is informed. The definition of slavery has also changed, and more visibility into how slavery is still prevalent will help us as we build technology products.

Look at DoorDash or Uber and how they provide value to consumers while taking value from other human beings, whether they’re delivery people or drivers.

The Global Modern Slavery Index measures four things: basic needs, inequality, disenfranchisement, and effects of conflict and government issues.

So if modern slavery is still prevalent and we’re somehow still leveraging it to build products, I think it's time for us to redefine ethics. We have increased knowledge of how the world works, and technology maker tools are now widely available. Updating the definition of ethics makes it more practical: it catches the eye, and it becomes a priority.

With technology, consequences are much more significant, and we need to make sure that the democratizing of technology doesn’t happen at the expense of human life.

So that gives us an updated definition of ethics.

Now, if you look through this lens while building products, you won’t think of ethics as a secondary activity or something to be considered afterward. If you start reading about ethics through this lens, it becomes more prevalent and hopefully more practical as you build products.

So with that, let's look at how checks and balances will help you with the second pillar of the framework, which is usage. And there are five aspects to it.

Usage

Now, usage can lead to addiction, as you can see with social media. Usage can also hinder privacy: you’re creating growth at the expense of users’ privacy. Usage also depends on responsibility and how the product is being used. Is it being used to harm other people?

Accessibility also matters. Is the product or solution only accessible to a few people in the world? Or is it highly accessible and really democratizing technology?

And is it sustainable in gaining market share or share of voice in the market? Are you really thinking about how it impacts the environment and nature itself?

So these are the five pillars around usage, and I’ll deep dive into a couple of them just to give you a flavor of how to think about practicing these.

Addiction and privacy are ethical choices that you, as a product leader, need to make early in the product development cycle. They’re a matter of design and features, not bugs.

With addiction, I’d say we’ve all seen the model of giving users certain rewards so that they come back to the platform. I’d also say it's a dark pattern that tobacco and cola companies have used going back 50 years. And now we’re using the same flywheel model on our software products.

Just because it’s a new way of making products doesn't mean we’re not making the same mistakes we made half a century ago. So take a step back and ask whether this model is actually improving your users’ lives or making them addicted to something they can’t get rid of later.

And secondly, privacy needs to be proactive, user-centric, and transparent by design. If you have five features to prioritize and one of them is privacy, I’d say it's not a prioritization problem at all because privacy needs to be part of the feature set.

And if you're weighing privacy against other features, that’s where the problem lies. Privacy is inherently part of the functional aspect of the product. So when you say the product is done and development is done, privacy should already be baked in.

So going back to the checks and balances, when you're thinking about the roadmap or building a feature, think about these different parameters and think about privacy. What are the different aspects of my product which could leak user data? And how should I prevent it? Should I do it right now or should I wait for a user to file a complaint and then look into it?

So these are some of the checks and balances that are very important for product development.

With this in mind, I want to share some more research from the market and how to make it actionable within your teams.

So given that the focus on ethics is relatively new, we have certain modern companies, such as Hugging Face, that are building their own ethical charters as they work on models that will touch almost every aspect of our lives.

You have the ‘values for the project’ at Hugging Face, which is an ethical charter they came up with.

We also have the Institute for Ethical AI and Machine Learning. They’ve published principles that you can borrow, or learn from, to incorporate into your product development lifecycle.

Then there are Ethical Principles for Web Machine Learning by W3.org as well.

So, I presented you with a problem, and I presented you with a framework on how to think about this problem. And these are tangible changes you can make at your organization right now.

Maybe you could collaborate with your team to come up with ethical principles which make sense for your market and your product. Or you could borrow or get inspiration from some of the work done by organizations that are already in the market.

Employees

So, with that, I come to the third pillar of how to be ethical in the product-led growth cycle.

We talked about the customer and usage, but the foundation layer, I’d say, is employees. And I say that because personal commitment is essential to tackle such a difficult and complex topic.

At large enterprises, we have ethics training, usually whenever the fiscal year starts. There’s a big mandatory checklist for the entire org, so we do that 30-minute or one-hour ethics module, and then it’s done for the year. We never think about it again.

What’s the other alternative? Companies hire a Chief Ethics Officer or an ethics committee.

At the end of the day, these things won’t make a difference, because ethics is closely tied to personal commitment.

And there are two facets, which I've presented below: personal character and culture.

As leaders, you can make sure that you're selecting people whose character reflects some of these fundamental values, and you definitely have a lot of control over the culture that you create and drive in your organization.

So under each of these are further tenets. Under character, you have empathy, morality, courage, and purpose. So look at those values and make sure that you're cultivating such values within your own team, or you're selecting people who show such values or think that they're important.

And on the cultural side, these are some of the things you can do to drive culture. Having the right mission. Why do we exist as a company? Why are we building this product? Why do we need to code at all? What’s the larger mission?

Trust your employees to make decisions, and don't have a political, top-down culture. If you're not trusting the people on the ground who are actually building the technology, how can they have a personal commitment to providing ethical value to society?

Org structure also really helps in culture. Are there siloed departments in your organization that don't talk to each other? That’s another ingredient for a bad culture and eventually unethical products out in the market.

Diversity and mental health are two aspects that really help in defining a great culture. And I believe that culture is very hard to sustain but also very easy to break.

Making sure that there are different opinions, backgrounds, and ways of thinking will really help you bring ethics into your organization by design.

The companies that I've worked with, both in the past and currently, really value diversity, and you can see that shine through in the product quality.

And then we come to mental health, or work-life balance. At the end of the day, we’re knowledge workers. Time management and mental health are important so that we can think clearly and stay present, which matters when making these hard decisions and prioritization choices while building the product roadmap.