In my experience, the difference between traditional product management and growth product management is that the latter encounters more hurdles when it comes to doing the job effectively, for a number of reasons.
Throughout my career, I’ve developed several tactics to overcome these hurdles - around having maximum impact, finding opportunities, and launching and measuring success. In this article, I’ll dive into them in more detail and share my key tactics for success as a growth PM.
I'm Axel and I work as a Growth Product Manager at Shopify. I've named this article "tactics for growth product management" for lack of a better name, but that might set the wrong expectations - what exactly is a tactic?
What I want to share is more about the process of running your team, rather than growth hacks and things you can implement directly in your product.
I want to share a compilation of things and tools that I've used in my role to make my team work more efficiently and grow as a Growth Product Manager. Let's get started.
I’m going to touch first on the difference between traditional product management and growth product management. I'm not sure that's clear to everyone - it's definitely not clear to me, as you're going to see in a second.
From this difference, I want to highlight that in some aspects it's more difficult for a growth product manager to do their job than for a traditional product manager, and I'm going to show why.
The things that I think are more difficult for a growth product manager are:
- Unlocking resources,
- Defining output and timelines,
- Finding opportunities, and
- Launching and measuring.
I want to share some of the things I've found useful in my work to execute on those things.
Growth PM vs. traditional PM
Let's start with the difference between a traditional and a growth product manager. I found this definition in a few articles; I'm not sure where it comes from or who originated it, but I've seen it in a few different places.
I think it's interesting, but also limited in a way. Basically, it says that traditional product management is creating or improving the value provided to users, while growth product management is connecting more people to the existing value of the product.
Why do I think that's interesting? From this definition, you can understand what a growth product management team is going to work on: not building the core product, but rather optimizing the core product and creating new distribution channels for it.
It's useful because you can actually understand what your scope is, what you have to work on as a growth product manager.
I think it's limited because it's very binary. Take an early-stage startup as an example: you're building your core product, but at the same time you need to connect users to your product to create growth. You need a mix of both.
Plot the difference
Another way to look at this difference is to plot with two dimensions the projects or the things that you might have in your company - the projects that product is working on - on two axes.
On one axis, you have growth - what's related to bringing in more customers using your product. It tends to be a shorter-term, very fast-paced approach, where you need to be experimenting and reacting to the market very quickly.
On the other axis, you put the things that are more on the infrastructure/maintenance side, which are typically longer term - you want to build for the long run.
Then you plot a few of your projects. On the extreme growth side, with no maintenance/infrastructure, you have top-of-the-funnel expansion - typically what marketing does - and onboarding conversion optimization. On the other extreme, high infrastructure/maintenance and not really related to growth, you have things like production engineering, working on bugs, or even redesigns.
Then you draw the scope of what product management is doing in your company - or, if you have a large company, the scope of one team. For most teams, it's actually somewhere in the middle.
It makes you realize that every product team - or at least most of them - has to work on growth. Therefore, in theory, there's not much difference between growth product management and traditional product management; it's just a matter of using growth tactics on the projects where they're relevant.
The challenge for growth PMs
Despite this, what I've found is that it's usually easier for traditional product managers to define their work, to build a team, get resources, and to define output and timelines. Why?
Because very often, when a company creates a product team or hires a product manager, they either have a feature that needs to be taken care of - what's often called a feature team, where you have a defined scope and you're going to take care of that feature - or there's a new project.
There's a new strategic project to build a new feature or set of features, and that's what creates the team. The company or leadership assembles a team and gives it the mission to build that vision, that set of features, or that feature, so the output and timelines are easier to define.
"Okay, in six months, we need to be there and that is pretty much the feature you're gonna have to build".
For growth, it's much blurrier - what growth is, what features you should work on. You very often don't have a scope of features; you're even going into the scope of other teams to implement things.
It makes it very difficult to do the following things in my experience:
- Unlocking resources,
- Growing your team,
- Scaling your team, and
- Defining the output - what should your team be doing and when?
Impact = T.A.X.S
This is a framework or a formula that I've found useful for me when it comes to how to get resources. You know that if you want to get more resources, you have to show impact, and you have to show your leadership that if they give you more resources, you can bring more impact.
But the impact is very difficult to define. It's very vague, what is impact?
It's not really actionable when you say 'I want to bring impact', you don't really know what to do. The objective of this formula is to split it into different components, and some components are going to be more actionable for you.
It reads "impact equals TAXS", but it has nothing to do with taxes - it's just a way to memorize it. Personally, when my financial impact goes up, my taxes also go up, especially in my home country of France. But that was just for the joke.
What does T.A.X.S mean?
It means team, average impact, experiment cadence, and success rate.
The goal of the formula is to split impact into components that are more actionable.
Team & average impact
First of all, you're going to have the team, which is your team size. This is something where you want to go to your leadership and say, "Look, if you give me more resources, if you add people to the team, this is what's gonna happen and this is the effect on the bottom-line impact."
The other two components that you, as a team, have a very strong effect on are the experiment cadence and the success rate.
The experiment cadence is basically the number of experiments you're going to be able to run and the pace at which you're going to be able to do that. With this, you can directly go to your team and say, "Look, if we want to have this impact this quarter, we need to roll out three experiments, or we need to do one experiment per week."
This sets a productive environment where your team knows that every two weeks you want to ship something. It's motivating.
The other component you have leverage on is the success rate. The success rate is basically: how well are you crafting your hypotheses? How much evidence are you putting into them?
How many good ideas you have, and how well you craft your experiments, determines how successful they'll be. You as a product manager, your designer, your data scientist, your engineers - you all have a direct impact on that.
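As a toy illustration, the formula can be sketched in a few lines of Python. The numbers and the per-person reading of cadence are my own assumptions, not part of the framework:

```python
def projected_impact(team_size, avg_impact_per_win, experiments_per_person, success_rate):
    """Impact = Team x Average impact x eXperiment cadence x Success rate.

    Assumes cadence is experiments per person per quarter, and that
    avg_impact_per_win is the average lift of a *winning* experiment.
    """
    experiments = team_size * experiments_per_person  # total experiments run
    wins = experiments * success_rate                 # expected winners
    return wins * avg_impact_per_win                  # expected quarterly impact

# Hypothetical numbers: 4 people, 3 experiments each per quarter,
# 25% of experiments win, each win adds 0.5% to conversion.
lift = projected_impact(4, 0.5, 3, 0.25)  # 1.5 (percentage points per quarter)
```

The point is not the arithmetic but that each factor is a separate lever: headcount is a leadership conversation, while cadence and success rate are conversations with your own team.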
Once you have more resources, you've managed to convince people with this framework that you can have more impact, or your team is motivated to roll out a certain number of experiments, you need to find those opportunities.
That is also a bit more difficult for growth product managers in general, I find, because the scope is so large, and because the nature of growth is to seize opportunities and low-hanging fruit that might not be covered by the company's overall strategy.
Draw your product’s growth levers
What I've found useful for finding those opportunities is to start from first principles and draw your product's growth levers.
What are the basic things or levers that if they grow, your product is going to grow?
Take a marketplace product as an example.
If you add more supply, more products to sell, your marketplace is bigger. If you bring more traffic, the marketplace is getting bigger. If you increase your conversion, the marketplace grows, and so on.
Once you have this list, you go to each of those levers and ask yourself, "What can I do to grow the supply? What can I do to grow the traffic?" And so on.
This is pretty much a collaborative process so you want to involve your stakeholders and your team in coming up with the ideas in this framework.
Drill down and list all growth drivers
For example, if you take traffic, you get more and more granular and list the channels and product groups - or in this case, since it's traffic, the marketing channels you currently have.
You identify gaps or some things that may not exist at this point in your marketing mix, and list all that, and then go to more and more granular and list the marketing campaigns or activities that you're doing. This is going to give you ideas of experiments to run.
Again, this is a collaborative process - you want to involve everyone that has creative input in your company, your stakeholders, your team as well.
You end up with a large map like this of opportunities with a lot of experiments that you can run.
I know you can't read anything on the image - that’s on purpose. But you can use this map again to communicate with your leadership and say "Here is what we can do, here are the different areas and different experiments we can do, what do you want us to focus on?".
From here, the goal with each of those opportunities is to go and gather evidence, gather data around them, and understand what is the best opportunity to tackle.
This is great because you can keep this for a long time. Even if in one quarter you're gonna work on acquisition, you can keep it for later when you might have to work on activation or retention, and so on.
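If the whiteboard version of this map gets unwieldy, one lightweight way to keep the same lever → driver → experiment tree is as plain data. The levers and ideas below are made-up examples for a marketplace, not from the article:

```python
# Hypothetical lever -> driver -> experiment ideas for a marketplace.
growth_map = {
    "traffic": {
        "seo": ["landing pages per category", "internal linking"],
        "paid": ["new ad creative test"],
    },
    "supply": {
        "seller onboarding": ["simplify signup form"],
    },
}

def backlog(growth_map):
    """Flatten the map into (lever, driver, experiment) backlog entries."""
    return [
        (lever, driver, idea)
        for lever, drivers in growth_map.items()
        for driver, ideas in drivers.items()
        for idea in ideas
    ]

ideas = backlog(growth_map)  # 4 experiment ideas, each traceable to a lever
```

Keeping the tree explicit means every experiment in the backlog traces back to the lever it's supposed to move, which makes the "what do you want us to focus on?" conversation with leadership concrete.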
You have a backlog of ideas, it's great, you have your team, they're motivated to launch experiments, now you want to actually build your experiments.
I've found a couple of challenges here, too.
Here's what you generally want to do when you launch things, especially small feature experiments: understand very, very quickly whether you want to iterate on the experiment or stop it.
For that, you want to optimize two things: launching as fast as possible, and measuring as fast as possible. This is challenging.
Launching is challenging because you might be in the scope of other teams - you have to negotiate, you have to explain why you want to run these experiments, and very often you'll face objections and hurdles along the way.
I've found a few things that helped me to make that launch as fast as possible, solve as many questions as fast as possible, and go live quickly.
Tactics for launching fast
One is to make sure that, given your hypothesis and your experiment, you define your target audience and can actually target that audience very precisely.
You want to make sure that when you launch experiments, you're not just blasting to everyone, but you're targeting exactly the audience that you want to be targeting. If you do so, then there'll be fewer questions around saying 'why are we going to show this to this audience or that?'
If this is clearly defined and clearly targeted, that helps you to launch faster.
If you can't target your audience, you may as well not run the experiment, because the results are going to be polluted.
Max exposure within audience
Once you've targeted your audience, you want to make sure you get maximum exposure within it. Your feature - your experiment - has to be visible, because you can't afford to be left wondering whether users didn't like the feature, or simply didn't see it.
You want to remove that doubt; you want to make sure everyone has seen it. Then, if you get bad results, you know it's because the feature is not good.
Silent launch & sunset is possible
That means you want to make sure, before you launch, that removing the feature is going to be possible and easy, and that it won't hurt the overall user experience.
You also want to make sure you're not promoting your feature in a way that prevents you from understanding its true impact. A silent launch means you only target that specific audience - you don't blast it out and promote it to people it's not relevant to.
But as soon as you've defined those things, for me I find it much easier to actually roll out the experiment faster.
Measuring is also a challenge, because if you run experiments, people will tell you that you can't look at the results.
If it's an A/B test, for example, you can't look before two weeks; and if you're doing something on retention, you need to wait until your cohorts are retained - you need to wait weeks.
But you just can't wait weeks, right? You want to understand after one day: is this working or not? Can I be excited? And you just can't look at the A/B test dashboard, or whatever dashboard you've prepared, because it's telling you the wrong things - the results aren't statistically significant yet.
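To see why a day-one peek at the dashboard misleads, it helps to estimate how many users a conversion A/B test actually needs. This sketch uses the standard normal-approximation sample-size formula; the baseline and uplift numbers are made up for illustration:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_base, p_target, alpha=0.05, power=0.8):
    """Users needed per variant to detect a shift from p_base to p_target
    with a two-sided test (normal approximation to two proportions)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for significance level
    z_beta = z.inv_cdf(power)           # critical value for desired power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2)

# Detecting a 10% -> 12% conversion lift needs a few thousand users
# per arm - which is why a one-day readout is usually just noise.
n = sample_size_per_arm(0.10, 0.12)
```

Until each arm has seen roughly that many users, the dashboard's difference is dominated by sampling noise, which is exactly why the early indicators below are useful in the meantime.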
Track early indicators
What I've found useful for tracking very early indicators is these kinds of rules of thumb.
3 out of 5 users succeed
For example, if you do session recordings and you see that three or more users out of five are actually succeeding at the task you defined, that's a good sign.
More than 10% usage
On the contrary, if less than 10% of the target audience is actually using your feature, you should probably kill it straight away. There's a chance it's just polluting the UI and adding complexity to the code that you don't need. You might as well stop and test something new.
More than 50% CTR
If you have something that is very visible, very exposed, like a pop-up, or a banner or something like this, I usually go for a rule of thumb of a 50% click-through rate.
If you're blocking the UI to show something and you get less than a 50% click-through rate, that's not good - you're probably doing something wrong.
If you have more it's a positive sign, you can at least continue and let the experiment run and then see your results on the dashboard.
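These rules of thumb are easy to encode so the whole team applies the same bar on day one. The thresholds below are the heuristics from above - they are not statistical tests - and the function shape is my own sketch:

```python
def early_signals(successes, sessions_watched, usage_rate, ctr=None):
    """Rule-of-thumb verdicts on day-one experiment data.

    usage_rate: share of the *target* audience using the feature.
    ctr: click-through rate, only for blocking UI (pop-up, banner).
    """
    signals = []
    # 3 out of 5 users succeeding at the task is a good sign.
    if sessions_watched >= 5 and successes / sessions_watched >= 3 / 5:
        signals.append("task success: good sign, keep going")
    # Under 10% usage of the target audience: probably kill it.
    if usage_rate < 0.10:
        signals.append("usage under 10%: consider killing it")
    # Blocking UI with under 50% CTR: something is off.
    if ctr is not None and ctr < 0.50:
        signals.append("CTR under 50%: something is probably wrong")
    return signals
```

For example, `early_signals(4, 5, 0.25, ctr=0.6)` returns only the positive task-success signal: usage is healthy, the CTR clears the bar, and the experiment earns the right to keep running until the dashboard numbers mean something.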
Those were my tactics, I hope they're useful for you.