We all want to deliver growth at our organizations, demonstrate the value we bring in a tangible way, and be strategic, right?

In this article, I’ll talk you through four steps toward delivering growth: defining metrics, measuring impact, quantifying impact, and productizing and realizing wins. I’ll explain each one in detail and give some examples of common pitfalls to avoid along the way.

A little bit about me

I’m a product manager at Grammarly. In my role, I oversee and implement key business growth strategies in close partnership with our product team, engineering team, product marketing team, and various business stakeholders.

I started my career as a management consultant at Oliver Wyman, and before coming to Grammarly I got my MBA at Harvard Business School.

What is Grammarly?

For those who aren't familiar with Grammarly, our mission is to improve lives by improving communication. We strive to do that by helping all people feel understood, wherever and whenever they communicate.

Our product is a digital writing assistant that works across platforms and devices. Our browser extensions are particularly popular, but those are only part of our suite of offerings. We have our Grammarly Editor, which is available as both a web-based and a desktop application. Additionally, we have add-ins for Microsoft Office on both Windows and Mac, along with the Grammarly Keyboard for both iOS and Android, which helps you when writing on the go. For iPad users, we have a special offering that delivers our keyboard and editor in an integrated experience.

Step 1: Defining metrics

Everyone wants to deliver growth at their companies; hopefully, that's why you're reading this. But before you do that, it's important to define what growth looks like and how you're going to measure success. When I think about defining success, I find it helpful to start with the end goal and work backward to success metrics.

When we talk about a goal, think of it as how you want users to engage with your product. A goal is different from an OKR, which specifies the objective and the key result you want to achieve. Defining success and your metrics before you start keeps you accountable; otherwise, it can be tempting to go back and revise your goals after an experiment is completed.

Let's walk through what your thought process may be:

  • Starting at a high level: let’s say our OKR is to increase paid conversion by 10%. The goal = increasing paid conversion.
  • We might have a hypothesis that a new feature that just launched will improve the value proposition and increase upgrade rates.
  • The signal might be users visiting the pricing page or entering their credit card information. There may be a variety of signals.
  • The actual metric that tells us that we are achieving our goal is new paid users.
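
The hierarchy above can be sketched as a quick calculation. Everything here is hypothetical: the user counts, the baseline, and the function name are placeholders, not real Grammarly figures.

```python
# Hypothetical sketch of tracking the metric behind the OKR
# "increase paid conversion by 10%". All numbers are made up.

def paid_conversion_rate(new_paid_users: int, active_free_users: int) -> float:
    """Fraction of free users who converted to paid in the period."""
    return new_paid_users / active_free_users

baseline = paid_conversion_rate(new_paid_users=500, active_free_users=20_000)
current = paid_conversion_rate(new_paid_users=570, active_free_users=20_000)

# Relative lift is what the OKR cares about: did conversion rise 10%?
relative_lift = (current - baseline) / baseline
print(f"Relative lift in paid conversion: {relative_lift:.1%}")
```

The point of writing it down this explicitly, before the experiment runs, is that the denominator and the success threshold are fixed up front and can't quietly change afterward.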

Step 2: Measuring impact

Nowadays, we are inundated with data and have multiple data sources to choose from. Depending on the tools your company offers, you may have access to surveys, A/B tests, transactional data, or performance data. One of these may make more sense than the others for the metric you are hoping to move.

Unfortunately, though, there are times when measurement just won't work. For example, it may take too long to get the data you're looking for. If your goal is to increase annual subscriptions, you would need to wait a full year to measure the actual impact, which is unlikely to make sense for you in practice.

In situations like this—whether it takes too long, the sample size is too small, or it's difficult to get the data—you may have to look to proxy metrics to help you get signals indicating that you're moving in the right direction.
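
When an A/B test is one of your available sources, a standard sanity check before trusting a lift is a two-proportion z-test. This is a generic statistical sketch, not a description of Grammarly's tooling, and all the counts are invented for illustration.

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: control converts 500/10,000, variant 570/10,000
z, p = two_proportion_ztest(conv_a=500, n_a=10_000, conv_b=570, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value tells you the difference is unlikely to be noise; it says nothing about whether the metric was the right one, which is why proxy metrics still need the judgment described above.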

Step 3: Quantifying impact

Now let’s jump into quantifying impact. To do this, we need to find a way to quantify our results and share those with our team, managers, and leadership. This is often the most difficult step.

To help think through quantifying impact, I ask myself these questions:

  • What was the impact on the metric?
  • What part of the user lifecycle does this new experience or feature affect?
  • What user segment does this new experience or feature target?
  • How much revenue is coming from the target user segment?
  • What percentage of the user base does the target user segment make up?
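
The answers to those questions combine into a back-of-the-envelope revenue estimate. Every input below is a hypothetical placeholder; the arithmetic, not the figures, is the point.

```python
# Hypothetical inputs gathered from the questions above
metric_lift = 0.05              # 5% relative improvement in the metric
segment_revenue = 2_000_000     # annual revenue from the target segment ($)
segment_share_of_base = 0.25    # target segment as a share of all users

# If the lift holds only within the target segment, scale its revenue by
# the lift; the segment's share of the base tells leadership how much of
# the business the result actually touches.
estimated_impact = segment_revenue * metric_lift
print(f"Estimated annual revenue impact: ${estimated_impact:,.0f} "
      f"(affecting {segment_share_of_base:.0%} of the user base)")
```

Framing the result this way, as dollars against a named segment, is usually far more persuasive to leadership than the raw metric movement alone.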

Unfortunately, there's a catch. We can't always assume that the results you decide to measure will give the full picture of what's happening. Here are a few common pitfalls:

  • Ignoring side effects: Your product's usage may have increased, but you're cannibalizing another product. If you work at a company with multiple products, learn about and keep an eye on adverse metrics so your work doesn't hurt performance elsewhere.
  • Short-term vs. long-term impacts: You may see a short-term improvement in your metric, but over time it may fall back to the same or lower levels.
  • Vanity metrics: More users may be clicking, but there's no change in upgrade rates.
  • Attribution: You're assuming that one change you made drove the increase in your metric. But if you made three other changes at the same time, you can't isolate the effect and attribute the success to just that change.

Step 4: Productizing and realizing wins

This is the exciting part: you ran an experiment, you quantified the impact, and you are ready to ship! But a word of caution before we celebrate too much. With growth and experimentation, two plus two often equals three, not four.

There are several important reasons why this may be the case. They are useful to keep in mind so you can level-set your expectations, and they can help you identify a situation in which you may need to re-run an experiment from a few quarters ago.

  • Novelty effect: The first is the novelty effect. We all love a shiny new toy, and the same is true for users. But over time, that novelty wears off.
  • Technology is getting smarter: It’s important to remember the world isn’t static. People adjust their behaviors and algorithms adapt over time. This means if you’ve optimized a certain metric, over time, that lift will degrade because someone else is also optimizing against what you just did.
  • Shifted user action earlier in the product lifecycle: Lastly, you may have shifted a user's action earlier in the lifecycle. You didn't actually change the aggregate upgrade rate, for example; instead, you took a user who was going to upgrade in their first month and got them to upgrade in their first week. This isn't necessarily a bad thing, but it may not be the big win you had hoped for.
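
One simple way to temper expectations for the novelty effect is to discount an observed lift with an assumed decay curve. This is purely an illustrative model; the exponential shape and the decay constant are assumptions, not something the article prescribes.

```python
import math

def decayed_lift(initial_lift: float, weeks: float, tau_weeks: float = 8.0) -> float:
    """Exponentially decay a metric lift as novelty wears off.

    tau_weeks is an assumed decay constant; in practice you would fit it
    from how past experiments' lifts eroded after launch.
    """
    return initial_lift * math.exp(-weeks / tau_weeks)

observed = 0.10  # hypothetical 10% lift measured in week one
projected = decayed_lift(observed, weeks=12)
print(f"Projected lift after 12 weeks: {projected:.1%}")
```

Running the same projection at a few horizons before promising the full week-one number is one concrete way to explain why two plus two ends up equaling three.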

To summarize...

Let’s revisit those four steps to deliver growth:

  1. Define metrics
  2. Measure impact
  3. Quantify impact
  4. Productize and realize wins

Hopefully, this was a helpful introduction to the world of delivering growth.