A few years ago, I walked into a sprint review feeling pretty proud. We had shipped a big chunk of work, and the demo looked slick. Then someone asked a simple question that took the wind out of the room:
“Great… but did it actually make anything better?”
We had a roadmap full of features and a team that could deliver. What we didn’t have was a clear line between what we shipped and what we changed for users or the business. In hindsight, we were doing what a lot of teams do when the pressure is on: we were building fast, but not always building with intent.
Since then, I’ve worked across regulated finance, early-stage products, and public-sector digital services. The environments are wildly different, but the trap is the same: it’s easy to become a feature factory.
This article is about the shifts that helped me (and the teams I’ve worked with) move from shipping more to growing sustainably. I’ll share a few real examples, the mistakes we made, and the practical habits that kept us honest.
From PLG tactics to PLG strategy
Let’s start with a myth: that product-led growth (PLG) is as simple as adding a freemium tier and watching the sign-ups roll in. It isn’t.
Yes, PLG includes self-serve onboarding, in-product prompts, and clever activation loops. But in my experience, those tactics only work when the product strategy underneath them is solid. If your product doesn’t reliably deliver value, no amount of growth hacks will save you.
The best product-led teams I’ve seen do three things consistently:
1. They measure value in a way the whole company understands.
2. They run tight experiments instead of betting the farm on big releases.
3. They build operational muscle so the product can scale without breaking.
I think of these as the pillars of sustainable product-led leadership. Let’s dig in.

Pillar 1: Data that reflects real value (not vanity)
Increasing engagement is not a strategy. Neither is “ship a new dashboard”. When I’m joining a team or resetting a roadmap, I start with one question:
What is the measurable change we’re trying to create for users that also moves the business?
That’s the heart of a good North Star Metric. Not a metric you can inflate with spammy notifications, but one that represents real value delivered.
A simple example from my time working on a B2C flow: we had a key “Match me to an adviser” journey that looked healthy on paper – lots of visits, decent time on page. But when we mapped the funnel properly, we found a nasty drop-off right before the final step.
The page was fast enough, the copy was fine… but the form was asking for too much, too soon.
So we did three things:
- We agreed on the outcome: more qualified leads completing the journey (not just more traffic).
- We picked a leading metric: completion rate of the flow.
- We ran small tests on friction: shorter forms, clearer microcopy, and better placement of the call-to-action.
After a few rounds of A/B testing and iteration, conversions improved by roughly 22%, and we saw an uplift in qualified leads. That’s the kind of change you feel across sales, marketing, and service teams – not just in a dashboard.
You don’t have to copy my metrics, but the point is: when you define the outcome clearly, the roadmap gets lighter overnight. Half the “nice-to-haves” stop making sense.
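If you want a quick gut-check on whether an uplift like that is real or just noise, a two-proportion z-test is usually enough. Here’s a minimal sketch in Python; the visitor and conversion numbers are made up for illustration, not the real journey’s data.

```python
from statistics import NormalDist

def conversion_uplift_check(control_n, control_conv, variant_n, variant_conv):
    """Two-proportion z-test (normal approximation) for a completion-rate A/B test."""
    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = (pooled * (1 - pooled) * (1 / control_n + 1 / variant_n)) ** 0.5
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p1, p2, z, p_value

# Hypothetical numbers, not from the real journey:
p1, p2, z, p = conversion_uplift_check(4_000, 520, 4_000, 640)
print(f"control {p1:.1%}, variant {p2:.1%}, z={z:.2f}, p={p:.4f}")
```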

How to make metrics actually useful
Here are a few practical tips that helped me keep outcome metrics honest:
- Pair a North Star with a small set of input metrics. If your North Star is “activated teams”, your inputs might be time to first value and successful onboarding steps completed.
- Build dashboards that tell a story, not just numbers. I’ll often add a short sentence above a chart: “If this goes down, users are struggling with X.”
- Create a “no metric, no build” rule for bigger work. If we can’t explain how we’ll measure impact, we’re not ready to build it yet.
This sounds strict, but it’s actually freeing. It gives you permission to say no without it sounding like personal preference. The sketch below shows one way to write this down.
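To make it concrete, here’s a minimal sketch of how a North Star and its input metrics might be written down so that the story above the chart and the “no metric, no build” rule are explicit. The metric names, definitions, and owners are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    definition: str   # how it's computed, in plain words
    reads_as: str     # the one-line story that sits above the chart
    owner: str

# Hypothetical metric tree for an "activated teams" North Star.
north_star = Metric(
    name="Activated teams",
    definition="Teams that finish onboarding and reach first value within 7 days",
    reads_as="If this goes down, new teams aren't reaching value fast enough",
    owner="Product",
)

input_metrics = [
    Metric("Time to first value", "Median hours from sign-up to first key action",
           "If this goes up, onboarding has too much friction", "Growth"),
    Metric("Onboarding steps completed", "Share of teams finishing every setup step",
           "If this goes down, a step is confusing or broken", "Design"),
]

def ready_to_build(metrics_it_should_move: list) -> bool:
    """'No metric, no build': bigger roadmap items need at least one metric they should move."""
    return len(metrics_it_should_move) > 0
```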
Pillar 2: Experimentation as a delivery capability
A lot of teams say they experiment, but what they really mean is “we’ll A/B test the button colour once we have time”. The shift for me was realising that experimentation is a habit, and habits need structure.
On a large internal platform I worked on, we had to roll out a new workflow that changed how advisers handed work to paraplanners. If we shipped it as one big release, adoption risk was huge – and if we got it wrong, we’d create chaos.
So we treated it like a product experiment:
- We ran discovery workshops to understand where the workflow was breaking.
- We shipped in phases so we could de-risk adoption (there’s a small rollout sketch below).
- We added lightweight management information dashboards so leaders could see what was happening in real time.
- We listened hard in the first few weeks and made fixes quickly.
By the time we rolled it out widely, we had processed tens of thousands of requests through the new flow, and we could point to tangible operational improvements (including a reduction in average completion time). The most valuable part was the confidence we built by learning in public, in small steps.
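One common way to make “ship in phases” concrete is a deterministic percentage rollout: the same user always lands in the same cohort, so you can widen exposure step by step while watching the dashboards. This is a generic sketch, not the mechanism we actually used; the feature and user names are made up.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministic bucketing: hash the user into 0-99 and compare against the rollout percentage."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket < percent

# Start at 5%, widen to 25%, then 100%; the same adviser gets a consistent experience throughout.
print(in_rollout("adviser-1042", "new-handover-flow", 5))
```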

A lightweight loop you can copy
A simple experimentation loop I’ve used across teams looks like this (there’s a template sketch after the list):
- Start with a hypothesis. Example: “If we reduce steps in onboarding, more users reach first value within 24 hours.”
- Define success and guardrails: What moves up? What must not get worse?
- Ship the smallest test. Prototype first where possible. Then build an MVP.
- Decide fast. Scale it, iterate, or stop.
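Here’s a minimal template sketch for that loop in Python; the fields mirror the steps above, and the onboarding example is illustrative rather than a real experiment.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str                  # "If we do X, metric Y improves"
    success_metric: str              # what should move up
    guardrails: list = field(default_factory=list)  # what must not get worse
    duration_days: int = 14
    decision: str = "pending"        # scale / iterate / stop

onboarding_test = Experiment(
    hypothesis="If we cut onboarding from 6 steps to 4, more users reach first value within 24 hours",
    success_metric="% of new users reaching first value within 24h",
    guardrails=["Support tickets per new user", "7-day retention"],
)
```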
One mindset shift that helps: treat “stopping” as a win. If you kill a bad idea early, you’ve protected your team’s time and your users’ patience.
Pillar 3: Operational excellence (where growth sticks or leaks)
This is the unglamorous bit. It’s also the bit that decides whether you keep the customers you’ve worked so hard to acquire.
I’ve seen products with brilliant acquisition numbers fall apart because releases were chaotic, documentation lived in people’s heads, and technical debt turned every change into a mini-project.
One of the best lessons I learned came from a compliance-heavy integration project. We were embedding identity verification and ongoing screening directly into a product workflow. The temptation was to treat it as just a compliance feature. But operationally, it was everything: we had a fixed go-live date, multiple user groups needed training, data quality issues could trigger false alerts, and if users didn’t trust the workflow, they’d bypass it.
So we focused on operational excellence as part of the product:
- We co-designed the workflow with compliance and engineering early.
- We kept the first release focused (MVP, not “everything at once”).
- We built training and comms into the delivery plan.
- We put visibility in place so leadership could see adoption and issues quickly.
That work reduced manual effort and made the workflow easier to follow, which is how you keep risk down without killing usability. In a product-led world, operational excellence is retention.

Operational habits that compound
If you’re wondering where to start, here are a few habits that have paid off for me:
- Define “done” properly: Done includes analytics, monitoring, docs, and support readiness – not just merged code.
- Make releases boring: The goal is predictability. If every release is a drama, growth will amplify the drama.
- Invest in shared ownership: Product, design, and engineering should share outcomes, not throw work over the wall.
- Tackle one chunk of debt every cycle: Not a rewrite, just one meaningful change. It compounds.
Navigating the AI era with purpose
AI is everywhere right now, and I’m excited about it – but I’ve also seen teams rush into it in ways that create more risk than value.
Here’s what hasn’t changed: the job is still to solve real user problems and measure impact.
Here’s what has changed: AI can help you do that faster, and in some cases, differently.
A good example is an AI-assisted workflow used to automate parts of an annual review and cost-calculation process. The promise was obvious: reduce manual work, improve consistency, and give users clearer outputs.
The hard part wasn’t building the button. The hard part was trust.
So, we treated governance as a product requirement:
- We added gating and review steps where needed (sketched below).
- We worked closely with compliance and legal to interpret rules.
- We focused on explainability (not black-box magic).
- We designed the UI so users understood what AI was doing – and what it wasn’t doing.
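As a rough illustration of what “gating and review steps” can mean in practice, here’s a hypothetical sketch: an AI-drafted output is only auto-released when confidence is high and no compliance rule demands a human check. The names and the threshold are made up, not the real system.

```python
from dataclasses import dataclass, field

@dataclass
class DraftResult:
    summary: str
    confidence: float                                     # 0..1, from whatever model or heuristic you trust
    rules_triggered: list = field(default_factory=list)   # compliance rules that force a human check

def route_ai_draft(draft: DraftResult, review_threshold: float = 0.8) -> str:
    """Gate an AI-drafted output: auto-release only when confidence is high
    and no rule demands a human review. Everything is logged either way."""
    if draft.rules_triggered or draft.confidence < review_threshold:
        return "human_review"   # show the draft, the reasons, and an approve/edit step
    return "auto_release"
```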
If you’re integrating AI, my advice is simple: don’t start with the model. Start with the decision you’re trying to improve, and the user trust you need to earn.

A 30-day reset plan to escape feature-factory mode
If you’re leading a product team and you want a practical reset, here’s a plan you can run in a month:
Week 1: Re-anchor on outcomes
- Write down your North Star Metric and three input metrics.
- For each big roadmap item, add one sentence: “This will improve X because Y.”
- Kill or pause anything you can’t justify.
Week 2: Put experimentation on rails
- Agree on a single template for experiments (hypothesis, success, guardrails, duration).
- Pick one journey (onboarding, activation, renewal, upgrade) and run one small test.
- Share learnings publicly, even if the test “fails”.
Week 3: Make delivery safer
- Define what “done” means for your team (including measurement and support readiness).
- Run a retro specifically on release pain: where do things usually break?
- Fix one recurring issue (documentation, handover, monitoring, QA gaps).
Week 4: Build the feedback loop
- Sit in on two customer calls (or watch recordings).
- Spend one hour with support/success teams and collect the top five pain points.
- Bring one “voice of the user” insight into your next planning session.
If you do nothing else, do this: tie every build to a measurable outcome, and then check whether it happened. That habit alone will change your roadmap.
Product-led and purpose-led
At the end of the day, product-led growth is a leadership discipline.
It’s choosing outcomes over output. It’s building a culture where learning beats guessing. And it’s investing in the operational foundations that make growth sustainable.
If you’re feeling stuck in feature-factory mode, you’re not alone. Most teams drift there because it’s rewarded: you ship, you look busy, you move on.
But the teams that win long-term are the ones who can answer the hard question after every release:
“Did this make anything better?”
When you can answer “yes” – with data, stories, and real user impact – growth becomes less of a scramble and more of a system.