Your model achieves 94% accuracy on the validation set, but six months after deployment, the recommendation engine isn’t driving business outcomes. Every organization runs this risk: technically excellent models that fail to deliver business value because they were built without product thinking.
This is, in fact, what happened to my team and me.
Most machine learning (ML) teams approach model development like software engineering: define requirements, build the solution, deploy, and move on. But successful ML systems require a fundamentally different mindset. They need to be treated as products with users, evolving requirements, and measurable business outcomes.
The problem with model-centric development
Traditional ML development focuses on the model itself. Teams spend months perfecting algorithms, tuning hyperparameters, and improving accuracy metrics. The typical workflow looks like this:
- Gather data
- Train model
- Evaluate performance
- Deploy to production
Success is measured by technical metrics such as F1 score, the harmonic mean of precision and recall, and area under the curve (AUC).
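To make those metrics concrete, here is a minimal sketch using scikit-learn; the labels and scores are made-up placeholders, not data from a real system.

```python
# Minimal sketch of the technical metrics a model-centric workflow optimizes.
# y_true, y_pred, and y_score are hypothetical placeholders, not real data.
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                    # hard predictions from the model
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1]   # predicted probabilities

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)          # harmonic mean: 2 * P * R / (P + R)
auc = roc_auc_score(y_true, y_score)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f} auc={auc:.2f}")
# None of these numbers say whether the recommendations actually helped anyone.
```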
This happened to us. Everyone, including the leaders, was focused on how good we could make the recommendation system as measured by its accuracy metrics, which we treated as a proxy for how good the recommendations would be for customers.
Customers were engaging, but not in ways that drove business outcomes. They were curious about the recommendations surfaced to them, but those recommendations failed to move the needle on further actions, retention, or satisfaction.
Purely focusing on the model works for research projects and Kaggle competitions. But in production systems serving real users, it often fails.
The core issue is that model-centric thinking treats the algorithm as the end goal rather than a means to solve user problems. It optimizes for statistical performance without considering how the model fits into user workflows, business processes, or changing requirements.

Product thinking applied to ML
Product thinking flips this approach. Instead of starting with data and algorithms, you begin with user needs, your company strategy, and business problems. The model becomes one component in a larger product experience designed to create value for specific users.
This shift changes everything about how you approach ML development. Your primary questions become:
- Who will use this model?
- What problem are they trying to solve?
- How does this fit into their existing workflow?
- What does success look like from their perspective?
Take the example of a content recommendation system. A model-centric approach focuses on click-through rates and recommendation accuracy.
A product-centric approach asks deeper questions:
- Are users discovering content they actually want?
- Do recommendations help them complete their goals, or just give us a few extra clicks?
- Are we showing diverse content or creating filter bubbles?
- How do recommendations affect user retention and satisfaction?
In that vein, we stepped back and spent time with real customers and frontline teams. What we uncovered was that customers didn’t just want solutions that looked interesting. Instead, they wanted relevance, timing, and clear value.
Teams wanted tools that fit seamlessly into their workflows, not just another output to interpret.

The full product lifecycle
Product thinking affects every stage of ML development, starting with problem definition. Instead of jumping into data exploration, you spend time understanding user pain points, current solutions, and success criteria. This upfront investment prevents building technically impressive solutions to problems users don't actually have.
During development, you focus on building minimum viable models that solve core user problems rather than perfect algorithms. A content recommendation system might start with simple collaborative filtering that works well for 80% of users, then evolve based on real usage patterns rather than theoretical improvements.
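To make the idea of a minimum viable model concrete, here is a rough sketch of item–item collaborative filtering over a tiny interaction matrix; the matrix, the similarity choice, and the function names are illustrative assumptions, not a production design.

```python
# Rough sketch of a minimum viable recommender: item-item collaborative
# filtering over an interaction matrix. Data and names are hypothetical.
import numpy as np

# Rows are users, columns are items; 1 = interacted, 0 = did not.
interactions = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 1],
])

# Cosine similarity between item columns.
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
item_sim = (interactions.T @ interactions) / (norms.T @ norms + 1e-9)

def recommend(user_idx: int, top_k: int = 2) -> list[int]:
    """Score unseen items by similarity to what the user already engaged with."""
    seen = interactions[user_idx]
    scores = item_sim @ seen
    scores[seen > 0] = -np.inf          # don't re-recommend items already seen
    return list(np.argsort(scores)[::-1][:top_k])

print(recommend(user_idx=0))  # items ranked by affinity for user 0
```

Something this simple can ship quickly, and the gaps it leaves become visible through real usage rather than offline speculation.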
The deployment phase becomes about user adoption rather than just technical integration. You consider how users will interact with model outputs, what training they might need, and how to integrate predictions into existing workflows.
We rebuilt with a simpler system.
Instead of optimizing for clicks, the model identified churn-risk patterns and recommended actions, tailored to each customer’s role and industry, that helped them quickly gain value from the product.
The interface highlighted why the suggestion mattered, so users could trust and act on it. The focus shifted from technical sophistication to usability and business impact.
We also set up a biweekly focus group to gather continuous feedback from customers on the usefulness of the recommendations surfaced to them.

Measuring success beyond accuracy
The difference was striking. Customer retention improved by 15%, frontline teams reported higher confidence in using the system, and adoption soared because the tool was solving the right problems.
The simpler solution had a far greater impact than our earlier complex one, which was technically very accurate.
This experience made one thing clear: successful ML products aren’t about chasing accuracy metrics in isolation. They’re about treating models as products, anchored in user needs, integrated into workflows, and measured by real-world outcomes.
Product thinking demands different success metrics.
Technical metrics like precision and recall remain important, but they're insufficient for measuring real-world impact. You need to track how the model affects user behavior, business outcomes, and long-term system health.
This broader view of success also means monitoring for unintended consequences.
For example, a hiring screening model might perform well on paper but create bias in candidate selection.
Product thinking means taking responsibility for these downstream effects, building in safeguards, and monitoring the system for them over time.

Continuous iteration and feedback
I also learned that it’s important to bake in continuous iteration based on user feedback.
Unlike traditional software, where you can A/B test interface changes, ML systems require more sophisticated experimentation approaches. You need to test not just model performance but user adoption, workflow integration, and business impact.
From that point on, there were two key things I did differently.
First, I defined holistic metrics for every release: technical performance, user satisfaction as measured by engagement or task completion rate, and business impact such as adoption, retention, and revenue.
I also ensured we had feedback loops built into the system from day one. Users need ways to correct model outputs, report problems, and suggest improvements. These feedback mechanisms become training data for future iterations and early warning systems for model drift or bias.
Second, I released much earlier, either in shadow mode (visible to no users) or to a very small group of users, and used that live feedback to iterate. I found that the iteration cycle became faster and more user-focused.
Instead of waiting for quarterly model retraining, we deployed weekly updates based on user feedback and changing patterns. We focused on responsiveness to user needs, rather than perfect technical performance.
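A minimal sketch of what shadow mode can look like in a serving path is below; the model stubs and the logging are hypothetical stand-ins for whatever your stack provides.

```python
# Sketch of a shadow deployment: the current model serves traffic while the
# candidate model's output is only logged for offline comparison.
# The two model stubs are hypothetical placeholders for real recommenders.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

class StubModel:
    """Placeholder for a real recommender; returns canned item IDs."""
    def __init__(self, items: list[str]):
        self.items = items

    def recommend(self, user_id: str) -> list[str]:
        return self.items

current_model = StubModel(["item-a", "item-b"])    # what users actually see
candidate_model = StubModel(["item-c", "item-d"])  # evaluated silently

def handle_request(user_id: str) -> list[str]:
    served = current_model.recommend(user_id)
    try:
        # The candidate runs on the same input, but its output is never shown.
        shadow = candidate_model.recommend(user_id)
        log.info("user=%s served=%s shadow=%s", user_id, served, shadow)
    except Exception:
        # Shadow failures must never break the live path.
        log.exception("shadow model failed for user=%s", user_id)
    return served

print(handle_request("user-123"))
```

The important design choice is that the candidate’s outputs and failures are observed but never affect what users see, which makes frequent, early releases low-risk.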

Building cross-functional teams
AI and ML systems also need more cross-functional thinking, not just because of their technical complexity, but because we understand less about how these systems behave and how customers will react to them.
Instead of an isolated ML team, I brought together cross-functional groups of data scientists, product managers, UX designers, and domain experts for detailed discussions and brainstorming sessions.
Each brings essential perspectives that purely technical teams miss.
As a product manager, I helped translate business requirements into technical specifications and vice versa. UX designers ensure model outputs integrate smoothly into user workflows.
Domain experts provide context about user needs and industry constraints that data alone can't capture.
This collaboration extends to stakeholders and end users. Regular user interviews, feedback sessions, and usability testing become as important as model evaluation.

Practical implementation steps
Start small with product thinking principles.
Choose one existing model and audit it through a product lens. Who are the actual users? What problems are they trying to solve? How do they currently interact with model outputs? What friction points exist?
Build user feedback mechanisms into every model deployment. This might be as simple as thumbs up/down buttons or as sophisticated as detailed annotation interfaces. The key is creating channels for users to communicate with your system and closing the loop on their input.
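As a rough illustration, a thumbs up/down endpoint can be as small as the sketch below, here using FastAPI; the schema fields and the in-memory store are assumptions standing in for your own infrastructure.

```python
# Minimal sketch of a thumbs up/down feedback endpoint using FastAPI.
# Field names and the in-memory store are hypothetical; swap in your own.
from datetime import datetime, timezone
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
FEEDBACK_LOG: list[dict] = []   # stand-in for a real datastore

class Feedback(BaseModel):
    user_id: str
    recommendation_id: str
    helpful: bool                 # thumbs up = True, thumbs down = False
    comment: str | None = None    # optional free-text detail

@app.post("/feedback")
def submit_feedback(feedback: Feedback) -> dict:
    record = feedback.model_dump()
    record["received_at"] = datetime.now(timezone.utc).isoformat()
    FEEDBACK_LOG.append(record)   # becomes labeled data for the next iteration
    return {"status": "recorded"}
```

Each record pairs a specific recommendation with an explicit user judgment, which doubles as labeled data for retraining and as an early-warning signal when quality drops.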
Expand your success metrics beyond technical performance. Track user adoption rates, task completion times, user satisfaction scores, and business outcomes. These become as important as accuracy metrics for evaluating model performance.
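One lightweight way to keep those dimensions visible side by side is a simple release scorecard, sketched below; the field names and target thresholds are illustrative assumptions, not recommended values.

```python
# Illustrative release scorecard that puts product and business metrics next
# to the usual technical ones. Field names and target values are assumptions.
from dataclasses import dataclass

@dataclass
class ReleaseScorecard:
    f1: float                    # technical performance
    adoption_rate: float         # share of eligible users actually using it
    task_completion_rate: float  # did users finish what they came to do?
    satisfaction_score: float    # e.g., survey or thumbs-up ratio, 0-1
    retention_delta: float       # change vs. control group, percentage points

    def healthy(self) -> bool:
        """A release only 'passes' if users and the business benefit too."""
        return (
            self.f1 >= 0.80
            and self.adoption_rate >= 0.50
            and self.task_completion_rate >= 0.70
            and self.satisfaction_score >= 0.60
            and self.retention_delta > 0.0
        )

release = ReleaseScorecard(0.86, 0.62, 0.74, 0.68, 1.5)
print(release.healthy())  # True only when every dimension clears its bar
```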
Most importantly, involve users in the development process from the beginning. Regular user interviews, prototype testing, and feedback sessions should be standard practice, not afterthoughts.
Your users are the ultimate judges of whether your model succeeds, regardless of how impressive the technical metrics look.
The shift from model-centric to product-centric thinking isn't just about process changes.
It's about fundamentally reimagining what it means to build successful ML systems. When you treat your models as products, you create systems that deliver real value to the people who use them.
