The product management community has spent the last two years grappling with generative AI's impact on everything from team structure to pricing strategy.
But buried within those conversations are lessons that extend well beyond AI itself, lessons about how product organizations absorb transformative technology, manage its limitations, and figure out how to monetize capabilities that customers don't yet fully understand.
Quantum computing is approaching a similar inflection point. While the technology remains earlier in its commercial maturity than generative AI, the strategic challenges it presents to product leaders are remarkably familiar.
Three recent conference talks, delivered by practitioners who've been navigating AI integration firsthand, offer a surprisingly useful lens for thinking about quantum's trajectory, even though none of them set out to talk about quantum computing at all.
The speakers are:
- Loc Nguyen, former Vice President of Product Management at MicroStrategy, who spoke about integrating generative AI into business intelligence platforms,
- Anmol Rastogi, Head of Product at Amazon, who explored the relationship between AI prediction and optimization in decision-making systems, and
- Katherine Shealy, Principal Product Marketing Manager at Zuora, who examined how SaaS companies are struggling to monetize their generative AI capabilities.
Together, their perspectives form an accidental roadmap for how product teams might approach quantum computing as it moves from laboratory curiosity to commercial reality.
The precision problem applies to quantum too
Loc's presentation at CPO San Francisco centered on a deceptively simple observation: AI is smart but imprecise, while business intelligence is precise but not particularly smart.
His team ran a straightforward experiment, feeding ChatGPT a CSV file with 50 rows of profit data and asking it to perform basic sum aggregations at the subcategory level. The results from GPT-3.5 were close but not exact. GPT-4 performed similarly. Neither version could reliably do precise mathematical calculations.
“You don't want anything that's unsure of itself,” Loc told the audience, describing how ChatGPT initially told his team that 450 was not 90% of 500, then reversed its answer after being challenged. The underlying issue is architectural. Large language models process text-based data through pattern recognition rather than mathematical computation, which means they hallucinate when asked to do what a calculator does natively.
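The task Loc's team gave the LLM is exactly the kind of deterministic arithmetic classical code handles natively. A minimal sketch of that baseline, using hypothetical rows and column names standing in for the 50-row profit CSV (the team's actual data isn't public):

```python
from collections import defaultdict

# Hypothetical stand-in for the 50-row profit CSV described above;
# the (subcategory, profit) values are illustrative, not the real data.
rows = [
    ("chairs", 120.0), ("chairs", 80.0),
    ("desks", 300.0), ("desks", 150.0),
    ("lamps", 50.0),
]

# The aggregation the LLM only approximated: an exact sum per subcategory.
totals = defaultdict(float)
for subcategory, profit in rows:
    totals[subcategory] += profit

print(dict(totals))  # {'chairs': 200.0, 'desks': 450.0, 'lamps': 50.0}

# The percentage check ChatGPT flip-flopped on is trivial for a calculator.
print(450 == 0.9 * 500)  # True
```

The point isn't that the code is clever; it's that the answer is exact and repeatable, which is precisely what pattern-matching text generation can't guarantee.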
Loc's solution involves what he calls a “robust semantic graph,” an abstraction layer that sits between raw data and the AI system, providing business context and computational guardrails. The AI handles natural language interpretation and unstructured data analysis, while the BI platform handles precise calculations. Neither system works well alone, and the combination produces something more capable than either component individually.
This dynamic maps directly onto the emerging relationship between classical and quantum computing. Quantum systems excel at specific types of problems, particularly optimization, molecular simulation, and certain classes of search problems, but they're notoriously error-prone. Current quantum hardware operates in what researchers call the NISQ era (Noisy Intermediate-Scale Quantum), where qubits are unstable and computations rely on extensive error mitigation because full error correction is not yet practical.
Just as Loc described AI needing BI's precision to be commercially useful, quantum processors will need classical computing infrastructure to validate results, manage error correction, and handle the vast majority of computational tasks that don't benefit from quantum approaches.
The practical implication for product leaders is that quantum computing won't arrive as a standalone replacement for existing systems. It’ll arrive as a specialized component within hybrid architectures, and the organizations that build robust abstraction layers between quantum and classical systems will be the ones that extract real value from the technology.
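The hybrid pattern usually looks like a classical outer loop driving a specialized quantum subroutine, as in variational algorithms. A heavily simplified sketch, where `evaluate_on_qpu` is a hypothetical placeholder (a real system would dispatch a parameterized circuit to quantum hardware and average noisy shot results):

```python
import random

def evaluate_on_qpu(params):
    # Placeholder for a quantum cost evaluation: returns a noisy estimate,
    # as real shot-based measurements would. Purely classical here.
    noise = random.gauss(0, 0.001)
    return sum((p - 1.0) ** 2 for p in params) + noise

def classical_outer_loop(n_params=3, steps=200, lr=0.1, h=0.1):
    # Classical infrastructure does the orchestration: it proposes
    # parameters, calls the "quantum" evaluator, and updates via
    # finite-difference gradient descent.
    params = [0.0] * n_params
    for _ in range(steps):
        grad = []
        for i in range(n_params):
            shifted = list(params)
            shifted[i] += h
            grad.append((evaluate_on_qpu(shifted) - evaluate_on_qpu(params)) / h)
        params = [p - lr * g for p, g in zip(params, grad)]
    return params

best = classical_outer_loop()
# params end near the cost minimum at 1.0, offset slightly by the
# finite-difference bias and measurement noise.
```

The abstraction layer Loc described for AI and BI plays the same role here: the classical side owns validation, iteration, and everything that doesn't benefit from the quantum component.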

Optimization is where the worlds converge
Anmol's talk in Seattle focused on the distinction between AI and optimization, a distinction that most product teams blur or ignore entirely.
In his framing, AI is fundamentally about prediction, identifying patterns, forecasting demand, and segmenting customers, while optimization is about making decisions under constraints. A machine learning model might predict that a particular flight route will see high demand during the holidays, but optimization determines how to price seats, schedule crews, and allocate aircraft given finite resources and competing priorities.
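The split can be made concrete with a toy airline example (all numbers hypothetical): a prediction layer supplies demand estimates, and an optimization layer decides how to allocate a constrained resource the predictor knows nothing about.

```python
# "Prediction" output: estimated demand per fare class (hypothetical).
predicted_demand = {"business": 20, "flexible": 60, "saver": 150}
fares = {"business": 900, "flexible": 400, "saver": 150}
capacity = 180  # the constraint — this is where optimization begins

def allocate_seats(demand, fares, capacity):
    # Greedy optimization under a constraint: fill the highest-fare
    # classes first, capped by predicted demand and remaining seats.
    plan, remaining = {}, capacity
    for cls in sorted(fares, key=fares.get, reverse=True):
        plan[cls] = min(demand[cls], remaining)
        remaining -= plan[cls]
    return plan

plan = allocate_seats(predicted_demand, fares, capacity)
revenue = sum(plan[c] * fares[c] for c in plan)
print(plan, revenue)  # {'business': 20, 'flexible': 60, 'saver': 100} 57000
```

The predictor alone would happily "sell" 230 seats; only the optimization step reconciles the forecast with the 180-seat aircraft.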
He walked through several real-world examples to illustrate the point. Airbnb built a dynamic pricing model for hosts that initially failed because hosts didn't trust its recommendations. When the model suggested a price of $80 per night for a listing that similar properties were charging $100 for, hosts understandably resisted.
The problem wasn't the model's accuracy; it was that the team had built the system in isolation without helping users understand how pricing decisions were being calculated. Airbnb eventually relaunched the feature with curated insights explaining the reasoning behind each recommendation, and adoption improved significantly.
Spotify encountered a different optimization challenge. Its recommendation algorithms became so effective at predicting what users wanted to hear that they stopped introducing listeners to new genres. Personalization had become over-personalization, and the company had to deliberately inject exploration into its systems to maintain long-term engagement.
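"Deliberately injecting exploration" is a well-known pattern in recommender design; one common form is an epsilon-greedy policy. A minimal sketch (the genre scores are hypothetical, and Spotify's actual mechanism is not public):

```python
import random

def recommend(scores, epsilon=0.1, rng=random):
    # With probability epsilon, explore a random genre; otherwise
    # exploit the model's top prediction.
    if rng.random() < epsilon:
        return rng.choice(list(scores))
    return max(scores, key=scores.get)

scores = {"indie_rock": 0.92, "jazz": 0.31, "ambient": 0.27}
picks = [recommend(scores, epsilon=0.2) for _ in range(1000)]
# With epsilon = 0.2, most picks are still the predicted favorite, but a
# steady minority surface genres the model would otherwise never show.
```

Tuning `epsilon` is exactly the trade-off Spotify faced: too low and listeners stagnate, too high and short-term relevance suffers.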
These examples carry direct relevance for quantum computing's most promising near-term applications. Optimization problems, including supply chain logistics, financial portfolio balancing, drug molecule simulation, and route planning, are precisely the domains where quantum computing is expected to offer meaningful advantages over classical approaches. But Anmol's cases demonstrate that technical capability alone doesn't create product value.
The Airbnb example shows that even a superior optimization engine fails if users can't understand or trust its outputs. The Spotify example shows that optimizing too aggressively for one metric can degrade the overall experience.
Product teams building quantum-enhanced optimization tools will face these same challenges, amplified by the fact that quantum computation is inherently probabilistic. Explaining to a logistics manager why a quantum-optimized route is better than the classical alternative will require the same kind of interpretability work that Airbnb had to do with its pricing model, except the underlying mathematics will be considerably harder to translate into plain language.
Anmol outlined a five-layer stack for building optimization systems:
- Infrastructure
- Data
- Intelligence
- Decision-making
- Action
This architecture is worth studying for anyone thinking about quantum product development, because quantum computing will slot into the intelligence and decision-making layers while depending heavily on classical infrastructure and data layers to function.
The organizations that treat quantum as one component within a broader decision-making stack, rather than as a magical black box, will be better positioned to deliver real business outcomes.

The monetization gap is a warning sign
Katherine's presentation at a San Francisco Product-Led Summit delivered a statistic that should give pause to anyone excited about commercializing frontier technology: while 75% of SaaS companies have added generative AI features to their products, only 15% are actually monetizing those capabilities. The majority have either not attempted monetization or are still in testing phases.
The core challenge, as Katherine described it, is that generative AI breaks the traditional SaaS cost model. A typical software product requires significant upfront investment that tapers off into maintenance costs over time.
Generative AI features, by contrast, incur costs that scale linearly with usage, because every query, every generated image, every chat resolution consumes compute resources tied to expensive language model infrastructure. Per-user pricing, the default model for most SaaS products, turns out to be a poor proxy for actual AI costs, since some users consume vastly more resources than others.
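The mismatch is easy to see with back-of-the-envelope numbers (the seat price and per-query cost below are assumptions for illustration, not figures from the talk):

```python
# Why flat per-user pricing is a poor proxy for usage-scaled AI costs:
# two users pay the same seat price but consume very different compute.

price_per_user = 30.00   # flat monthly seat price (assumption)
cost_per_query = 0.02    # marginal LLM cost per query (assumption)

users = {"light_user": 50, "power_user": 4000}  # queries per month

for name, queries in users.items():
    margin = price_per_user - queries * cost_per_query
    print(name, round(margin, 2))
# light_user yields $29.00 of margin; power_user loses $50.00 —
# identical seat price, opposite unit economics.
```

Under the traditional model both rows would look the same; under usage-scaled costs, one subsidizes the other.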
Katherine identified four packaging strategies that companies are experimenting with:
- Launching entirely new standalone products (like Adobe Firefly)
- Embedding AI as a value booster in the existing product across all tiers (like Zoom's AI Companion)
- Offering AI as an add-on to existing plans (like Intercom's per-resolution chatbot pricing)
- Restricting AI features to premium tiers only (like Box)
She also noted that pricing metrics are evolving from traditional per-user models toward activity-based pricing (per query, per document, per token) and, in a few bold cases, outcome-based pricing (per successful resolution).
Intercom's approach stood out as particularly forward-looking. Rather than charging per chat message or per user, they charge 99 cents per successful resolution, aligning their revenue directly with customer value. Less than 10% of the companies Katherine studied had adopted this kind of outcome-based model, but she suggested it represents where the market is heading.
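The appeal of the model is that the billing formula itself encodes the value exchange. A minimal sketch using the per-resolution price Katherine cited (the conversation volume and resolution rate are hypothetical):

```python
PRICE_PER_RESOLUTION = 0.99  # the per-resolution price cited in the talk

def monthly_revenue(conversations, resolution_rate):
    # Revenue scales with outcomes, not seats or messages: the customer
    # pays only for conversations the bot actually resolves.
    resolved = int(conversations * resolution_rate)
    return round(resolved * PRICE_PER_RESOLUTION, 2)

# A customer with 10,000 bot conversations and a 45% resolution rate:
print(monthly_revenue(10_000, 0.45))  # 4455.0
```

If the bot improves and resolves more conversations, revenue rises with it, which is exactly the alignment per-seat pricing fails to deliver.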
Quantum computing will face an even more extreme version of this monetization puzzle. Quantum compute time is currently extraordinarily expensive, access to quantum hardware is limited, and the problems that quantum systems can solve better than classical computers remain narrow. Product teams will need to identify use cases where quantum optimization delivers measurably superior outcomes and then price those outcomes in ways that customers can understand and budget for.
The lesson from Katherine's research is that companies tend to give away transformative capabilities for free while they figure out the commercial model, using AI features as loss leaders to drive adoption and retention.
Some quantum-as-a-service providers are already following this pattern, offering free tiers and research access to build familiarity. But Katherine's data also shows that this approach creates real financial strain, particularly when the underlying compute costs scale with usage.
Product leaders planning quantum offerings should be thinking about monetization strategy from the beginning, rather than treating it as a problem to solve after achieving product-market fit.

What ties these perspectives together
None of these three speakers mentioned quantum computing, and that's precisely what makes their insights valuable. They're describing the organizational and commercial challenges of integrating a powerful, but imperfect, technology into existing product ecosystems – challenges that are largely technology-agnostic.
Loc demonstrates that transformative computing technologies don't replace existing systems; they augment them, and the abstraction layers between old and new capabilities determine whether the combination actually works.
Anmol shows that optimization, the domain where quantum computing holds the most near-term promise, requires far more than raw computational power; it requires trust, interpretability, and thoughtful integration into decision-making workflows.
Katherine reveals that even when a technology clearly delivers value, monetizing it sustainably is a distinct and difficult problem that most organizations haven't solved.
For product managers and product leaders watching quantum computing's development from the sidelines, the message from these three practitioners is consistent: start thinking about integration architecture, user trust, and commercial models now, before the technology is ready for prime time.
The companies that waited until generative AI was mature before thinking about these questions are the ones struggling to catch up today. Quantum computing offers the opportunity to get ahead of that curve, but only if product teams treat it as a product strategy challenge rather than a purely technical one.
The technology will keep advancing. The harder work, as these speakers collectively illustrate, is everything that surrounds it.
Want to watch the replays from Loc, Anmol, and Katherine's sessions?
Become a Pro member and gain access to 450+ hours of expert-led video content featuring insights from top product leaders worldwide.
Or upgrade to Pro+ to unlock a complimentary ticket to a Product-Led Summit of your choice every year!
What are you waiting for?



