According to Gartner, over 40% of agentic AI projects will be canceled by the end of 2027. The core issue isn't the technology itself, but how organizations implement it.

AI agents represent a new generation of automation: systems capable of completing tasks with minimal human intervention. But as companies move from pilot to production, many encounter a gap between expectations and real-world outcomes.

Based on case studies, industry examples, and lessons from practice, here are five lessons for deploying AI agents successfully.

1. Align strategy across the organization

Companies typically approach AI agents from two directions: executive mandates or isolated team experiments. Neither works well in isolation.

Bottom-up initiatives often generate promising pilots, but without executive sponsorship, they rarely scale. I’ve seen colleagues build prototypes that improved workflows and unlocked new features, but without budget or leadership support, they stalled. 

This pattern isn’t unique: McKinsey reports that fewer than 30% of companies have CEO-level sponsorship for AI, leading to scattered micro-initiatives with little enterprise impact.

Top-down efforts can also fail. For example, Salesforce CEO Marc Benioff claimed AI now performs 30–50% of company work and envisioned 1 billion agents by the end of the year. 

The statement sparked criticism. Employees argued it overstated current AI capabilities and downplayed human contributions.

The solution? Combine both approaches strategically:

Start with executives defining clear objectives and success metrics. Have technical teams run discovery workshops to assess feasibility. Then launch small pilots with executive sponsors who remove blockers without micromanaging. 

In my current work on building an AI agent system, we followed this blended approach. Executives set the vision and goals. Our team focused on building AI skills through structured learning and hands-on experimentation. 

We validated ideas with proofs of concept, iterated quickly, and created space for ongoing learning on both sides. 

When skepticism arose, we addressed concerns directly before moving forward. The AI agent space is moving fast, and the pace of mutual adaptation is critical to maintaining momentum.

The key is creating a bridge between strategic vision and operational reality – something that requires both top-down support and bottom-up expertise.


2. Address data readiness early

AI agents don't generate new knowledge; they operate on available information. For most organizations, that information is fragmented and unstructured.

This is the rule of thumb I use when assessing readiness: if you can’t access 80% of relevant data programmatically, or more than 30% of critical knowledge still lives in people’s heads, you’re not ready. 

In my project building threat intelligence agents, this principle proved true: most of the effort went into consolidating data, not agent design.
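The rule of thumb above can be expressed as a quick check. This is a minimal sketch of the heuristic, not a formal assessment tool; the two input measures and the thresholds come straight from the rule stated above.

```python
def data_ready(programmatic_pct: float, tacit_pct: float) -> bool:
    """Rough data-readiness check.

    programmatic_pct: share of relevant data accessible programmatically (0-100).
    tacit_pct: share of critical knowledge living only in people's heads (0-100).
    """
    # Ready only if at least 80% of data is programmatically accessible
    # AND no more than 30% of critical knowledge is undocumented.
    return programmatic_pct >= 80 and tacit_pct <= 30

print(data_ready(70, 40))  # False: invest in data consolidation first
print(data_ready(85, 20))  # True: reasonable starting point for an agent
```

Estimating the two percentages is itself a useful exercise: it forces teams to inventory their data sources before writing any agent code.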

The risk of poor data readiness? Your agent will hallucinate or require constant human intervention. 

Air Canada's AI chatbot told customers about a refund policy that didn't exist, and the airline was ordered to compensate passengers who received the incorrect information. The tribunal ruled that Air Canada was responsible for all information on its website, including chatbot responses.

Start by capturing institutional knowledge. Then structure your unstructured data incrementally, focusing on the most critical information first. Build feedback loops to capture and fix data gaps as you discover them.


3. Set realistic performance expectations

Organizations tolerate 5-10% human error rates yet demand perfection from AI agents. This mismatch kills promising initiatives.

Media hype often sets organizations up for disappointment. I’ve seen teams hold agents to impossible standards because vendors claim ‘100% automation.’ 

For example, Accenture promotes agents that can read every insurance submission, a huge leap from today’s reality where half are still untouched. In practice, these claims raise expectations far beyond what teams can reliably deliver.

The mindset shift needed? Benchmark against human performance, not perfection. Klarna's customer service AI demonstrates this approach: it resolves 66% of requests, reduces resolution time from 11 minutes to 2, and maintains satisfaction scores comparable to human agents. They didn't aim for 100% – they aimed for better than human.

In developing AI systems, I’ve learned that accuracy isn’t the only metric. We start with the customer baseline and measure how much faster AI delivers value. 

Accuracy still matters, but when issues arise, we address them openly and improve incrementally. Phased rollouts (alpha, beta, and early customer feedback) help us refine performance and build trust without over-promising.

Focus on handling 80% of cases well rather than 100% perfectly. 

Start with low-risk use cases where mistakes have minimal impact. Internal knowledge searches or data validation build confidence without risking customer relationships. 

Communicate that agents provide confidence scores, not certainties. When stakeholders understand the logic, they're more comfortable with occasional errors.
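One concrete way to operationalize "confidence scores, not certainties" is to gate agent output on a threshold and escalate everything else to a human. The sketch below is illustrative: the 0–1 score scale, the 0.8 threshold, and the `AgentResult` structure are my own assumptions, not any specific product's API.

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    answer: str
    confidence: float  # 0.0-1.0, self-reported by the agent

def route(result: AgentResult, threshold: float = 0.8) -> str:
    """Auto-respond only when confident; otherwise escalate with the draft attached."""
    if result.confidence >= threshold:
        return f"auto: {result.answer}"
    # Below threshold: a human reviews the draft before anything ships.
    return f"review: {result.answer} (confidence {result.confidence:.2f})"

print(route(AgentResult("Refunds take 5-7 business days.", 0.93)))
print(route(AgentResult("Policy is ambiguous for this case.", 0.41)))
```

The threshold becomes a tuning knob stakeholders can reason about: raising it trades automation rate for fewer errors, which makes the "80% well, not 100% perfectly" target explicit.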


4. Balance build vs. buy strategically

The build-versus-buy decision isn't binary; it's about finding the right hybrid approach.

Fully in-house development seems appealing but often fails. I’ve watched several organizations attempt to build agent platforms entirely in-house, only to hit walls in orchestration, memory, and governance. The reality is that few teams have the specialized expertise required; Forrester echoes this, predicting that three-quarters of such efforts will fail.

Complete outsourcing has its own pitfalls. If everyone uses the same vendor's agent, you lose competitive advantage. Vendors optimize for common cases, not your specific needs.

The hybrid sweet spot: start with commercial solutions to validate value quickly. Then identify what makes your use case unique; these become candidates for custom development.

Critical skills to develop internally include prompt engineering, data pipeline development, and domain expertise deep enough to guide the agent effectively. 

You don't need to build everything, but you need to understand and control what makes you different.


5. Don't overlook operational infrastructure

Many pilots succeed in controlled environments but fail in production – not because of the agent itself, but because of missing operational infrastructure. 

I’ve seen agents run flawlessly in notebooks and staging, only to collapse in production when a data format changed silently. Without monitoring, failures went unnoticed until real damage was done. 

Replit’s recent incident illustrates the risk: their coding agent deleted a production database despite safeguards, showing how fragile operations can be without rigorous controls.

"Building with production in mind" means considering operational requirements from day one. Before any prototype, ask: How will we know if it's working? What happens when it fails? Who can override its decisions? How do we audit its actions?

Essential infrastructure components include:

  • Access controls: Who can invoke the agent and what can it modify?
  • Observability: Logging, metrics, and anomaly detection
  • Cost management: Token tracking, API quotas, and automatic shutoffs
  • Integration safeguards: Rate limiting, circuit breakers, and graceful degradation
  • Incident response: Kill switches, rollback procedures, and escalation paths
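Two of these components, cost management with an automatic shutoff and an operator kill switch, fit in a few lines. This is a minimal sketch under assumed numbers; the class name, token budget, and flag are illustrative, not a real platform's interface.

```python
class AgentGuard:
    """Minimal cost guard plus kill switch wrapped around agent invocations."""

    def __init__(self, token_budget: int):
        self.token_budget = token_budget
        self.tokens_used = 0
        self.killed = False  # operators can also flip this manually

    def charge(self, tokens: int) -> None:
        """Record token usage; trip the automatic shutoff when over budget."""
        self.tokens_used += tokens
        if self.tokens_used > self.token_budget:
            self.killed = True

    def allow(self) -> bool:
        """Check before every agent call; a tripped guard blocks all further work."""
        return not self.killed

guard = AgentGuard(token_budget=10_000)
guard.charge(6_000)
print(guard.allow())  # True: still under budget
guard.charge(5_000)
print(guard.allow())  # False: budget exceeded, agent disabled until reset
```

The point is less the code than the discipline: every agent call passes through a layer you control, so runaway loops hit a ceiling instead of a surprise invoice.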

Start testing in notebooks, then staging with synthetic data, then shadow mode alongside human processes, before limited production rollout. Each stage reveals different challenges and builds operational confidence.
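Shadow mode in particular is cheap to prototype: the agent runs alongside the human process, its output is logged and compared, and only the human decision ever takes effect. A sketch, with illustrative names and decision values of my own:

```python
def shadow_compare(human_decision: str, agent_decision: str, log: list) -> str:
    """Run the agent in shadow mode: its output is recorded, never applied."""
    if agent_decision != human_decision:
        # Divergences are the signal: review them to estimate agent quality
        # before the agent is ever allowed to act on its own.
        log.append({"human": human_decision, "agent": agent_decision})
    return human_decision  # the human decision is always what ships

divergences = []
shadow_compare("approve", "approve", divergences)  # agreement: nothing logged
shadow_compare("approve", "reject", divergences)   # disagreement: logged
print(len(divergences))  # 1 divergence captured for later review
```

A few weeks of divergence logs give you an evidence-based error rate to weigh against the human baseline before the limited production rollout.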

Final thoughts: Move early, learn continuously

Successful AI agent adoption isn't about perfect technology or massive budgets. It's about organizational learning speed. 

Companies that succeed start before everything is perfect, build incrementally, fail fast, and scale what works.

AI agents are still evolving, but they are already creating value in production. The question isn't whether AI agents will transform business operations, but which organizations will master these five lessons soonest and lead that transformation.