A rising concern for many product managers (PMs) is the ethics and governance of AI usage in products and features. Our upcoming State of Product Management report found that a third of product leaders surveyed had governance or compliance concerns about AI usage in their organization.
These PMs are concerned with ensuring new AI tools are implemented with safety in mind, so they ultimately benefit their customers and organizations. This article aims to provide practical guidance on building an AI governance framework within your company.
What is AI governance?
AI governance refers to the policies, processes, and controls that guide how AI systems are designed, built, deployed, and monitored within an organization. Its goal is to ensure AI is used in a way that is ethical, responsible, and compliant with applicable laws, while still enabling innovation.
For product managers, AI governance provides a practical framework for making informed decisions about risk, accountability, and user impact across the product lifecycle, from early discovery through launch and ongoing iteration.

Why should product managers build an AI governance framework?
The rise of AI in product management over the last few years, including the evolution of the AI Product Manager role, has cemented product leaders as a key strategic driver for AI initiatives. This comes with its share of challenges, risks, and responsibilities.
Data security, regulatory compliance, risk management, and gaining user trust all fall onto the product roadmap for PMs to consider and implement.
Building an AI governance framework formalizes your organization's guidelines, processes, and procedures for the ethical, responsible, and legal use of AI within your products. It should cover data privacy, fairness, accountability, and compliance with laws and regulations.
This documentation allows your organization to capture new opportunities, such as implementing AI features and building AI tools, while minimizing the risks.

Responsible AI vs. ethical AI vs. lawful AI
Before we dive into the steps of building a governance framework, let's ensure we're on the same page when it comes to responsible and ethical AI. While these terms are often used interchangeably, they address different aspects of how AI systems should be designed, built, and deployed.
Responsible AI
Responsible AI focuses on the operational aspects of building and using AI systems in a way that aligns with your organization's values and risk appetite.
It emphasizes clear ownership, oversight, and accountability across the AI lifecycle. For product managers, responsible AI means putting guardrails in place:
- Defining who is accountable when things go wrong
- Ensuring AI behavior can be monitored and corrected over time
- Making deliberate tradeoffs between performance, risk, and user impact
Ethical AI
Ethical AI centers on the human impact of AI systems. It asks whether an AI-powered feature is fair, transparent, and aligned with societal norms and expectations, even when the law is silent.
This includes minimizing bias, avoiding discriminatory outcomes, respecting user autonomy, and being honest about how AI is used within a product.
For PMs, ethical AI requires moving beyond "can we build this?" to "should we build this?", and considering how decisions made during product development affect users, communities, and long-term trust.
Lawful AI
Lawful AI is about compliance. It ensures AI systems adhere to applicable laws, regulations, and industry standards across the regions your product operates.
This includes:
- Data protection and privacy laws
- Sector-specific regulations
- Emerging AI-specific legislation
Legal compliance is the baseline; itâs a non-negotiable foundation for AI governance. Product managers play a critical role by translating legal requirements into product requirements, workflows, and constraints that teams can actually implement.
Together, ethical, responsible, and lawful AI form the foundation of an effective AI governance framework: balancing what is legally required, ethically expected, and operationally sustainable for building AI-powered products at scale.

7 steps to build an AI governance framework
1. Define clear principles and risk tolerance
The first step in building an AI governance framework is to define your goals and risk tolerance. What do you want to achieve? And what level of risk tolerance is right for your organization?
For many product managers, their goal is to create responsible, ethical, and lawful AI products that build trust with customers and benefit the business. To accomplish this, it's important to define data security, transparency, and bias prevention procedures within your org.
Why is defining this important?
Too often, people throw around terms like explainability or responsibility, and everyone nods along, but each person has a different definition in their head. That's a recipe for confusion.
You also need to recognize that AI isn't governed at the level of technology, but rather it's governed at the level of application. A recommendation engine for shopping is one thing. An AI system for medical diagnosis is another.
When it comes to risk tolerance, the biggest determinant is often industry. Health, finance, and legal firms are (understandably) under much stricter regulations than other industries.
Once you've outlined the key principles and guidelines underpinning your AI usage policies and understand the risks, you can move on to establishing your compliance teams.

2. Establish ownership and accountability
AI governance isn't just the product team's responsibility; it takes the entire organization to own and maintain good AI practices. You should appoint several leaders to own compliance within their department; these leaders may also become part of an AI compliance committee if needed.
Their responsibility will be to train their teams on the guidelines and ensure best practices are followed. Ideally, everyone within your organization will receive some AI training to ensure they're aware of what they can and can't do with AI.
This training should be tailored to each department or role, so those who work more closely with AI technology receive more in-depth training on AI ethics and governance.
"It's very important to understand that this is a multi-stakeholder problem. It's everybody's problem. It's not a data science manager problem. It's not a business leader problem... It's not even the auditor's problem... it's everybody's problem."
– Nassim Tayari, Data and AI Platform Leader, IBM
Remember, ownership is important to ensure teams follow through with these requirements.
3. Inventory AI use cases and data flows
Once ownership and accountability are established, the next step is understanding where and how AI is actually being used across your organization.
Many companies adopt AI incrementally, through product features, internal tools, and third-party platforms, without ever forming a complete picture. Creating an inventory of AI use cases provides that visibility and becomes the foundation for risk-based governance.
This inventory should capture both:
- AI-enabled capabilities in your products, and
- Internal workflows using AI
PMs should document the purpose of each use case, whether it is user-facing or internal, and the level of automation involved. This helps teams distinguish between experimental use cases and those that carry real customer, regulatory, or reputational impact.
Equally important is mapping the data flows that power each AI system. PMs should work with engineering, data, and security teams to understand what data is being used, where it comes from, how it is processed, and where outputs are ultimately consumed.
This includes identifying personal or sensitive data, third-party data sources, and vendor dependencies that may introduce additional risk or compliance obligations.
With this visibility in place, organizations can begin to tier AI use cases based on risk rather than applying the same controls across the board. Not every AI initiative warrants the same level of scrutiny, and treating them as such can slow innovation without meaningfully improving safety.
As Andi McAleer, Head of Data & AI Governance at the Financial Times, explains: "We can't afford to treat every AI initiative, from a simple internal summarization tool to a customer-facing underwriting model, with the same level of bureaucratic scrutiny."
This inventory enables a more pragmatic approach to governance, one that focuses attention and resources where they matter most.
Or, as Andi puts it, "Governance isn't about eliminating risk, it's about managing it in line with what the business is willing to tolerate."
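To make this concrete, here is a minimal sketch of what a single inventory entry with risk tiering could look like in code. The fields, tier names, and scoring logic are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in an AI use-case inventory (illustrative fields)."""
    name: str
    purpose: str
    user_facing: bool          # does the output reach customers?
    automation_level: str      # e.g. "assistive", "human-in-the-loop", "autonomous"
    personal_data: bool        # does it process personal or sensitive data?
    third_party_vendor: bool   # does it rely on an external AI provider?

    def risk_tier(self) -> str:
        """Tier use cases so scrutiny scales with potential impact."""
        score = sum([
            self.user_facing,
            self.personal_data,
            self.third_party_vendor,
            self.automation_level == "autonomous",
        ])
        if score >= 3:
            return "high"
        if score >= 1:
            return "medium"
        return "low"

# An internal summarization tool lands in a lower tier than a
# customer-facing feature that touches personal data.
summarizer = AIUseCase(
    "meeting-summarizer", "internal note summaries",
    user_facing=False, automation_level="assistive",
    personal_data=False, third_party_vendor=True,
)
print(summarizer.risk_tier())  # medium: only the vendor dependency adds risk
```

Even a toy scorer like this forces the conversation the article describes: which attributes actually drive risk for your organization, and where the tier boundaries sit.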

4. Embed governance into product development workflows
Once this AI use case inventory is completed, your product team can map these checks and regulations directly into the product development process. The aim is to surface risk early so governance supports better product decisions instead of blocking them.
In practice, this means tailoring governance expectations to the level of risk associated with each AI use case. Rather than applying the same controls everywhere, PMs should ensure that checks scale with potential impact.
This typically involves:
- Understanding the level of compliance and risk associated with each AI use case
- Applying the appropriate depth of governance checks based on that risk
- Defining clear criteria for when escalation or formal sign-off is required
- Involving legal, data, or AI governance teams early for higher-risk features
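The checks above can be sketched as a simple lookup from risk tier to required gates. The tier names and gate names here are hypothetical examples, not prescribed by any framework:

```python
# Illustrative mapping from risk tier to governance gates; the tiers
# and gate names are assumptions for the sketch, not a standard.
GOVERNANCE_GATES = {
    "low":    ["self-assessment checklist"],
    "medium": ["self-assessment checklist", "privacy review"],
    "high":   ["self-assessment checklist", "privacy review",
               "legal sign-off", "bias & failure-mode testing"],
}

def required_gates(risk_tier: str) -> list[str]:
    """Return the governance checks a feature must pass before launch."""
    if risk_tier not in GOVERNANCE_GATES:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")
    return GOVERNANCE_GATES[risk_tier]

print(required_gates("high"))
```

Encoding the escalation criteria this explicitly, even in a document rather than code, is what keeps low-risk experiments fast while high-risk features get the scrutiny they need.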
I know what you're thinking: "But won't that slow down development?" Yes, it probably will, but here's Andi's point of view:
"Friction isn't always bad. To use a really well-trodden example, ask a Formula 1 driver, 'Do brakes slow you down, or do they allow you to control to achieve a greater speed through bends and turns in the road?'
"Governance is a catalyst, not a constraint... control enables maximum speed and maximum maneuverability for innovation."
Embedding governance into existing rituals helps keep expectations clear, so governance becomes a shared responsibility and growth opportunity rather than an unexpected hurdle late in development.

5. Implement data, model, and vendor controls
Strong AI governance depends on having the right controls in place across data, models, and third-party tools.
While PMs may not own the technical implementation, they play a critical role in ensuring these controls are defined, prioritized, and reflected in product requirements and vendor decisions.
These include:
- Data controls: Ensuring data used for AI features is appropriate, necessary, and compliant with privacy and security requirements.
- Model controls: Defining expectations for how models are evaluated, documented, and monitored over time. PMs should ensure teams test for performance, bias, and failure modes before launch and review model outputs regularly.
- Vendor and third-party controls: Assessing external AI tools and providers against internal standards for security, data handling, and compliance. Clarifying how data is used, who is accountable for failures, and how risks are managed.
Together, these controls help ensure AI systems behave as expected throughout their lifecycle. For PMs, implementing data, model, and vendor controls is about turning governance principles into concrete product constraints that protect users, the business, and long-term trust.
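As a rough illustration, the vendor controls above can start life as a checklist that every external AI tool must clear before adoption. The questions and pass/fail logic are assumptions for the sketch, not an exhaustive or standard list:

```python
# Hypothetical vendor-review checklist; the questions are examples only.
VENDOR_CHECKLIST = [
    "Is customer data excluded from training the vendor's models?",
    "Is data stored and processed only in approved regions?",
    "Is accountability for harmful outputs contractually defined?",
    "Does the vendor hold the security certifications we require?",
    "Is there a documented incident-response process?",
]

def vendor_review(answers: dict[str, bool]) -> bool:
    """Pass only if every checklist question has a satisfactory answer."""
    open_items = [q for q in VENDOR_CHECKLIST if not answers.get(q, False)]
    for question in open_items:
        print("Open item:", question)
    return not open_items
```

The point is less the code than the discipline: a vendor either clears every item or the open items are tracked as explicit risks, rather than being waved through.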

6. Ensure transparency and user communication
Transparency is a core part of building trust in AI-powered products. When users understand when AI is being used, what it is doing, and where its limitations lie, they are far more likely to engage with it confidently.
With that in mind, transparency should be treated as a product decision, not just a compliance requirement.
This means being clear about where AI influences recommendations, decisions, or outcomes that affect users. This doesn't require exposing technical details, but it does require setting the right expectations.
Users should not be surprised to learn that AI played a role in a meaningful interaction, particularly in higher-risk or customer-facing features.
It also means being honest about limitations. AI systems can be wrong, biased, or uncertain, and failing to communicate this can lead to over-reliance and loss of trust. Simple cues can help users interpret AI outputs more responsibly.
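A "simple cue" can be as small as a disclosure string attached to model output. This sketch is purely illustrative; the wording and the 0.7 confidence threshold are arbitrary choices, not recommendations from any standard:

```python
def label_ai_output(text: str, confidence: float) -> str:
    """Attach an AI disclosure and, when confidence is low, an uncertainty cue.
    The wording and the 0.7 threshold are illustrative assumptions."""
    cue = ""
    if confidence < 0.7:
        cue = " Confidence is low; please verify before relying on this."
    return f"{text}\n\n[AI-generated]{cue}"

print(label_ai_output("Your refund appears to be eligible.", 0.55))
```

Small touches like this set expectations without exposing technical detail, which is exactly the balance transparency requires.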
Finally, transparency should be paired with communication and recourse. Where appropriate, users should be able to provide feedback, question outcomes, or escalate issues.
These mechanisms reinforce accountability and signal that AI is designed to support users, not operate as an unchallengeable black box.
By embedding transparency and communication into the product experience, PMs can meet ethical and regulatory expectations while building AI features that feel understandable, fair, and worthy of user trust.

7. Monitor, audit, and evolve over time
Finally, AI governance doesn't end at launch. Models change, data drifts, user behavior evolves, and regulations continue to develop.
Treating governance as a one-time approval creates a false sense of safety and leaves organizations exposed as AI systems move from controlled environments into real-world use.
PMs should build monitoring and review into the ongoing lifecycle of AI-powered features. Teams should regularly assess whether models are behaving as expected, whether new risks have emerged, and whether the original assumptions behind an AI use case still hold true.
Auditing plays an important role in this process, particularly for higher-risk use cases. Periodic audits help validate compliance with internal policies and external regulations, uncover blind spots, and provide evidence of due diligence when questions arise from regulators, customers, or internal stakeholders.
Most importantly, AI governance frameworks must evolve alongside the organization. As products scale, regulations mature, and AI capabilities advance, governance policies and processes should be revisited and refined.
As Andi puts it:
"Our governance framework can't be a one-time approval stamp. It must be a dynamic system of checks that ensures compliance and safety throughout the entire lifecycle of an AI solution."
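One lightweight way to check whether a model is still behaving as expected is to compare the current distribution of its outputs against a baseline captured at launch, for example with the population stability index (PSI). The 0.2 alert threshold below is a common rule of thumb, not a requirement of any framework:

```python
import math

def population_stability_index(expected: list[float],
                               actual: list[float]) -> float:
    """PSI between two binned distributions (each bin list sums to 1).
    A common rule of thumb: PSI above 0.2 suggests significant drift."""
    eps = 1e-6  # avoid log(0) when a bin is empty
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]  # output distribution at launch
current  = [0.40, 0.30, 0.20, 0.10]  # distribution observed this month

psi = population_stability_index(baseline, current)
if psi > 0.2:
    # Fires for these example distributions: the model's outputs
    # have shifted noticeably since launch and warrant a review.
    print(f"Drift alert: PSI={psi:.3f}, review the model")
```

Metrics like this don't replace periodic audits, but they give teams an early, cheap signal that the original assumptions behind a use case may no longer hold.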
Common AI governance challenges for product managers
Even with a structured AI governance framework in place, product managers continue to face challenges as AI systems scale, evolve, and interact with users. Some of the most common challenges include:
- Lack of visibility into AI usage: AI is often adopted incrementally through features, internal tools, and third-party platforms, making it difficult to track where and how it is being used across the organization.
- Unclear ownership and accountability: Responsibility for AI governance is frequently fragmented across product, legal, data, and engineering teams, leading to gaps in decision-making and follow-through.
- Balancing speed with safety: PMs are under pressure to ship AI-powered features quickly, while also ensuring ethical use, compliance, and risk mitigation.
- Translating regulations into product requirements: Laws and standards are complex, so product teams can struggle to turn legal obligations into actionable product constraints and workflows.
- Managing third-party and vendor risk: Relying on external AI tools and APIs introduces additional uncertainty around data handling, accountability, and long-term compliance.
- Maintaining trust as products scale: Even well-designed AI features can behave unpredictably over time, making it challenging to preserve user trust as models, data, and use cases evolve.
While these challenges persist, a well-defined AI governance framework gives product managers the tools to balance innovation, risk, and accountability as AI-powered products scale.
Quotes from Nassim and Andi were sourced from their sessions at AIAI's Generative AI Summit Toronto 2024 and London 2025, respectively.






