The Governance Operating Model
Governance Must Fit Your Existing Structure
The most common mistake banks make when building AI governance is creating a parallel structure that operates alongside -- but disconnected from -- their existing risk management framework. AI governance does not need a new organizational hierarchy. It needs to extend the structures you already have.
Your institution already has a three-lines-of-defense model. You already have a model risk management framework. You already have vendor management, data governance, and compliance functions. AI governance should integrate into these existing structures, not compete with them.
BANKING ANALOGY
Building AI governance on top of your existing framework is like adding a new asset class to your investment portfolio. When banks started trading derivatives, they did not create an entirely separate risk management function. They extended their existing market risk frameworks to accommodate the new instruments -- adding derivatives-specific expertise, updating risk models, and adjusting limits, but all within the established governance structure. AI governance should follow the same pattern: new capabilities and expertise, same foundational structure.
Organizational Approaches
Banks typically choose between two governance models, or a hybrid of both:
Centralized: AI Center of Excellence (CoE)
A dedicated team with enterprise-wide responsibility for AI strategy, standards, and oversight. The CoE typically owns:
- AI technology standards and approved tool/model inventory
- Governance policies and deployment approval processes
- Shared AI infrastructure and platform services
- Training and capability development across the institution
Advantages: Consistent standards, economies of scale, strong governance. Disadvantages: Can become a bottleneck, may be disconnected from business line needs, slower response to opportunities.
Federated: Business Line Ownership
Each business line owns its AI initiatives, with lightweight central coordination for standards and governance:
- Business lines identify use cases and drive implementation
- A small central team sets minimum standards and facilitates knowledge sharing
- Risk management and compliance provide oversight within their existing mandates
Advantages: Faster innovation, closer to business needs, stronger ownership. Disadvantages: Inconsistent standards, duplicated effort, harder to maintain governance coverage.
Hybrid: Hub-and-Spoke
The most common model in practice combines centralized standards with federated execution:
- Central AI team sets standards, manages shared infrastructure, and coordinates governance
- Business line AI teams drive use case identification and implementation
- Guardrails and policies are set centrally; execution decisions are made locally
Key Governance Roles
Effective AI governance requires clear role definition. The following roles may be new positions or added responsibilities for existing roles, depending on the institution's size and AI maturity.
| Role | Responsibility | Reports To | Key Activities |
|---|---|---|---|
| AI Executive Sponsor | Strategic accountability for AI outcomes | CEO / Board | Sets AI strategy, secures funding, board reporting |
| AI Program Lead | Day-to-day AI program management | AI Executive Sponsor | Coordinates initiatives, tracks portfolio, manages roadmap |
| AI Model Owner | Accountable for a specific AI deployment | Business Line Leader | Documents use case, monitors performance, owns outcomes |
| Model Validator | Independent validation of AI models | Chief Risk Officer | Tests fairness, accuracy, robustness; issues validation opinions |
| Data Steward | Governs data flowing into AI systems | Chief Data Officer | Data classification, quality, residency compliance |
| AI Ethics Reviewer | Evaluates ethical implications | Compliance / Legal | Assesses broader societal impact, customer fairness |
| AI Platform Engineer | Builds and maintains AI infrastructure | CTO / CIO | Model hosting, guardrails implementation, monitoring |
| Prompt Engineer | Designs and optimizes AI interactions | AI Program Lead | System instructions, prompt design, output quality |
KEY TERM
AI Model Inventory: A comprehensive, maintained registry of every AI system deployed or under development within the institution. The model inventory includes the model's purpose, owner, risk tier, data sources, validation status, and deployment environment. Regulators expect banks to maintain model inventories for all models -- including AI -- and to demonstrate that every model in production has been through appropriate governance processes.
The Policy Framework
AI governance requires a layered policy framework that ranges from high-level principles to detailed operational procedures.
Tier 1: AI Principles and Strategy
Board-approved principles that establish the institution's approach to AI:
- What outcomes the institution expects from AI
- What ethical principles guide AI deployment
- What risk appetite the institution has for AI initiatives
- How AI governance relates to existing enterprise risk management
Tier 2: AI Acceptable Use Policy
Enterprise-wide policy that defines what is and is not permitted:
- Which AI tools and platforms are approved for use
- What data classifications can be processed by AI systems
- What customer-facing AI interactions require human oversight
- What constitutes prohibited AI use (e.g., autonomous lending decisions without human review, or entering customer data into unapproved AI tools)
Tier 3: AI Deployment Standards
Technical and operational standards for AI systems moving into production:
- Minimum testing requirements by risk tier
- Documentation requirements (use case definition, data lineage, prompt design)
- Performance monitoring requirements
- Incident response procedures
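Tiered standards like these are easiest to enforce when they are machine-readable, so a deployment pipeline can look up what a given system must satisfy. A minimal sketch in Python follows; the specific requirements, field names, and the convention that tier 1 is the highest-risk tier are illustrative assumptions, not values from any regulation:

```python
# Illustrative mapping of risk tiers to minimum deployment requirements.
# Assumption: tier 1 = highest risk, tier 3 = lowest. The thresholds and
# field names below are examples, not regulatory or institutional values.
DEPLOYMENT_STANDARDS = {
    1: {"fairness_testing": True,  "human_review": True,
        "monitoring": "real-time",
        "docs": ["use_case", "data_lineage", "prompt_design"]},
    2: {"fairness_testing": True,  "human_review": False,
        "monitoring": "daily",
        "docs": ["use_case", "data_lineage"]},
    3: {"fairness_testing": False, "human_review": False,
        "monitoring": "weekly",
        "docs": ["use_case"]},
}

def required_checks(risk_tier: int) -> dict:
    """Look up minimum requirements; unknown tiers default to the strictest."""
    return DEPLOYMENT_STANDARDS.get(risk_tier, DEPLOYMENT_STANDARDS[1])
```

Defaulting unknown tiers to the strictest requirements is a deliberate fail-safe choice: a misclassified system gets more scrutiny, never less.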
Tier 4: Operational Procedures
Detailed procedures for day-to-day governance activities:
- How to submit a new AI use case for approval
- How to request access to AI tools and data
- How to report AI incidents or unexpected behavior
- How to conduct periodic model reviews
The Approval Workflow
Every AI deployment should go through a governance approval process proportionate to its risk level. A practical three-gate model:
Gate 1: Use Case Approval
- Business sponsor presents the proposed use case, expected value, and risk assessment
- Data governance team confirms data classification and handling requirements
- AI Program Lead confirms alignment with enterprise AI strategy
- Decision: Proceed to development, modify scope, or decline
Gate 2: Pre-Deployment Validation
- Model validation team conducts independent testing (fairness, accuracy, robustness)
- Compliance reviews customer-facing elements (disclosures, disclaimers, adverse action capability)
- Information security validates the deployment environment and access controls
- Decision: Approve for production, remediate findings, or stop
Gate 3: Post-Deployment Review
- Conducted 30-90 days after deployment and annually thereafter
- Model owner presents performance metrics, user feedback, and incident log
- Model validator reviews ongoing monitoring data for drift, bias, or degradation
- Decision: Continue, enhance, reduce scope, or retire
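The three gates above form a simple state machine: a system advances only on a favorable decision, stays put for remediation otherwise, and cycles through post-deployment review indefinitely. A minimal sketch, with the decision semantics collapsed to pass/fail for illustration (the source distinguishes richer outcomes such as modify scope, reduce scope, or retire):

```python
from enum import Enum

class Gate(Enum):
    USE_CASE_APPROVAL = 1
    PRE_DEPLOYMENT_VALIDATION = 2
    POST_DEPLOYMENT_REVIEW = 3

def next_gate(current: Gate, passed: bool) -> Gate:
    """Advance to the next gate only on a favorable decision.

    An unfavorable decision keeps the system at the current gate for
    remediation or rescoping. Post-deployment review is terminal but
    repeating: it recurs 30-90 days after deployment, then annually.
    """
    if not passed or current is Gate.POST_DEPLOYMENT_REVIEW:
        return current
    return Gate(current.value + 1)
```

A real implementation would also record who made each decision and when, since the audit trail is itself a governance requirement.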
The AI Model Inventory in Practice
The model inventory is the backbone of AI governance. Without knowing what AI systems are deployed, effective governance is impossible.
A practical model inventory for AI should capture:
- Model identification: Name, unique ID, version, vendor (if third-party)
- Business context: Use case description, business sponsor, model owner
- Risk classification: Tier level (1-3), customer impact, regulatory sensitivity
- Technical details: Foundation model used, deployment environment, data sources
- Governance status: Approval date, last validation date, next review date, open findings
- Performance metrics: Key performance indicators, monitoring frequency, escalation thresholds
The inventory must also capture shadow AI -- instances where employees use unapproved AI tools for work purposes. This requires a combination of technology controls (network monitoring, DLP tools) and cultural approaches (making approved tools accessible and useful enough that shadow AI becomes unnecessary).
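The inventory fields listed above map naturally onto a structured record. The sketch below shows one possible shape as a Python dataclass; the field names and the overdue-review helper are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelInventoryEntry:
    """One entry in the AI model inventory (illustrative schema)."""
    model_id: str                      # unique identifier
    name: str
    version: str
    vendor: Optional[str]              # None for models built in-house
    use_case: str
    business_sponsor: str
    model_owner: str
    risk_tier: int                     # source uses tiers 1-3
    foundation_model: str
    data_sources: list
    approval_date: Optional[date] = None
    last_validation: Optional[date] = None
    next_review: Optional[date] = None
    open_findings: int = 0

    def is_overdue_for_review(self, today: date) -> bool:
        """Flag entries whose scheduled review date has passed."""
        return self.next_review is not None and today > self.next_review
```

Filtering the inventory with a helper like `is_overdue_for_review` is one way to surface the policy-versus-practice gap discussed below: entries with stale validation dates are governance on paper only.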
Warning
The biggest governance gap at most banking institutions is not the absence of AI policies -- it is the gap between policy and practice. Policies exist on paper, but AI tools are being used without going through the approval workflow, model inventories are incomplete, and monitoring is sporadic. Effective governance requires enforcement mechanisms: technology controls that prevent unapproved AI use, audit processes that verify compliance, and consequences for policy violations. A governance framework that exists only in a policy manual is not governance -- it is theater.
Incident Response for AI
AI systems will fail -- outputs will be wrong, guardrails will be bypassed, models will behave unexpectedly. Institutions need an AI-specific incident response capability:
- Detection: Automated monitoring, user reporting, compliance review
- Assessment: Severity classification (customer impact, regulatory implications, scope)
- Containment: Ability to immediately disable or restrict an AI system
- Investigation: Root cause analysis -- was it a model issue, data issue, prompt issue, or guardrail failure?
- Remediation: Fix the issue, re-validate, and update governance documentation
- Reporting: Internal escalation, regulatory notification if required, lessons learned
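The assessment step names three severity inputs: customer impact, regulatory implications, and scope. One way to make that classification repeatable is a small rules function; the specific thresholds below are assumptions for illustration, not a standard severity matrix:

```python
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

def classify_severity(customer_impact: bool,
                      regulatory_implications: bool,
                      affected_systems: int) -> Severity:
    """Illustrative severity rules based on the three assessment criteria.

    The ordering of these rules is an assumption: incidents with both
    customer and regulatory exposure rank highest, either alone ranks
    high, and multi-system scope alone ranks medium.
    """
    if customer_impact and regulatory_implications:
        return Severity.CRITICAL
    if customer_impact or regulatory_implications:
        return Severity.HIGH
    if affected_systems > 1:
        return Severity.MEDIUM
    return Severity.LOW
```

Encoding the rules this way also makes them auditable: the same incident always classifies the same way, regardless of who is on call.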
Tip
Build your AI incident response process as an extension of your existing technology incident management framework. Most banks already have mature incident response for system outages, data breaches, and security events. Add an "AI incident" category with AI-specific assessment criteria and escalation paths. This is faster and more effective than building a separate AI incident process from scratch.
Quick Recap
- AI governance must integrate with existing frameworks: the three-lines-of-defense model, model risk management, and vendor oversight already provide the structure -- extend them rather than building parallel governance
- The hub-and-spoke model balances governance and speed: centralized standards and policies with federated execution in business lines is the most common and effective approach
- Eight key roles drive AI governance: from executive sponsor to prompt engineer, each role has distinct accountability -- define them clearly even if they overlap with existing positions
- A three-gate approval workflow ensures proportionate governance: use case approval, pre-deployment validation, and post-deployment review -- each gate proportionate to the AI system's risk tier
- The model inventory is the backbone: if you do not know what AI systems are deployed (including shadow AI), effective governance is impossible
KNOWLEDGE CHECK
1. A mid-size bank is establishing its AI governance structure. The CTO wants a centralized AI Center of Excellence, while business line leaders argue for federated ownership. Based on industry best practice, which approach is most likely to succeed?
2. Which of the following is the MOST critical gap in AI governance at most banking institutions today?
3. At what point in the AI deployment lifecycle should the model inventory entry be created for a new AI use case?