AI Foundations for Bankers

Building the AI Platform Team

Intermediate · 10 min read · Tags: team-building, talent, organization, hiring, skills

The Talent Reality

Every banking institution wants to build an AI team. Very few have a realistic plan for doing so.

The talent market for AI engineers, ML operations specialists, and data scientists is intensely competitive. Banks compete for the same talent pool as technology companies that can offer higher compensation, faster-paced work environments, and the prestige of working on cutting-edge consumer products. This is not a problem you solve by simply posting job listings and hoping for the best.

A successful AI team strategy for banking requires honesty about three things: what roles you actually need (not all of them), where you can realistically hire (not everywhere), and what you should build internally versus buy from vendors (not everything).

BANKING ANALOGY

Building an AI team is like building a capital markets desk. You need a few highly skilled traders (your ML engineers), support staff who understand the infrastructure (data engineers), product people who translate business needs into specifications (prompt engineers and AI product managers), and risk oversight (model validators). You do not need to hire every role on day one. You start with a small core team, build capability through experience, and scale as the business grows. And critically, you leverage your vendor relationships for specialized capabilities that do not make sense to build in-house.

The Core Roles

ML Engineer / AI Engineer

What they do: Build, deploy, and maintain AI systems. They select and configure foundation models, build RAG pipelines, implement guardrails, and manage the technical infrastructure.

Why you need them: Without ML engineering capability, your institution is entirely dependent on vendors and consultants for every AI decision. You need at least enough in-house expertise to evaluate vendor claims, make architecture decisions, and maintain production systems.

Hiring reality: This is the hardest role to fill. Experienced ML engineers command premium compensation. Consider hiring mid-level engineers with strong software engineering fundamentals and investing in AI-specific training.
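To make the RAG work concrete: the heart of such a pipeline is a retrieval step that ranks internal documents against a user's question. The sketch below is purely illustrative -- it scores documents with toy bag-of-words cosine similarity, where a production system built by your ML engineers would use embedding models and a vector database. The sample documents are hypothetical.

```python
# Illustrative sketch of the retrieval step in a RAG pipeline.
# Toy bag-of-words cosine similarity stands in for real embeddings.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Word-count vector; a real pipeline would call an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    qv = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "Wire transfer limits apply to international payments.",
    "Mortgage rates are reviewed quarterly by the pricing committee.",
    "International wire transfers require compliance screening.",
]
results = retrieve("international wire transfer policy", docs)
# results now holds the two wire-transfer documents, not the mortgage one
```

The retrieved passages would then be inserted into the model's prompt as context -- the step where guardrails and prompt engineering take over.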

Data Engineer

What they do: Build and maintain the data pipelines that feed AI systems. They handle data extraction, transformation, quality, and integration across the institution's data sources.

Why you need them: AI is only as good as the data it works with. Most AI project failures trace back to data quality, not model quality. Data engineers ensure that the right data, in the right format, at the right quality, reaches your AI systems.

Hiring reality: Easier to hire than ML engineers, and many banks already have data engineering talent. Upskilling existing data engineers with AI-specific skills (embedding pipelines, vector database management) is often the fastest path.
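One AI-specific skill worth naming: before documents can be embedded and loaded into a vector database, they must be split into chunks of a size the embedding model handles well. A minimal sketch of that chunking step is below; the chunk size and overlap values are illustrative, not recommendations.

```python
# Hypothetical chunking step from an embedding pipeline: split a long
# policy document into overlapping word-window chunks before embedding.
def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    words = text.split()
    if not words:
        return []
    chunks = []
    step = chunk_size - overlap  # overlap preserves context across boundaries
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```

The overlap matters: without it, a sentence split across a chunk boundary can become unretrievable, which surfaces later as a puzzling quality problem in the AI application rather than an obvious data bug.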

AI Product Manager

What they do: Translate business needs into AI product requirements. They define use cases, work with business stakeholders to prioritize features, measure outcomes, and ensure AI products solve real business problems.

Why you need them: Without a product manager, AI teams build technically impressive solutions that nobody uses. The product manager ensures every AI initiative is grounded in a business problem with measurable outcomes.

Hiring reality: Look for product managers with domain expertise in banking who are willing to learn AI concepts, rather than AI specialists who do not understand banking. Banking domain knowledge is harder to teach than AI concepts.

Prompt Engineer

What they do: Design, test, and optimize the prompts and system instructions that guide AI model behavior. They develop prompt templates, implement orchestration patterns, and tune AI interactions for quality and consistency.

Why you need them: The difference between a mediocre and excellent AI application often comes down to prompt engineering. Well-crafted prompts can dramatically improve output quality, reduce hallucinations, and ensure compliance with institutional standards.

Hiring reality: This is a new role with no established career path. Look for candidates with strong writing skills, analytical thinking, and a systematic testing mindset -- backgrounds in technical writing, QA engineering, or even compliance are surprisingly well-suited.
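A glimpse of the artifacts a prompt engineer maintains: versioned prompt templates plus automated checks that run against model outputs. The template wording, institution name, and the account-number rule below are all hypothetical examples, not a prescribed standard.

```python
# Illustrative prompt template and an automated compliance check --
# the kind of artifact a prompt engineer writes, tests, and versions.
import re

SYSTEM_TEMPLATE = (
    "You are an assistant for {institution} staff. "
    "Answer only from the provided context. "
    "If the context does not contain the answer, say you do not know. "
    "Never include account numbers in your response."
)

def build_system_prompt(institution: str) -> str:
    return SYSTEM_TEMPLATE.format(institution=institution)

def violates_policy(response: str) -> bool:
    # Flag responses containing 8+ consecutive digits, a crude proxy
    # for a leaked account number; real guardrails would be richer.
    return bool(re.search(r"\d{8,}", response))

prompt = build_system_prompt("First Example Bank")
```

Checks like `violates_policy` are exactly the "systematic testing mindset" the role demands: every template change gets re-run against a suite of known-good and known-bad outputs before it reaches production.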

Model Risk Analyst (AI-focused)

What they do: Validate AI models, test for bias and fairness, monitor ongoing performance, and ensure compliance with MRM frameworks.

Why you need them: Regulators expect independent model validation for all models, including AI. This role bridges your existing MRM function with AI-specific validation requirements.

Hiring reality: Extend your existing model risk team. Current model validators with quantitative backgrounds can learn AI validation methodologies more readily than AI engineers can learn banking regulation.

Team Models

Centralized: AI Center of Excellence

All AI talent sits in a single team that serves the entire institution.

Works when: The institution is in early stages (1-3 production use cases), AI talent is scarce and must be shared, and standardization is more important than speed.

Risks: Becomes a bottleneck as demand grows. Business units feel underserved. Projects are prioritized by the CoE rather than by business value.

Embedded: AI in Business Lines

AI talent is distributed across business lines, with each major division having its own AI engineers and data scientists.

Works when: The institution has mature AI capabilities, sufficient talent to distribute, and well-established governance standards that embedded teams follow.

Risks: Inconsistent standards, duplicated effort, difficulty attracting and retaining talent in smaller teams, governance gaps.

Hybrid: Hub-and-Spoke

A central AI platform team provides shared infrastructure, standards, and governance. Business line AI teams (or liaisons) drive use case identification and domain-specific implementation.

Works when: The institution has growing AI capabilities and wants to balance governance with business responsiveness. This is the most common model for banks at Stage 2-3 of the maturity model.

The hub provides: Shared AI infrastructure, model hosting, guardrails framework, governance standards, training and capability development, vendor management.

The spokes provide: Business domain expertise, use case identification, stakeholder management, domain-specific prompt engineering, outcome measurement.

The Build-vs-Buy Decision

Not every AI capability needs to be built in-house. A realistic assessment of what to build versus buy:

Build internally:

  • Proprietary RAG systems over your institution's data (competitive advantage from your data, not from the technology)
  • Prompt engineering and system instructions (encode your institutional knowledge and compliance requirements)
  • Integration with internal systems (nobody knows your systems like your team)
  • Governance and monitoring (cannot outsource regulatory accountability)

Buy from vendors:

  • Foundation models (do not train your own LLM -- use commercial APIs from providers such as OpenAI, Anthropic, or Google, or access them through platforms like AWS Bedrock)
  • AI infrastructure (use managed services for model hosting, vector databases, and orchestration)
  • Specialized tools (document parsing, PII detection, content safety -- buy proven solutions rather than building from scratch)
  • Initial consulting (bring in expertise for architecture design and first deployment, then build internal capability to maintain and extend)

Tip

Your first AI hire should be a senior engineer who can serve as a technical lead -- someone who can evaluate vendor offerings, make architecture decisions, and mentor junior team members. Do not start by hiring a team of junior engineers with no experienced leader. One strong senior hire provides more value than three junior hires in the early stages of an AI program.

Realistic Staffing Timeline

Months 1-3 (Exploring):

  • 1 senior AI engineer (technical lead)
  • 1 data engineer (may be repurposed from existing team)
  • 1 AI product manager (may be part-time, repurposed from existing PM role)

Months 3-9 (Experimenting):

  • Add 1-2 junior AI engineers
  • Add 1 prompt engineer
  • Assign a model risk analyst to AI validation (part-time from existing MRM team)

Months 9-18 (Scaling):

  • Grow to 5-8 AI engineers
  • Add dedicated prompt engineering capability
  • Full-time model risk analyst for AI
  • Consider hub-and-spoke model with business line liaisons

Quick Recap

  • Five core roles drive an AI team: ML engineer, data engineer, AI product manager, prompt engineer, and model risk analyst -- but you do not need all of them on day one
  • Start small and grow: begin with a senior technical lead plus repurposed data engineering and product management, then add roles as capabilities mature
  • The hub-and-spoke model balances governance and speed: central platform team for infrastructure and standards, business line liaisons for domain expertise and use case ownership
  • Build what differentiates, buy what commoditizes: build proprietary RAG and governance internally, buy foundation models and infrastructure from vendors
  • Banking domain expertise matters more than AI expertise: it is easier to teach AI concepts to a banking professional than to teach banking to an AI engineer

KNOWLEDGE CHECK

A mid-size bank is hiring its first dedicated AI team member. According to the framework, which hire should come first?

Why does the framework recommend that a bank's AI product manager should have banking domain expertise rather than AI expertise?

Under what circumstances should a banking institution build a custom AI capability internally rather than buying from a vendor?