OpenAI Function Calling, IBM watsonx Orchestrate, Amazon AgentCore
The Platform-Native Approach
The orchestration frameworks we have covered so far -- LangChain, LangGraph, AutoGen -- are independent software layers that sit between your application and your LLM providers. But the major AI platform vendors have also built orchestration capabilities directly into their ecosystems. For banking executives, these platform-native options deserve evaluation because they often align with existing vendor relationships, procurement processes, and security certifications.
This unit covers three important vendor-native approaches: OpenAI's function calling, IBM watsonx Orchestrate, and Amazon AgentCore.
OpenAI Function Calling
What It Does
OpenAI's function calling is not a full orchestration framework -- it is a model-level capability that allows GPT models to generate structured calls to functions you define. Instead of the model producing free text that your application must parse, function calling lets the model output a structured JSON object specifying which function to call and with what arguments.
This sounds subtle, but it is a significant architectural capability. It means the LLM itself decides when to use tools, which tool to use, and what parameters to pass -- reliably and in a machine-readable format.
How It Works
You define functions using JSON Schema and include them in your API request. When the model determines that a function call would help answer the user's question, it returns a structured function call instead of (or alongside) a text response. Your application executes the function, passes the result back to the model, and the model incorporates the result into its response.
For example, a banking assistant might have access to functions like:
- `get_account_balance(account_id)` -- query the core banking system
- `lookup_policy(topic, section)` -- search the compliance policy database
- `calculate_dti_ratio(income, obligations)` -- run a debt-to-income calculation
The model decides which functions are relevant to the user's question and calls them in the appropriate sequence.
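A minimal sketch of this round trip, using the JSON Schema tool format the OpenAI Chat Completions API expects. The banking function here is an illustrative local stub, not a real core-system call, and the example tool call mimics the structured output the model emits:

```python
import json

# Tool definition in the JSON Schema format the Chat Completions API expects.
# Only the DTI example is shown; the other functions would follow the same shape.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "calculate_dti_ratio",
            "description": "Compute a debt-to-income ratio.",
            "parameters": {
                "type": "object",
                "properties": {
                    "income": {"type": "number", "description": "Gross monthly income"},
                    "obligations": {"type": "number", "description": "Total monthly debt payments"},
                },
                "required": ["income", "obligations"],
            },
        },
    }
]

# Illustrative stub -- a real deployment would call into banking systems.
def calculate_dti_ratio(income: float, obligations: float) -> float:
    return round(obligations / income, 4)

REGISTRY = {"calculate_dti_ratio": calculate_dti_ratio}

def dispatch(tool_call: dict) -> str:
    """Execute one structured tool call as returned by the model.

    The call carries a function name plus JSON-encoded arguments; the string
    result would be sent back to the model in a role="tool" message.
    """
    fn = REGISTRY[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return json.dumps(fn(**args))

# The kind of structured call the model emits instead of free text:
example_call = {
    "id": "call_123",
    "type": "function",
    "function": {
        "name": "calculate_dti_ratio",
        "arguments": '{"income": 8000, "obligations": 2400}',
    },
}
print(dispatch(example_call))  # prints 0.3
```

Because the arguments arrive as machine-readable JSON rather than prose, the dispatch step is a simple lookup-and-execute -- no text parsing or intent guessing in your application code.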
Banking Relevance
Function calling is the foundation for building AI agents with OpenAI's models. For banks using GPT-4 through Azure OpenAI Service, function calling provides a clean, well-supported mechanism for connecting the model to internal systems. It is simpler than adopting a full orchestration framework when your use cases are tool-calling focused.
BANKING ANALOGY
Think of function calling like the structured message formats used in payment processing. When your bank sends a wire transfer, you do not write a free-text letter to the receiving bank describing the transaction. Instead, you use a structured format (like a SWIFT MT103) with specific fields for amount, currency, beneficiary, and purpose. Function calling does the same thing for AI -- instead of the model producing unstructured text that your systems must interpret, it produces structured messages that your systems can process directly and reliably.
IBM watsonx Orchestrate
Enterprise AI for Regulated Industries
IBM has a decades-long presence in banking technology, and watsonx Orchestrate reflects that heritage. It is an enterprise-grade platform for building AI assistants and automation workflows, designed specifically for large organizations with strict governance requirements.
Key Capabilities
Pre-built skill catalog. Watsonx Orchestrate comes with a library of pre-built "skills" -- integrations with common enterprise systems like Salesforce, SAP, ServiceNow, and Microsoft 365. For banks, this means less custom integration development for common workflows.
Conversational orchestration. Users interact with watsonx Orchestrate through natural language, and the platform routes requests to the appropriate skills, chains them together, and manages the workflow. This conversational interface makes it accessible to business users, not just developers.
Governance and compliance. IBM has built governance features into the platform from the ground up: audit trails, access controls, model monitoring, and explainability. For banks operating under model risk management requirements such as Federal Reserve SR 11-7 and OCC Bulletin 2011-12, these built-in governance capabilities reduce the compliance burden.
Hybrid cloud support. IBM supports deployment across public cloud, private cloud, and on-premises infrastructure -- critical for banks with data residency requirements or those that cannot host certain workloads in public cloud environments.
Banking Considerations
IBM's existing relationships with many large banks -- through core banking, mainframe, and consulting engagements -- make watsonx Orchestrate a natural conversation in many institutions. The platform's emphasis on governance and compliance alignment resonates with banking risk and compliance functions. However, the trade-off is typically less flexibility and innovation speed compared to open-source alternatives.
Amazon AgentCore
Managed Agent Infrastructure
Amazon AgentCore, part of the broader Amazon Bedrock ecosystem, provides managed infrastructure for building and running AI agents. Rather than building agent orchestration from scratch, AgentCore handles the infrastructure layer -- session management, memory, tool execution, and scaling -- so your team focuses on business logic.
Key Capabilities
Managed agent runtime. AgentCore provides a fully managed environment for running agents, handling the operational complexity of scaling, session management, and inference optimization. For banking operations teams, this reduces the operational burden of maintaining AI infrastructure.
Knowledge bases integration. AgentCore connects to Amazon Bedrock Knowledge Bases, which provide managed RAG infrastructure including document ingestion, chunking, embedding, and vector storage. This end-to-end managed pipeline simplifies the path from raw documents to AI-accessible knowledge.
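The chunking step of that managed pipeline can be illustrated with a minimal fixed-size chunker. This is a local sketch of the concept only -- the chunk sizes are illustrative defaults, not Bedrock's, and the managed service also handles embedding and vector indexing:

```python
def chunk_text(text: str, chunk_size: int = 300, overlap: int = 50) -> list[str]:
    """Split a document into overlapping fixed-size chunks.

    Overlap preserves context across chunk boundaries, so a retrieved chunk
    is less likely to cut a policy clause in half. Bedrock Knowledge Bases
    performs a managed version of this step during document ingestion.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Illustrative input: a repeated policy sentence standing in for a compliance manual.
doc = "Policy 4.2: Wire transfers above the reporting threshold require dual approval. " * 20
pieces = chunk_text(doc, chunk_size=200, overlap=40)
print(len(pieces), "chunks, first chunk length", len(pieces[0]))
```

Each chunk is then embedded and stored in a vector index; at query time the agent retrieves the most relevant chunks and grounds its answer in them.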
Multi-model support. Through Amazon Bedrock, AgentCore supports models from multiple providers -- Anthropic Claude, Meta Llama, Amazon Titan, and others -- giving banks flexibility to choose the best model for each use case without vendor lock-in on the model layer.
AWS security integration. AgentCore integrates with AWS Identity and Access Management (IAM), VPC networking, and AWS CloudTrail logging. For banks already running workloads on AWS, this means AI agents inherit the same security controls applied to all other workloads.
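A hedged sketch of invoking a deployed agent from Python via boto3's Bedrock agent runtime client. The agent and alias IDs are placeholders, AWS credentials and a deployed agent are assumed, and the stream-parsing helper is shown with a simulated event stream so the shape of the response is visible:

```python
import uuid

def collect_agent_reply(completion_events) -> str:
    """Assemble the final text from an InvokeAgent event stream.

    Each streamed event may carry a 'chunk' with raw UTF-8 bytes of the
    agent's reply; other event types (traces, citations) are skipped here.
    """
    parts = []
    for event in completion_events:
        chunk = event.get("chunk")
        if chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

def ask_agent(agent_id: str, agent_alias_id: str, prompt: str) -> str:
    """Invoke a deployed Bedrock agent and return its text reply.

    Requires AWS credentials and a deployed agent; calls are authorized by
    the caller's IAM permissions and logged by CloudTrail, per the security
    integration described above.
    """
    import boto3  # assumption: a boto3 version with bedrock-agent-runtime support
    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=agent_alias_id,
        sessionId=str(uuid.uuid4()),  # session state is managed service-side
        inputText=prompt,
    )
    return collect_agent_reply(response["completion"])

# Local demonstration of the helper with a simulated event stream:
simulated = [
    {"chunk": {"bytes": b"Balance: "}},
    {"trace": {}},  # trace events carry reasoning metadata, not reply text
    {"chunk": {"bytes": b"$1,250.00"}},
]
print(collect_agent_reply(simulated))  # prints Balance: $1,250.00
```

Note that the application code contains no scaling, memory, or session logic -- that is the infrastructure layer the managed runtime absorbs.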
Banking Considerations
AWS has significant market share in banking cloud infrastructure. For institutions already running on AWS, AgentCore provides a natural extension of their existing cloud investment with consistent security, compliance, and operational models. The managed nature reduces the specialized AI engineering talent required, which can be advantageous for banks competing for scarce AI talent.
Tip
When evaluating these vendor-native options, start with your existing cloud and vendor relationships. If your bank already has enterprise agreements, security certifications, and operational expertise with a specific vendor, the integration and operational advantages of that vendor's orchestration tools often outweigh the technical feature advantages of independent frameworks. The best framework is the one your team can operate securely and reliably in production.
Comparing the Vendor Approaches
| Dimension | OpenAI Function Calling | IBM watsonx Orchestrate | Amazon AgentCore |
|---|---|---|---|
| Type | Model capability | Full platform | Managed infrastructure |
| Complexity | Low -- single API feature | High -- enterprise platform | Medium -- managed service |
| Best for | Tool calling with GPT models | Governance-heavy enterprise workflows | AWS-native agent deployment |
| Integration | Azure OpenAI, direct API | IBM ecosystem, hybrid cloud | AWS Bedrock, IAM, CloudTrail |
| Governance | Build your own | Built-in, enterprise-grade | AWS security model |
| Flexibility | High (minimal abstraction) | Lower (platform-defined patterns) | Medium (within AWS ecosystem) |
Quick Recap
- OpenAI function calling provides structured tool integration at the model level -- simple, powerful, but not a full orchestration framework
- IBM watsonx Orchestrate offers enterprise-grade orchestration with built-in governance, ideal for banks with existing IBM relationships
- Amazon AgentCore provides managed agent infrastructure within the AWS ecosystem, reducing operational burden
- Existing cloud vendor relationships and security certifications often outweigh pure technical feature comparisons
- Each approach involves trade-offs between flexibility, governance, operational burden, and ecosystem lock-in
KNOWLEDGE CHECK
What distinguishes OpenAI function calling from a full orchestration framework like LangChain?
A heavily regulated bank with strict data residency requirements needs AI orchestration that can run on-premises. Which option best addresses this requirement?
What is the most pragmatic consideration when a bank is choosing between these vendor-native orchestration options?