What Orchestration Frameworks Actually Do
Beyond the Single API Call
If you have been following this course, you now understand what Large Language Models can do -- summarize documents, answer questions, generate drafts. But here is the reality that every banking technology leader discovers quickly: calling an LLM API once and getting a response is the easy part. The hard part is building workflows where the model reasons across multiple steps, accesses your internal systems, remembers context from earlier in the conversation, and knows when to hand off to a human.
This is exactly what orchestration frameworks solve. They are the middleware layer that turns a raw language model into a functioning enterprise application.
KEY TERM
Orchestration Framework: Software that coordinates LLMs, tools, data sources, and decision logic into multi-step workflows. Rather than making individual API calls, an orchestration framework manages the entire flow -- from initial prompt to final output -- including intermediate reasoning, tool calls, error handling, and memory management.
Why Raw API Calls Fall Short
Consider a seemingly simple banking use case: a relationship manager asks an AI assistant, "What is our current exposure to the commercial real estate sector, and how does it compare to our board-approved concentration limits?"
Answering this question with raw API calls would require your engineering team to manually code every step:
- Parse the question to understand the intent
- Query the loan portfolio database for CRE exposure figures
- Query the policy database for concentration limit thresholds
- Send both datasets plus the original question to the LLM
- Handle the case where the LLM needs clarification
- Format the response appropriately
- Log the interaction for audit purposes
Each step requires custom code, error handling, retry logic, and state management. Multiply this by dozens of use cases, and you have an unmaintainable codebase. An orchestration framework provides the scaffolding to build these workflows without reinventing the plumbing every time.
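To make the pain concrete, here is a minimal sketch of what hand-rolled orchestration looks like for the CRE exposure question above. Every function here is a hypothetical stand-in (there is no real loan database, policy store, or LLM API behind them), and the sketch deliberately omits the retry logic, error handling, and state management a production version would need:

```python
# Hand-rolled orchestration for one question. All helpers are
# hypothetical stand-ins for real systems.

def query_cre_exposure() -> float:
    # Stand-in for a loan-portfolio database query ($ millions).
    return 412.5

def query_concentration_limit() -> float:
    # Stand-in for a policy-database lookup ($ millions).
    return 450.0

def call_llm(prompt: str) -> str:
    # Stand-in for a raw LLM API call; a real call also needs
    # retries, timeouts, and failure handling.
    return f"Summary based on: {prompt}"

def answer_exposure_question(question: str) -> str:
    exposure = query_cre_exposure()        # step: portfolio data
    limit = query_concentration_limit()    # step: policy data
    prompt = (                             # step: assemble context
        f"{question}\n"
        f"CRE exposure: ${exposure}M\n"
        f"Board limit: ${limit}M"
    )
    answer = call_llm(prompt)              # step: model call
    print(f"AUDIT: {question!r} answered") # step: audit logging
    return answer
```

Even this toy version hard-codes the sequencing, the prompt assembly, and the audit step -- multiply it by dozens of use cases and the maintenance burden becomes clear.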
BANKING ANALOGY
Think of an orchestration framework as the workflow engine in a loan origination system. When a commercial loan application arrives, the origination system does not simply pass it to a single underwriter and wait. It routes the application through a defined workflow: credit analysis, collateral valuation, legal review, compliance checks, approval authority -- each step handled by the right specialist at the right time, with the system tracking status, enforcing sequencing rules, and managing handoffs. An orchestration framework does the same thing for AI workflows: it routes tasks to the right models and tools, manages the sequence of operations, and tracks the state of each step.
Core Capabilities
Every orchestration framework, regardless of vendor, provides some combination of these four capabilities:
Chaining
Chaining connects multiple LLM calls into a sequence where the output of one step becomes the input to the next. A document review chain might first extract key terms, then classify the document type, then summarize relevant sections, and finally generate a risk assessment -- all as a single orchestrated workflow.
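The document review chain above can be sketched in a few lines. This is an illustrative pattern, not any specific framework's API; `fake_llm` is a hypothetical stand-in for a real model call:

```python
# Minimal chaining sketch: each step's output becomes the next
# step's input. `fake_llm` stands in for a real LLM call.

def fake_llm(instruction: str, text: str) -> str:
    # Pretend the model applied the instruction to the text.
    return f"[{instruction}] {text}"

def run_chain(document: str, steps: list[str]) -> str:
    result = document
    for step in steps:
        result = fake_llm(step, result)  # output feeds the next step
    return result

review = run_chain(
    "Loan agreement text...",
    ["extract key terms", "classify document type",
     "summarize relevant sections", "assess risk"],
)
```

The framework's value is that sequencing, intermediate state, and error handling live in one place instead of being re-coded for every workflow.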
Memory
LLMs are stateless by default -- each API call starts from scratch. Orchestration frameworks add memory so that conversations and workflows can reference earlier context. For banking, this means a compliance chatbot can remember that you asked about BSA/AML policy five minutes ago and provide follow-up answers in that context.
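A common way frameworks implement this is simple: prior turns are replayed into each new prompt, so a stateless model appears to remember. A minimal sketch, with `fake_llm` as a hypothetical stand-in for a real model call:

```python
# Conversation memory sketch: stored history is prepended to each
# new question before calling the (stateless) model.

def fake_llm(prompt: str) -> str:
    # Stand-in for a model call; echoes the latest line it saw.
    return f"Answer to: {prompt.splitlines()[-1]}"

class ConversationMemory:
    def __init__(self):
        self.turns: list[str] = []

    def ask(self, question: str) -> str:
        # Replay the stored history so the model has full context.
        prompt = "\n".join(self.turns + [question])
        answer = fake_llm(prompt)
        self.turns += [question, answer]  # persist this exchange
        return answer

chat = ConversationMemory()
chat.ask("What does our BSA/AML policy require for wire transfers?")
chat.ask("Does that apply to international wires too?")
```

Real frameworks add summarization or windowing on top of this, because replaying an entire history eventually exceeds the model's context limit.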
Tool Integration
Tool integration allows the LLM to call external systems -- databases, APIs, calculation engines, document repositories -- as part of its reasoning process. Instead of relying solely on its training data, the model can look up real-time portfolio data, run risk calculations, or query your document management system.
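At its core, tool integration is a registry of named functions plus a dispatcher: the model emits a structured tool call, and the framework executes it. A hedged sketch with hypothetical tool names and figures:

```python
# Tool-integration sketch: the framework exposes named tools and
# dispatches the model's requested call. All names and numbers
# here are hypothetical stand-ins.

def lookup_portfolio(sector: str) -> str:
    # Stand-in for a real-time portfolio database query.
    return f"{sector} exposure: $412.5M"

def run_risk_calc(sector: str) -> str:
    # Stand-in for a risk calculation engine.
    return f"{sector} stress loss: $38.0M"

TOOLS = {"portfolio": lookup_portfolio, "risk": run_risk_calc}

def dispatch(tool_name: str, arg: str) -> str:
    # In a real framework the model emits a structured tool call
    # (name + arguments); here we dispatch the name directly.
    return TOOLS[tool_name](arg)
```

The key design point is that the model never touches your systems directly -- it only requests a named tool, and the framework controls what actually executes.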
Agent Reasoning
The most advanced capability is agent reasoning, where the framework gives the LLM autonomy to decide which tools to call and in what order. Rather than following a rigid predefined chain, an agent can dynamically determine the best path to answer a question -- similar to how an experienced analyst decides which data sources to consult based on the specific question asked.
How Orchestration Fits the AI Stack
Orchestration frameworks sit in the middle of the AI technology stack:
- Above: Your application layer (chatbots, dashboards, internal tools)
- Below: Foundation models (GPT-4, Claude, Gemini) and data infrastructure (vector databases, document stores)
The framework is the glue. It takes user intent from the application layer, coordinates calls to models and data sources, manages state and memory, and returns structured results back to the application. Without this layer, your application code becomes deeply entangled with model-specific API details, making it fragile and difficult to maintain.
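One way to see the decoupling this layer buys you: if the application depends only on an orchestrator interface, swapping foundation models never touches application code. A minimal sketch with hypothetical model classes:

```python
# Coordination-layer sketch: application code talks to the
# Orchestrator, never to a model-specific API. Model classes
# here are hypothetical stand-ins.

from typing import Protocol

class Model(Protocol):
    def complete(self, prompt: str) -> str: ...

class FakeModelA:
    def complete(self, prompt: str) -> str:
        return f"A: {prompt}"

class FakeModelB:
    def complete(self, prompt: str) -> str:
        return f"B: {prompt}"

class Orchestrator:
    def __init__(self, model: Model):
        self.model = model  # model-specific details stay behind here

    def handle(self, user_intent: str) -> str:
        # Coordinate: call the model, then shape the result for
        # the application layer (uppercasing stands in for real
        # post-processing).
        return self.model.complete(user_intent).upper()
```

Swapping `FakeModelA` for `FakeModelB` changes nothing above the orchestrator -- which is the fragility the text warns about when application code is entangled with model-specific details.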
What This Means for Banking
Banking workflows are inherently multi-step, involve multiple data sources, and require audit trails. These characteristics make orchestration frameworks particularly valuable:
- Multi-step compliance checks require querying policies, comparing against current state, and generating reports
- Credit decisioning support involves pulling financial data, running models, and synthesizing recommendations
- Customer service automation needs access to account data, product information, and policy documentation simultaneously
- Regulatory reporting demands data aggregation, validation, transformation, and formatting across multiple systems
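The regulatory reporting bullet above maps naturally onto a chain: aggregate, validate, transform, format. A sketch with hypothetical source data and field names:

```python
# Regulatory-reporting pipeline sketch: each stage's output is the
# next stage's input. Field names and figures are hypothetical.

def aggregate(sources: list[dict]) -> dict:
    # Combine figures pulled from multiple systems.
    return {"total_cre": sum(s["cre"] for s in sources)}

def validate(data: dict) -> dict:
    assert data["total_cre"] >= 0, "exposure cannot be negative"
    return data

def transform(data: dict) -> dict:
    # Convert raw dollars to the report's unit ($ millions).
    return {"total_cre_musd": round(data["total_cre"] / 1e6, 1)}

def format_report(data: dict) -> str:
    return f"Total CRE exposure: ${data['total_cre_musd']}M"

def reporting_pipeline(sources: list[dict]) -> str:
    return format_report(transform(validate(aggregate(sources))))
```

An orchestration framework adds what this sketch lacks: per-stage error handling, retries, and an audit trail of what ran when.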
The frameworks covered in the next five units each take a different approach to solving these challenges. Understanding their trade-offs will help you make informed build-versus-buy decisions for your institution.
Quick Recap
- Orchestration frameworks are middleware that connects LLMs to tools, data, and multi-step workflows
- Raw API calls cannot scale to enterprise complexity -- you need chaining, memory, tool integration, and agent reasoning
- These frameworks sit between your applications and your foundation models, providing the coordination layer
- Banking workflows are naturally multi-step and multi-source, making orchestration frameworks essential infrastructure
KNOWLEDGE CHECK
What is the primary problem that orchestration frameworks solve for enterprise AI deployments?
Which orchestration framework capability allows an LLM to dynamically decide which tools to call and in what order?
Why are orchestration frameworks particularly valuable for banking compared to other industries?