Understand AI Orchestration
AI Orchestration is a core capability of the platform that empowers users to build, configure, deploy, and manage AI Agents efficiently.
As one of the three foundational pillars of the platform, alongside infrastructure and automation/integration, AI Orchestration streamlines the integration of artificial intelligence into complex enterprise environments. Its primary goal is to enable organizations to incorporate AI-driven decision-making and automate the transformation of unstructured data within their workflows.
By providing a unified environment for managing AI agents, AI Orchestration accelerates the adoption of AI technologies and ensures they can be used securely across diverse business scenarios.
Main features
The main features of AI Orchestration include:
Agent configuration and flexibility
The configuration system for AI Agents is highly flexible and adapts to different use cases:
- You can create and configure agents to suit different use cases and tune them for faster responses.
- You can create agents from existing deployed agents or build them from scratch.
- Configuration options cover system prompts, user messages, large language model (LLM) selection, and tool integration.
- When you select a cloud provider, enter the required credentials and parameters, such as model ID and API keys.
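The configuration options above can be sketched as a simple structure. This is a hypothetical sketch only: the field names, provider values, and schema are illustrative assumptions, not the platform's actual configuration format.

```python
# Hypothetical agent configuration sketch -- field names and schema are
# illustrative assumptions, not the platform's actual format.
agent_config = {
    "name": "support-assistant",
    "system_prompt": "You are a helpful support assistant for ACME Corp.",
    "llm": {
        "provider": "openai",   # cloud provider selected at configuration time
        "model_id": "gpt-4o",   # model ID required by the provider
        "api_key": "sk-...",    # credential entered for the provider
    },
    "tools": ["ticket_lookup", "order_status"],  # tool integrations
}

# A minimal sanity check before deployment: required fields are present.
required = {"name", "system_prompt", "llm"}
missing = required - agent_config.keys()
assert not missing, f"Missing configuration fields: {missing}"
```

The same shape applies whether the agent is built from scratch or cloned from an existing deployed agent, with only the fields that differ being overridden.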
Optimized deployment
- You can deploy agents in advance to reduce runtime delays, which is crucial for scenarios like chatbots where users expect fast responses.
- Publicly deployed agents can have configurable replica counts for scalability, improving availability and responsiveness.
Custom tools and integration
AI Orchestration lets clients extend agent capabilities:
- Tool development: Clients can create custom tools written in Python for integration with their own proprietary systems and APIs.
- Usage definition: Tools are defined as a class with explicitly registered functions that agents can access. Provide detailed descriptions of each function and its parameters so the agent invokes them correctly and avoids errors.
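A custom tool following the pattern above might look like the sketch below. The class name, methods, and data are hypothetical, and the exact registration mechanism depends on the platform; the point is the descriptive docstrings that guide the agent's invocations.

```python
class OrderTools:
    """Custom tool class exposing functions an agent can invoke.

    Hypothetical example: the class-based pattern and registration details
    are illustrative; consult the platform docs for the exact interface.
    """

    def get_order_status(self, order_id: str) -> str:
        """Return the current status of an order.

        Args:
            order_id: Unique identifier of the order, e.g. "ORD-1042".
        """
        # A real tool would call the client's proprietary API here.
        orders = {"ORD-1042": "shipped", "ORD-1043": "processing"}
        return orders.get(order_id, "unknown")

    def cancel_order(self, order_id: str) -> bool:
        """Cancel an order if it has not shipped yet.

        Args:
            order_id: Unique identifier of the order to cancel.
        """
        return self.get_order_status(order_id) == "processing"
```

The docstrings double as the descriptions the agent reads when deciding which function to call and with which arguments, which is why they should be precise about parameter formats.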
Knowledge bases and memory
Orchestration manages how agents access information and maintain conversational context.
- Knowledge base connections: Agents connect to different knowledge bases, including vector databases (such as Qdrant) and document sources (CSV, DOCX (Word documents), PDF, and web pages). Clients typically provide their own vector databases, either in the cloud or on-premises, and agents connect to them directly.
- Ad hoc knowledge: You can use file-based temporary databases (such as ChromaDB or LanceDB) for ad hoc knowledge during execution, useful for scanning and analyzing specific files during a run.
- Conversation continuity (memory): The agent's memory manages conversation context and enables multi-turn dialogues, which is fundamental for user experience. Without attached memory, the system treats each user message as a new session. Clients can share memory between different agent instances or runs using persistent storage, such as SQLite databases.
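The SQLite-backed memory sharing described above can be sketched in a few lines. The table layout and function names here are illustrative assumptions, not the platform's actual memory schema; the key idea is that any agent instance reading the same database file sees the same session history.

```python
import sqlite3

# Minimal sketch of persistent conversation memory backed by SQLite.
# Table layout and helper names are illustrative, not the platform's schema.
conn = sqlite3.connect(":memory:")  # use a file path to share across runs
conn.execute(
    "CREATE TABLE IF NOT EXISTS memory (session_id TEXT, role TEXT, content TEXT)"
)

def remember(session_id, role, content):
    """Append one message to the session's history."""
    conn.execute("INSERT INTO memory VALUES (?, ?, ?)", (session_id, role, content))
    conn.commit()

def recall(session_id):
    """Return the full history for a session, oldest first."""
    rows = conn.execute(
        "SELECT role, content FROM memory WHERE session_id = ?", (session_id,)
    )
    return list(rows)

remember("sess-1", "user", "What is my order status?")
remember("sess-1", "assistant", "Order ORD-1042 has shipped.")
# A second agent instance opening the same database file would recall
# this same context, so the conversation continues across runs.
```

Because history is keyed by session ID, messages sent without a session (no attached memory) simply start from an empty history, matching the behavior described above.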
Local hosting of LLMs
The platform supports hosting open-source LLMs locally on GPU-enabled clusters.
- Control and compliance: This feature gives clients control over their AI models and cost management. This matters most for large enterprises in regulated industries that can't send financial or confidential data to public AI services.
- Standard access: Admins configure local models with base API URLs and optional API keys, ensuring compatibility with the OpenAI API specification for seamless integration.
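Because local models follow the OpenAI API specification, a client can target them by swapping the base URL. The sketch below builds such a request without sending it; the base URL, key, and model name are placeholder assumptions an admin would supply.

```python
import json

# Placeholders an admin would supply for a locally hosted model.
BASE_URL = "http://llm.internal.example:8000/v1"  # assumed local deployment
API_KEY = "optional-local-key"                    # may be empty for local models

def build_chat_request(model, messages):
    """Build the URL, headers, and body for an OpenAI-compatible
    /chat/completions call against the configured base URL."""
    headers = {"Content-Type": "application/json"}
    if API_KEY:  # local deployments may not require authentication
        headers["Authorization"] = f"Bearer {API_KEY}"
    body = json.dumps({"model": model, "messages": messages})
    return f"{BASE_URL}/chat/completions", headers, body

url, headers, body = build_chat_request(
    "llama-3-8b-instruct",
    [{"role": "user", "content": "Summarize this contract clause."}],
)
```

Since the request shape is identical for cloud and local models, agents can switch between them by configuration alone, which is what keeps confidential data inside the cluster.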
Transparency and auditing
Deployed agents provide transparency features.
- Audit sessions: Audit sessions record conversation details, tool usage, token counts, and response times, regardless of whether the interaction was via API or workflow. This lets you trace exactly what happened with the agent.
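An audit session record of the kind described above might carry fields like the following. This is a hypothetical shape for illustration; the platform's actual log fields and names may differ.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative shape of an audit session record; field names are assumptions.
@dataclass
class AuditRecord:
    session_id: str
    channel: str                      # "api" or "workflow"
    tools_used: list = field(default_factory=list)
    prompt_tokens: int = 0
    completion_tokens: int = 0
    response_ms: float = 0.0
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AuditRecord(
    session_id="sess-1",
    channel="api",
    tools_used=["get_order_status"],
    prompt_tokens=412,
    completion_tokens=58,
    response_ms=930.0,
)
```

Storing the channel alongside tool usage and token counts is what lets you trace an interaction end to end regardless of whether it arrived via API or workflow.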
- API documentation: Each deployed agent exposes automatically generated documentation for its REST (Representational State Transfer) API, so different applications can call it.
In summary, AI Orchestration acts as the command center for the platform's artificial intelligence. It provides all the tools and environment necessary for clients to build, adapt, and manage their own AI agents securely and efficiently, whether using cloud or locally hosted models.
How to Create an AI Agent
A step-by-step guide to building and deploying AI agents in GlobalAI.
How to Manage Knowledge Bases
Learn how to add and configure knowledge bases for your AI agents in GlobalAI.
Understand Session Management and Memory
Learn how AI Agents retain conversation context, share memory across workflows, and audit session logs in GlobalAI.