Understand API Reference

In GlobalAI AI Orchestration, the API reference serves as a comprehensive guide to the endpoints, methods, and data structures available for interacting with the platform. This section provides detailed information on how to use the API for your AI orchestration needs.

API reference overview

Discover how to leverage the unified API to seamlessly integrate agents, models, and orchestration workflows. Access detailed endpoint documentation, usage examples, and best practices to optimize your AI solutions.
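
In addition to the language-specific SDKs, the platform can be called directly over HTTP. The following is a minimal sketch using the requests library; the /chat/completions path, bearer-token header, and placeholder values are assumptions based on the OpenAI-compatible examples later in this section:

import requests

# Placeholder values; replace with your proxy base URL and API key.
base_url = "<your_proxy_base_url>"
api_key = "your_api_key"

# Send a chat completion request to the OpenAI-compatible endpoint.
response = requests.post(
    f"{base_url}/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user", "content": "this is a test request, write a short poem"}
        ],
    },
)

print(response.json())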

API reference

The API reference has three sections, each with code snippets and examples to help you get started: OpenAI Python SDK, LlamaIndex, and LangChain Py.

OpenAI Python SDK

The OpenAI Python SDK section provides detailed documentation on how to use the OpenAI Python library to interact with the AI Orchestration platform. It covers the available classes, methods, and parameters, along with code examples to help you build different features. Here's an example of how to create a chat completion using the OpenAI Python SDK with a custom base URL:

import openai

# Point the OpenAI client at the AI Orchestration proxy instead of api.openai.com.
client = openai.OpenAI(
    api_key="your_api_key",
    base_url="<your_proxy_base_url>"
)

# Send a simple chat completion request through the proxy.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ]
)

print(response)
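
The same client can also stream tokens as they are generated. The following is a minimal sketch using the same placeholder credentials; it assumes the proxy passes through the standard OpenAI streaming behavior:

import openai

client = openai.OpenAI(
    api_key="your_api_key",
    base_url="<your_proxy_base_url>"
)

# Request a streamed response and print chunks as they arrive.
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
    stream=True,
)

for chunk in stream:
    # Some chunks (for example, the final one) may carry no content delta.
    print(chunk.choices[0].delta.content or "", end="")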

LlamaIndex

The LlamaIndex section offers guidance on how to use the LlamaIndex library to build and manage AI orchestration workflows. It includes examples of how to create indices, query data, and integrate with different AI models. Below is an example of how to set up a LlamaIndex query:

import os, dotenv

from llama_index.llms import AzureOpenAI
from llama_index.embeddings import AzureOpenAIEmbedding
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext

# Configure the LLM to call the AI Orchestration proxy through the Azure OpenAI interface.
llm = AzureOpenAI(
    engine="azure-gpt-3.5",
    temperature=0.0,
    azure_endpoint="<your_proxy_base_url>",
    api_key="sk-1234",
    api_version="2023-07-01-preview",
)

# The embedding model is routed through the same proxy endpoint.
embed_model = AzureOpenAIEmbedding(
    deployment_name="azure-embedding-model",
    azure_endpoint="<your_proxy_base_url>",
    api_key="sk-1234",
    api_version="2023-07-01-preview",
)

# Load local documents, build a vector index, and query it.
documents = SimpleDirectoryReader("llama_index_data").load_data()
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)

LangChain Py

The LangChain Py section details how to use the LangChain library for building AI orchestration applications, including how to set up chains and agents and integrate them with different AI models. Below is a minimal sketch of a LangChain agent that routes its model calls through the proxy; the tool selection, model name, and placeholder credentials are illustrative:
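
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

# Point LangChain's ChatOpenAI client at the AI Orchestration proxy.
llm = ChatOpenAI(
    openai_api_base="<your_proxy_base_url>",
    openai_api_key="your_api_key",
    model="gpt-3.5-turbo",
    temperature=0.0,
)

# Give the agent a simple built-in tool and let it reason over user input.
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

response = agent.run("What is 12 raised to the power of 3?")
print(response)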

Agent examples

This example demonstrates how to set up a basic LangChain agent that processes user input and generates responses using the specified AI model. By using the API reference, developers can efficiently build and manage AI orchestration workflows and integrate them seamlessly with the GlobalAI AI Orchestration platform.