A Python client for interacting with UiPath's LLM services. This package provides both a low-level HTTP client and framework-specific integrations (LangChain, LlamaIndex) for accessing LLMs through UiPath's infrastructure.
This repository is organized as a monorepo with the following packages:
- `uipath_llm_client` (root): Core HTTP client with authentication, retry logic, and request handling
- `uipath_langchain_client` (`packages/`): LangChain-compatible chat models and embeddings
- `uipath_llamaindex_client` (`packages/`): LlamaIndex-compatible integrations
The client supports two UiPath backends:
| Backend | Description | Default |
|---|---|---|
| AgentHub | UiPath's AgentHub infrastructure with automatic CLI-based authentication | Yes |
| LLMGateway | UiPath's LLM Gateway with S2S authentication | No |
The following model providers are available:

| Provider | Chat Models | Embeddings | Vendor Type |
|---|---|---|---|
| OpenAI/Azure | GPT-4o, GPT-4, etc. | text-embedding-3-large/small | openai |
| Google | Gemini 2.5, Gemini 2.0, etc. | text-embedding-004 | vertexai |
| Anthropic | Claude Sonnet 4.5, etc. | - | awsbedrock, vertexai |
| AWS Bedrock | Claude, Titan, etc. | Titan Embeddings, etc. | awsbedrock |
| Fireworks AI | Various open-source models | Various | openai |
| Azure AI | Various Azure AI models | Various | azure |
```shell
# Base installation (core client only)
pip install uipath-llm-client

# With optional provider extras for passthrough mode
pip install "uipath-llm-client[openai]"     # OpenAI/Azure OpenAI models
pip install "uipath-llm-client[google]"     # Google Gemini models
pip install "uipath-llm-client[anthropic]"  # Anthropic Claude models
pip install "uipath-llm-client[all]"        # All of the above
```

For LangChain support, use the separate package: `pip install uipath-langchain-client`.
- Add the custom index to your `pyproject.toml`:

  ```toml
  [[tool.uv.index]]
  name = "uipath"
  url = "https://uipath.pkgs.visualstudio.com/_packaging/ml-packages/pypi/simple/"
  publish-url = "https://uipath.pkgs.visualstudio.com/_packaging/ml-packages/pypi/upload/"
  ```

- Install the packages:

  ```shell
  # Core client
  uv add uipath-llm-client

  # LangChain integration with all providers
  uv add "uipath-langchain-client[all]"
  ```

The Platform backend uses the UiPath CLI for authentication. Both `"agenthub"` (default) and `"orchestrator"` share the same settings; the EndpointManager selects the correct URL paths automatically.
```shell
# Authenticate via CLI (populates .uipath/.auth.json)
uv run uipath auth login

# Or set environment variables directly
export UIPATH_URL="https://cloud.uipath.com/org/tenant"
export UIPATH_ORGANIZATION_ID="your-org-id"
export UIPATH_TENANT_ID="your-tenant-id"
export UIPATH_ACCESS_TOKEN="your-access-token"

# Optional: select backend (default: "agenthub")
export UIPATH_LLM_SERVICE="agenthub"  # or "orchestrator"
```

To use the LLMGateway backend, set the following environment variables:
```shell
# Select the backend
export UIPATH_LLM_SERVICE="llmgateway"

# Required configuration
export LLMGW_URL="https://your-llmgw-url.com"
export LLMGW_SEMANTIC_ORG_ID="your-org-id"
export LLMGW_SEMANTIC_TENANT_ID="your-tenant-id"
export LLMGW_REQUESTING_PRODUCT="your-product-name"
export LLMGW_REQUESTING_FEATURE="your-feature-name"

# Authentication (choose one)
export LLMGW_ACCESS_TOKEN="your-access-token"
# OR for S2S authentication:
export LLMGW_CLIENT_ID="your-client-id"
export LLMGW_CLIENT_SECRET="your-client-secret"

# Optional tracking
export LLMGW_SEMANTIC_USER_ID="your-user-id"
```

Configuration settings for UiPath Platform client requests. `PlatformSettings` is a unified settings class that serves both AgentHub and Orchestrator backends; the EndpointManager transparently selects the correct endpoints based on service availability.
You choose between them via the `UIPATH_LLM_SERVICE` environment variable (or the `backend` parameter in `get_default_client_settings()`):

| Value | Description |
|---|---|
| `"agenthub"` (default) | Routes requests through AgentHub endpoints |
| `"orchestrator"` | Routes requests through Orchestrator endpoints |

Both values create a `PlatformSettings` instance; the difference is in how EndpointManager resolves the URL paths.
```python
from uipath.llm_client.settings import get_default_client_settings, PlatformSettings

# Option 1: Factory (reads UIPATH_LLM_SERVICE, defaults to "agenthub")
settings = get_default_client_settings()

# Option 2: Explicit backend
settings = get_default_client_settings(backend="agenthub")
settings = get_default_client_settings(backend="orchestrator")

# Option 3: Direct instantiation
settings = PlatformSettings()
```

AgentHub is the default backend. It routes LLM requests through UiPath's AgentHub service, which provides model discovery, routing, and tracing capabilities.
```shell
# Select the backend (default, can be omitted)
export UIPATH_LLM_SERVICE="agenthub"

# Core settings (populated automatically by `uipath auth login`)
export UIPATH_URL="https://cloud.uipath.com/org/tenant"
export UIPATH_ORGANIZATION_ID="your-org-id"
export UIPATH_TENANT_ID="your-tenant-id"
export UIPATH_ACCESS_TOKEN="your-access-token"

# Optional: AgentHub configuration for discovery (default: "agentsruntime")
export UIPATH_AGENTHUB_CONFIG="agentsruntime"

# Optional: tracing
export UIPATH_PROCESS_KEY="your-process-key"
export UIPATH_JOB_KEY="your-job-key"
```

Orchestrator uses the same `PlatformSettings` and authentication as AgentHub, but routes requests through Orchestrator endpoints instead.
```shell
# Select the backend
export UIPATH_LLM_SERVICE="orchestrator"

# Core settings (same as AgentHub)
export UIPATH_URL="https://cloud.uipath.com/org/tenant"
export UIPATH_ORGANIZATION_ID="your-org-id"
export UIPATH_TENANT_ID="your-tenant-id"
export UIPATH_ACCESS_TOKEN="your-access-token"

# Optional: tracing
export UIPATH_PROCESS_KEY="your-process-key"
export UIPATH_JOB_KEY="your-job-key"
```

| Attribute | Environment Variable | Type | Default | Description |
|---|---|---|---|---|
| `access_token` | `UIPATH_ACCESS_TOKEN` | `SecretStr \| None` | `None` | Access token for authentication (populated by `uipath auth login`) |
| `base_url` | `UIPATH_URL` | `str \| None` | `None` | Base URL of the UiPath Platform API |
| `tenant_id` | `UIPATH_TENANT_ID` | `str \| None` | `None` | Tenant ID for request routing |
| `organization_id` | `UIPATH_ORGANIZATION_ID` | `str \| None` | `None` | Organization ID for request routing |
| `agenthub_config` | `UIPATH_AGENTHUB_CONFIG` | `str \| None` | `"agentsruntime"` | AgentHub configuration for discovery |
| `process_key` | `UIPATH_PROCESS_KEY` | `str \| None` | `None` | Process key for tracing |
| `job_key` | `UIPATH_JOB_KEY` | `str \| None` | `None` | Job key for tracing |
Authentication behavior:
- All four core fields (`access_token`, `base_url`, `tenant_id`, `organization_id`) are required
- Run `uipath auth login` to populate them automatically via the UiPath CLI
- The access token is validated against the local `.uipath/.auth.json` file
- Token refresh is handled automatically using the refresh token from the auth file
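As a quick pre-flight check (an illustrative snippet, not part of the package), you can verify that the four core variables are set before constructing a client:

```python
import os

# The four core PlatformSettings fields map to these environment
# variables (populated by `uipath auth login` or exported manually).
REQUIRED_VARS = (
    "UIPATH_URL",
    "UIPATH_ORGANIZATION_ID",
    "UIPATH_TENANT_ID",
    "UIPATH_ACCESS_TOKEN",
)

def missing_platform_vars() -> list[str]:
    """Return the names of any unset core Platform variables."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

if missing_platform_vars():
    print(f"Missing Platform settings: {', '.join(missing_platform_vars())}")
```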
Configuration settings for LLM Gateway client requests. These settings control routing, authentication, and tracking for requests to LLM Gateway.
```python
from uipath.llm_client.settings import LLMGatewaySettings

settings = LLMGatewaySettings(
    base_url="https://your-llmgw-url.com",
    org_id="your-org-id",
    tenant_id="your-tenant-id",
    requesting_product="your-product",
    requesting_feature="your-feature",
    client_id="your-client-id",          # For S2S auth
    client_secret="your-client-secret",  # For S2S auth
)
```

| Attribute | Environment Variable | Type | Required | Description |
|---|---|---|---|---|
| `base_url` | `LLMGW_URL` | `str` | Yes | Base URL of the LLM Gateway |
| `org_id` | `LLMGW_SEMANTIC_ORG_ID` | `str` | Yes | Organization ID for request routing |
| `tenant_id` | `LLMGW_SEMANTIC_TENANT_ID` | `str` | Yes | Tenant ID for request routing |
| `requesting_product` | `LLMGW_REQUESTING_PRODUCT` | `str` | Yes | Product name making the request (for tracking) |
| `requesting_feature` | `LLMGW_REQUESTING_FEATURE` | `str` | Yes | Feature name making the request (for tracking) |
| `access_token` | `LLMGW_ACCESS_TOKEN` | `SecretStr \| None` | Conditional | Access token for authentication |
| `client_id` | `LLMGW_CLIENT_ID` | `SecretStr \| None` | Conditional | Client ID for S2S authentication |
| `client_secret` | `LLMGW_CLIENT_SECRET` | `SecretStr \| None` | Conditional | Client secret for S2S authentication |
| `user_id` | `LLMGW_SEMANTIC_USER_ID` | `str \| None` | No | User ID for tracking and billing |
| `action_id` | `LLMGW_ACTION_ID` | `str \| None` | No | Action ID for tracking |
| `operation_code` | `LLMGW_OPERATION_CODE` | `str \| None` | No | Operation code to identify BYO models |
| `additional_headers` | `LLMGW_ADDITIONAL_HEADERS` | `Mapping[str, str]` | No | Additional custom headers to include in requests |
Authentication behavior:
- Either `access_token` OR both `client_id` and `client_secret` must be provided
- S2S authentication uses `client_id`/`client_secret` to obtain tokens automatically
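The rule above can be sketched as a small helper (illustrative only; the real settings class performs this validation internally, and the precedence shown here is an assumption):

```python
import os

def resolve_llmgw_auth() -> dict:
    """Pick an auth mode from the LLMGW_* environment variables:
    an access token is checked first in this sketch, otherwise a
    client id/secret pair selects S2S; anything else is an error."""
    token = os.environ.get("LLMGW_ACCESS_TOKEN")
    client_id = os.environ.get("LLMGW_CLIENT_ID")
    client_secret = os.environ.get("LLMGW_CLIENT_SECRET")
    if token:
        return {"mode": "token"}
    if client_id and client_secret:
        return {"mode": "s2s"}
    raise ValueError(
        "Set LLMGW_ACCESS_TOKEN, or both LLMGW_CLIENT_ID and LLMGW_CLIENT_SECRET"
    )
```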
Settings are loaded automatically from environment variables, so the simplest way to get started requires no explicit configuration:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI

# No settings needed - uses defaults from environment (AgentHub backend)
chat = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
response = chat.invoke("What is the capital of France?")
print(response.content)
```

Each provider also has a dedicated client class you can import directly:

```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from uipath_langchain_client.clients.google.chat_models import UiPathChatGoogleGenerativeAI
from uipath_langchain_client.clients.anthropic.chat_models import UiPathChatAnthropic
from uipath_langchain_client.clients.openai.embeddings import UiPathAzureOpenAIEmbeddings

# OpenAI/Azure models
openai_chat = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
response = openai_chat.invoke("Hello!")
print(response.content)

# Google Gemini models
gemini_chat = UiPathChatGoogleGenerativeAI(model="gemini-2.5-flash")
response = gemini_chat.invoke("Hello!")
print(response.content)

# Anthropic Claude models (via AWS Bedrock)
claude_chat = UiPathChatAnthropic(
    model="anthropic.claude-sonnet-4-5-20250929-v1:0",
    vendor_type="awsbedrock",
)
response = claude_chat.invoke("Hello!")
print(response.content)

# Embeddings
embeddings = UiPathAzureOpenAIEmbeddings(model="text-embedding-3-large")
vectors = embeddings.embed_documents(["Hello world", "How are you?"])
print(f"Generated {len(vectors)} embeddings of dimension {len(vectors[0])}")
```

Factory functions automatically detect the model vendor but require settings to be passed:
```python
from uipath_langchain_client import get_chat_model, get_embedding_model
from uipath.llm_client.settings import get_default_client_settings

settings = get_default_client_settings()

# Create a chat model - vendor is auto-detected from model name
chat_model = get_chat_model(model_name="gpt-4o-2024-11-20", client_settings=settings)
response = chat_model.invoke("What is the capital of France?")
print(response.content)

# Create an embeddings model
embeddings_model = get_embedding_model(model_name="text-embedding-3-large", client_settings=settings)
vectors = embeddings_model.embed_documents(["Hello world", "How are you?"])
```

The normalized API provides a consistent interface across all LLM providers:
```python
from uipath_langchain_client import get_chat_model
from uipath.llm_client.settings import get_default_client_settings

settings = get_default_client_settings()

# Use normalized API for provider-agnostic calls
chat_model = get_chat_model(
    model_name="gpt-4o-2024-11-20",
    client_settings=settings,
    client_type="normalized",
)

# Works the same way regardless of the underlying provider
response = chat_model.invoke("Explain quantum computing in simple terms.")
print(response.content)
```

All chat models support streaming for real-time output:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI

chat_model = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
for chunk in chat_model.stream("Write a short poem about coding."):
    print(chunk.content, end="", flush=True)
print()
```

For async/await support:
```python
import asyncio
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI

async def main():
    chat_model = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")

    # Async invoke
    response = await chat_model.ainvoke("What is 2 + 2?")
    print(response.content)

    # Async streaming
    async for chunk in chat_model.astream("Tell me a joke."):
        print(chunk.content, end="", flush=True)
    print()

asyncio.run(main())
```

Use tools with LangChain's standard interface:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny and 72°F."

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression."""
    # NOTE: eval is for demonstration only; never use it on untrusted input
    return str(eval(expression))

chat_model = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")

# Bind tools to the model
model_with_tools = chat_model.bind_tools([get_weather, calculate])

# The model can now use tools
response = model_with_tools.invoke("What's the weather in Paris?")
print(response.tool_calls)
```

Integrate with LangChain's agent framework:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Search results for: {query}"

chat_model = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
agent = create_react_agent(chat_model, [search])

# Run the agent
result = agent.invoke({"messages": [("user", "Search for Python tutorials")]})
print(result["messages"][-1].content)
```

The core `uipath_llm_client` package provides thin wrappers around native vendor SDKs. These are drop-in replacements that route requests through UiPath's infrastructure while preserving the original SDK's interface:
```python
from uipath.llm_client.clients.openai import UiPathOpenAI, UiPathAzureOpenAI

# Drop-in replacement for openai.OpenAI; routes through UiPath
client = UiPathOpenAI(model_name="gpt-4o-2024-11-20")
response = client.chat.completions.create(
    model="gpt-4o-2024-11-20",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

# Azure OpenAI variant
azure_client = UiPathAzureOpenAI(model_name="gpt-4o-2024-11-20")
```

```python
from uipath.llm_client.clients.anthropic import UiPathAnthropic

# Drop-in replacement for anthropic.Anthropic
client = UiPathAnthropic(model_name="anthropic.claude-sonnet-4-5-20250929-v1:0")
response = client.messages.create(
    model="anthropic.claude-sonnet-4-5-20250929-v1:0",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.content[0].text)
```

```python
from uipath.llm_client.clients.google import UiPathGoogle

# Drop-in replacement for google.genai.Client
client = UiPathGoogle(model_name="gemini-2.5-flash")
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Hello!",
)
print(response.text)
```

All native SDK wrappers are available in sync and async variants:
| Class | SDK | Description |
|---|---|---|
| `UiPathOpenAI` / `UiPathAsyncOpenAI` | `openai.OpenAI` | OpenAI models (BYO) |
| `UiPathAzureOpenAI` / `UiPathAsyncAzureOpenAI` | `openai.AzureOpenAI` | Azure OpenAI models |
| `UiPathAnthropic` / `UiPathAsyncAnthropic` | `anthropic.Anthropic` | Anthropic models |
| `UiPathAnthropicBedrock` / `UiPathAsyncAnthropicBedrock` | `anthropic.AnthropicBedrock` | Anthropic via AWS Bedrock |
| `UiPathAnthropicVertex` / `UiPathAsyncAnthropicVertex` | `anthropic.AnthropicVertex` | Anthropic via Vertex AI |
| `UiPathAnthropicFoundry` / `UiPathAsyncAnthropicFoundry` | `anthropic.AnthropicFoundry` | Anthropic via Azure Foundry |
| `UiPathGoogle` | `google.genai.Client` | Google Gemini models |
For completely custom HTTP requests, use the low-level HTTPX client directly:
```python
from uipath.llm_client import UiPathHttpxClient
from uipath.llm_client.settings import UiPathAPIConfig, get_default_client_settings

settings = get_default_client_settings()

# Create a low-level HTTP client with UiPath auth and routing
client = UiPathHttpxClient(
    base_url=settings.build_base_url(model_name="gpt-4o-2024-11-20"),
    auth=settings.build_auth_pipeline(),
    headers=settings.build_auth_headers(model_name="gpt-4o-2024-11-20"),
    model_name="gpt-4o-2024-11-20",
    api_config=UiPathAPIConfig(
        api_type="completions",
        client_type="passthrough",
        vendor_type="openai",
        api_flavor="chat-completions",
    ),
    max_retries=2,
)

# Make a raw HTTP request
response = client.post(
    "/chat/completions",
    json={
        "model": "gpt-4o-2024-11-20",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 100,
    },
)
response.raise_for_status()
print(response.json())
```

Pass custom settings when you need more control:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from uipath.llm_client.settings import PlatformSettings
from uipath.llm_client.utils.retry import RetryConfig

# Custom settings for Platform (AgentHub/Orchestrator)
settings = PlatformSettings()

# With retry configuration
retry_config: RetryConfig = {
    "initial_delay": 2.0,
    "max_delay": 60.0,
    "exp_base": 2.0,
    "jitter": 1.0,
}

chat_model = UiPathAzureChatOpenAI(
    model="gpt-4o-2024-11-20",
    client_settings=settings,
    max_retries=3,
    retry_config=retry_config,
)
```

```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from uipath.llm_client.settings import get_default_client_settings

# Explicitly specify the backend
agenthub_settings = get_default_client_settings(backend="agenthub")
llmgw_settings = get_default_client_settings(backend="llmgateway")

chat = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20", client_settings=llmgw_settings)

# Or use environment variable (no code changes needed)
# export UIPATH_LLM_SERVICE="llmgateway"
```

You can instantiate `LLMGatewaySettings` directly for full control over configuration:
With Direct Client Classes:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from uipath_langchain_client.clients.google.chat_models import UiPathChatGoogleGenerativeAI
from uipath_langchain_client.clients.openai.embeddings import UiPathAzureOpenAIEmbeddings
from uipath.llm_client.settings import LLMGatewaySettings

# Create LLMGatewaySettings with explicit configuration
settings = LLMGatewaySettings(
    base_url="https://your-llmgw-url.com",
    org_id="your-org-id",
    tenant_id="your-tenant-id",
    requesting_product="my-product",
    requesting_feature="my-feature",
    client_id="your-client-id",
    client_secret="your-client-secret",
    user_id="optional-user-id",  # Optional: for tracking
)

# Use with OpenAI/Azure chat model
openai_chat = UiPathAzureChatOpenAI(
    model="gpt-4o-2024-11-20",
    settings=settings,
)
response = openai_chat.invoke("Hello!")
print(response.content)

# Use with Google Gemini
gemini_chat = UiPathChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    settings=settings,
)
response = gemini_chat.invoke("Hello!")
print(response.content)

# Use with embeddings
embeddings = UiPathAzureOpenAIEmbeddings(
    model="text-embedding-3-large",
    settings=settings,
)
vectors = embeddings.embed_documents(["Hello world"])
```

With Factory Methods:
```python
from uipath_langchain_client import get_chat_model, get_embedding_model
from uipath.llm_client.settings import LLMGatewaySettings

# Create LLMGatewaySettings
settings = LLMGatewaySettings(
    base_url="https://your-llmgw-url.com",
    org_id="your-org-id",
    tenant_id="your-tenant-id",
    requesting_product="my-product",
    requesting_feature="my-feature",
    client_id="your-client-id",
    client_secret="your-client-secret",
)

# Factory auto-detects vendor from model name
chat_model = get_chat_model(
    model_name="gpt-4o-2024-11-20",
    client_settings=settings,
)
response = chat_model.invoke("What is the capital of France?")
print(response.content)

# Use normalized API for provider-agnostic interface
normalized_chat = get_chat_model(
    model_name="gemini-2.5-flash",
    client_settings=settings,
    client_type="normalized",
)
response = normalized_chat.invoke("Explain quantum computing.")
print(response.content)

# Embeddings with factory
embeddings = get_embedding_model(
    model_name="text-embedding-3-large",
    client_settings=settings,
)
vectors = embeddings.embed_documents(["Hello", "World"])
```

If you have enrolled your own model deployment in UiPath's LLMGateway, you can use it by providing your BYO connection ID. This routes requests through LLMGateway to your custom-enrolled models.
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI

# Use your BYO connection ID from LLMGateway enrollment
chat = UiPathAzureChatOpenAI(
    model="your-custom-model-name",
    byo_connection_id="your-byo-connection-id",  # UUID from LLMGateway enrollment
)
response = chat.invoke("Hello from my custom model!")
print(response.content)
```

This works with any client class:
```python
from uipath_langchain_client.clients.google.chat_models import UiPathChatGoogleGenerativeAI
from uipath_langchain_client.clients.openai.embeddings import UiPathAzureOpenAIEmbeddings

# BYO chat model
byo_chat = UiPathChatGoogleGenerativeAI(
    model="my-custom-gemini",
    byo_connection_id="f1d29b49-0c7b-4c01-8bc4-fc1b7d918a87",
)

# BYO embeddings model
byo_embeddings = UiPathAzureOpenAIEmbeddings(
    model="my-custom-embeddings",
    byo_connection_id="a2e38c51-1d8a-5e02-9cd5-ge2c8e029b98",
)
```

The client provides a hierarchy of typed exceptions for handling API errors. All exceptions extend `UiPathAPIError` (which extends `httpx.HTTPStatusError`):
```python
from uipath.llm_client import (
    UiPathAPIError,
    UiPathAuthenticationError,
    UiPathRateLimitError,
    UiPathNotFoundError,
)
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI

chat = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")

try:
    response = chat.invoke("Hello!")
except UiPathRateLimitError as e:
    print(f"Rate limited. Retry after: {e.retry_after} seconds")
except UiPathAuthenticationError:
    print("Authentication failed — check your credentials")
except UiPathAPIError as e:
    print(f"API error {e.status_code}: {e.message}")
```

| Exception | HTTP Status | Description |
|---|---|---|
| `UiPathAPIError` | Any | Base exception for all UiPath API errors |
| `UiPathBadRequestError` | 400 | Invalid request parameters |
| `UiPathAuthenticationError` | 401 | Invalid or expired credentials |
| `UiPathPermissionDeniedError` | 403 | Insufficient permissions |
| `UiPathNotFoundError` | 404 | Model or resource not found |
| `UiPathConflictError` | 409 | Request conflicts with current state |
| `UiPathRequestTooLargeError` | 413 | Request payload too large |
| `UiPathUnprocessableEntityError` | 422 | Request is well-formed but semantically invalid |
| `UiPathRateLimitError` | 429 | Rate limit exceeded (has `retry_after` property) |
| `UiPathInternalServerError` | 500 | Server-side error |
| `UiPathServiceUnavailableError` | 503 | Service temporarily unavailable |
| `UiPathGatewayTimeoutError` | 504 | Gateway timeout |
| `UiPathTooManyRequestsError` | 529 | Anthropic overload (too many requests) |
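One common pattern is backing off on 429s using the `retry_after` hint. The sketch below is self-contained (it defines a stand-in exception rather than importing the package); with the real client you would catch `UiPathRateLimitError` instead:

```python
import time

class RateLimited(Exception):
    """Stand-in for UiPathRateLimitError in this self-contained sketch."""
    def __init__(self, retry_after: float = 1.0):
        super().__init__("rate limited")
        self.retry_after = retry_after

def invoke_with_backoff(invoke, prompt: str, attempts: int = 3):
    """Call invoke(prompt), sleeping retry_after seconds between
    rate-limited attempts and re-raising after the final attempt."""
    for attempt in range(attempts):
        try:
            return invoke(prompt)
        except RateLimited as exc:
            if attempt == attempts - 1:
                raise
            time.sleep(exc.retry_after)
```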
The `UiPathAPIConfig` class controls how requests are routed through UiPath's infrastructure:
```python
from uipath.llm_client.settings import UiPathAPIConfig

config = UiPathAPIConfig(
    api_type="completions",
    client_type="passthrough",
    vendor_type="openai",
    api_flavor="chat-completions",
    api_version="2025-03-01-preview",
)
```

| Field | Type | Default | Description |
|---|---|---|---|
| `api_type` | `"completions" \| "embeddings" \| None` | `None` | Type of API call |
| `client_type` | `"passthrough" \| "normalized" \| None` | `None` | `"passthrough"` uses vendor-native APIs; `"normalized"` uses UiPath's unified API |
| `vendor_type` | `str \| None` | `None` | LLM vendor identifier: `"openai"`, `"vertexai"`, `"awsbedrock"`, `"anthropic"`, `"azure"` |
| `api_flavor` | `str \| None` | `None` | Vendor-specific API flavor (e.g., `"chat-completions"`, `"responses"`, `"generate-content"`, `"converse"`, `"invoke"`, `"anthropic-claude"`) |
| `api_version` | `str \| None` | `None` | Vendor-specific API version (e.g., `"2025-03-01-preview"`, `"v1beta1"`) |
| `freeze_base_url` | `bool` | `False` | Prevents httpx from modifying the base URL (required for some vendor SDKs) |
The client supports custom SSL/TLS configuration through environment variables:
| Environment Variable | Description |
|---|---|
| `UIPATH_DISABLE_SSL_VERIFY` | Set to `"1"`, `"true"`, `"yes"`, or `"on"` to disable SSL verification (not recommended for production) |
| `SSL_CERT_FILE` | Path to a custom SSL certificate file |
| `REQUESTS_CA_BUNDLE` | Path to a custom CA bundle file |
| `SSL_CERT_DIR` | Path to a directory containing SSL certificate files |
By default, the client uses `truststore` (if available) or falls back to `certifi` for SSL certificate verification.
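For example, to point the client at a corporate CA bundle (the certificate path below is illustrative):

```shell
# Illustrative: trust a custom CA bundle for all client requests
export SSL_CERT_FILE="/etc/ssl/certs/corp-ca.pem"

# Or, for local debugging only (never in production), disable verification
export UIPATH_DISABLE_SSL_VERIFY="true"
```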
Enable request/response logging by passing a logger instance:
```python
import logging
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("uipath_llm")

chat = UiPathAzureChatOpenAI(
    model="gpt-4o-2024-11-20",
    logger=logger,  # Enables request/response logging with timing
)
response = chat.invoke("Hello!")
```

The logger will record:
- Request start time and URL
- Response duration (in milliseconds)
- Error responses with status codes and body content
All requests automatically include the following default headers:
| Header | Value | Description |
|---|---|---|
| `X-UiPath-LLMGateway-TimeoutSeconds` | `295` | Server-side timeout for LLM Gateway |
| `X-UiPath-LLMGateway-AllowFull4xxResponse` | `true` | Returns full error response bodies for 4xx errors |
Both AgentHub and LLMGateway authentication pipelines automatically handle token expiry:
- When a request receives a 401 Unauthorized response, the auth pipeline refreshes the token and retries the request
- Token refresh is handled transparently; no user intervention is required
- Auth instances use the singleton pattern to reuse tokens across multiple client instances
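Conceptually, the refresh-on-401 step behaves like the following simplified sketch (not the package's actual code):

```python
def send_with_refresh(send_request, refresh_token):
    """Send once; on a 401 response, refresh the token and
    retry the request exactly once."""
    response = send_request()
    if response["status"] == 401:
        refresh_token()
        response = send_request()
    return response
```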
```shell
# Clone and install with dev dependencies
git clone https://github.com/UiPath/uipath-llm-client.git
cd uipath-llm-client
uv sync

# Run tests
uv run pytest

# Format and lint
uv run ruff format .
uv run ruff check .
uv run pyright
```

Tests use VCR.py to record and replay HTTP interactions. Cassettes (recorded responses) are stored in `tests/cassettes/` using Git LFS.
Important: Tests must pass locally before submitting a PR. The CI pipeline does not make any real API requests; it only runs tests using the pre-recorded cassettes.
Prerequisites:
- Install Git LFS: `brew install git-lfs` (macOS) or `apt install git-lfs` (Ubuntu)
- Initialize Git LFS: `git lfs install`
- Pull cassettes: `git lfs pull`
Running tests locally:
```shell
# Run all tests using cassettes (no API credentials required)
uv run pytest

# Run specific test files
uv run pytest tests/langchain/
uv run pytest tests/core/
```

Updating cassettes:
When adding new tests or modifying existing ones that require new API interactions:
- Set up your environment with valid credentials (see Configuration)
- Run the tests; VCR will record new interactions automatically
- Commit the updated cassettes along with your code changes
Note: The CI pipeline validates that all tests pass using the committed cassettes. If your tests require new API calls, you must record and commit the corresponding cassettes for the pipeline to pass.
```
uipath-llm-client/
├── src/uipath/llm_client/               # Core package
│   ├── httpx_client.py                  # UiPathHttpxClient / UiPathHttpxAsyncClient
│   ├── clients/                         # Native SDK wrappers
│   │   ├── openai/                      # UiPathOpenAI, UiPathAzureOpenAI, etc.
│   │   ├── anthropic/                   # UiPathAnthropic, UiPathAnthropicBedrock, etc.
│   │   └── google/                      # UiPathGoogle
│   ├── settings/                        # Backend-specific settings & auth
│   │   ├── base.py                      # UiPathBaseSettings, UiPathAPIConfig
│   │   ├── platform/                    # PlatformSettings, PlatformAuth
│   │   └── llmgateway/                  # LLMGatewaySettings, LLMGatewayS2SAuth
│   └── utils/                           # Exceptions, retry, logging, SSL
│       ├── exceptions.py                # UiPathAPIError hierarchy (12 classes)
│       ├── retry.py                     # RetryConfig, RetryableHTTPTransport
│       ├── logging.py                   # LoggingConfig
│       └── ssl_config.py                # SSL/TLS configuration
├── packages/
│   ├── uipath_langchain_client/         # LangChain integration
│   │   └── src/uipath_langchain_client/
│   │       ├── base_client.py           # UiPathBaseLLMClient mixin
│   │       ├── factory.py               # get_chat_model(), get_embedding_model()
│   │       └── clients/
│   │           ├── normalized/          # UiPathChat, UiPathEmbeddings
│   │           ├── openai/              # UiPathAzureChatOpenAI, UiPathChatOpenAI, etc.
│   │           ├── google/              # UiPathChatGoogleGenerativeAI, etc.
│   │           ├── anthropic/           # UiPathChatAnthropic
│   │           ├── vertexai/            # UiPathChatAnthropicVertex
│   │           ├── bedrock/             # UiPathChatBedrock, UiPathChatBedrockConverse
│   │           ├── fireworks/           # UiPathChatFireworks, UiPathFireworksEmbeddings
│   │           └── azure/               # UiPathAzureAIChatCompletionsModel
│   └── uipath_llamaindex_client/        # LlamaIndex integration (planned)
└── tests/                               # Test suite with VCR cassettes
```
This project is licensed under the MIT License. See the LICENSE file for details.
For any questions or issues, please contact the maintainers via the UiPath GitHub repository.