
# Adapters API Reference

LLM provider adapters for automatic observability.

## AdapterConfig

### AdapterConfig dataclass

```python
AdapterConfig(
    log_requests: bool = True,
    log_tool_calls: bool = True,
    log_streams: bool = True,
    log_stream_chunks: bool = False,
    chunk_log_interval: int = 10,
    on_error: Optional[
        Callable[[Exception, dict[str, Any]], None]
    ] = None,
    on_stream_start: Optional[
        Callable[[str, str, str], None]
    ] = None,
    on_token: Optional[Callable[[str, str], None]] = None,
    on_stream_end: Optional[
        Callable[[str, str, int], None]
    ] = None,
    on_stream_error: Optional[
        Callable[[Exception, str], None]
    ] = None,
    metadata: dict[str, Any] = dict(),
)
```

Configuration for provider adapters.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `log_requests` | `bool` | Whether to log LLM request events. Default `True`. |
| `log_tool_calls` | `bool` | Whether to log tool call events. Default `True`. |
| `log_streams` | `bool` | Whether to log streaming events. Default `True`. |
| `log_stream_chunks` | `bool` | Whether to log individual stream chunks. Default `False` to reduce noise. |
| `chunk_log_interval` | `int` | If `log_stream_chunks` is `True`, log every Nth chunk. Default `10`. |
| `on_error` | `Optional[Callable[[Exception, dict[str, Any]], None]]` | Optional callback for adapter errors. Signature: `(error: Exception, context: dict) -> None` |
| `on_stream_start` | `Optional[Callable[[str, str, str], None]]` | Optional callback invoked when a stream begins. Signature: `(stream_id: str, model: str, provider: str) -> None` |
| `on_token` | `Optional[Callable[[str, str], None]]` | Optional callback invoked for each content token during streaming. Signature: `(token: str, stream_id: str) -> None` |
| `on_stream_end` | `Optional[Callable[[str, str, int], None]]` | Optional callback invoked when a stream completes. Signature: `(stream_id: str, content: str, chunks: int) -> None` |
| `on_stream_error` | `Optional[Callable[[Exception, str], None]]` | Optional callback invoked when a stream fails. Signature: `(error: Exception, stream_id: str) -> None` |
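The streaming hooks compose naturally. Below is a minimal sketch that accumulates tokens per stream using the callback signatures from the table above; the `AdapterConfig` construction is shown commented out because its import path depends on your installation:

```python
from typing import Any

# Per-stream token accumulator keyed by stream_id.
streams: dict[str, list[str]] = {}

def on_stream_start(stream_id: str, model: str, provider: str) -> None:
    # Called once when a stream begins; set up an accumulator for it.
    streams[stream_id] = []

def on_token(token: str, stream_id: str) -> None:
    # Called for every content token during streaming.
    streams[stream_id].append(token)

def on_stream_end(stream_id: str, content: str, chunks: int) -> None:
    # Called when the stream completes with the full content and chunk count.
    assert content == "".join(streams[stream_id])

def on_error(error: Exception, context: dict[str, Any]) -> None:
    # Called on adapter errors with request context.
    print(f"adapter error: {error!r} ({context})")

# config = AdapterConfig(
#     on_stream_start=on_stream_start,
#     on_token=on_token,
#     on_stream_end=on_stream_end,
#     on_error=on_error,
# )
```

Pass the resulting config to any adapter's `config` parameter; the adapter invokes these callbacks during streaming.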

## BaseAdapter

### BaseAdapter

```python
BaseAdapter(
    openintent_client: OpenIntentClient,
    intent_id: str,
    config: Optional[AdapterConfig] = None,
)
```

Base class for provider adapters.

Subclasses implement provider-specific wrapping logic while this base class provides common utilities and configuration handling.

Initialize the adapter.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `openintent_client` | `OpenIntentClient` | The OpenIntent client for logging events. | *required* |
| `intent_id` | `str` | The intent ID to associate events with. | *required* |
| `config` | `Optional[AdapterConfig]` | Optional adapter configuration. Uses defaults if not provided. | `None` |

#### client property

`client: OpenIntentClient`

The OpenIntent client.

#### config property

`config: AdapterConfig`

The adapter configuration.

#### intent_id property

`intent_id: str`

The intent ID for event logging.

## OpenAI Adapter

### OpenAIAdapter

```python
OpenAIAdapter(
    openai_client: Any,
    openintent_client: OpenIntentClient,
    intent_id: str,
    config: Optional[AdapterConfig] = None,
)
```

Bases: BaseAdapter

Adapter for the OpenAI Python client.

Wraps an OpenAI client instance to automatically log OpenIntent events for all chat completions, tool calls, and streaming responses.

The adapter exposes the same interface as the OpenAI client, so you can use it as a drop-in replacement:

```python
adapter = OpenAIAdapter(openai_client, openintent, intent_id)
response = adapter.chat.completions.create(...)
```

Events logged:

- LLM_REQUEST_STARTED: When a completion request begins
- LLM_REQUEST_COMPLETED: When a completion finishes successfully
- LLM_REQUEST_FAILED: When a completion fails
- TOOL_CALL_STARTED: When the model calls a tool
- TOOL_CALL_COMPLETED: When tool execution completes (if tracked)
- STREAM_STARTED: When a streaming response begins
- STREAM_CHUNK: Periodically during streaming (if configured)
- STREAM_COMPLETED: When streaming finishes
- STREAM_CANCELLED: If streaming is interrupted

Initialize the OpenAI adapter.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `openai_client` | `Any` | The OpenAI client instance to wrap. | *required* |
| `openintent_client` | `OpenIntentClient` | The OpenIntent client for logging events. | *required* |
| `intent_id` | `str` | The intent ID to associate events with. | *required* |
| `config` | `Optional[AdapterConfig]` | Optional adapter configuration. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ImportError` | If the `openai` package is not installed. |
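As a sketch of the drop-in pattern, the helper below wraps a raw OpenAI client and issues one traced completion. The model name is illustrative, the import path for `OpenAIAdapter` is left as a comment because it depends on your installation, and nothing executes until the function is called with live clients:

```python
# from <your_sdk_package> import OpenAIAdapter  # import path depends on your installation

def traced_completion(openai_client, openintent_client, intent_id: str):
    # Wrap the raw client; the adapter mirrors the OpenAI interface,
    # so existing call sites only need the client object swapped.
    adapter = OpenAIAdapter(openai_client, openintent_client, intent_id)
    # Emits LLM_REQUEST_STARTED and then COMPLETED (or FAILED) around
    # this call; passing stream=True would emit STREAM_* events instead.
    return adapter.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Hello"}],
    )
```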

## Anthropic Adapter

### AnthropicAdapter

```python
AnthropicAdapter(
    anthropic_client: Any,
    openintent_client: OpenIntentClient,
    intent_id: str,
    config: Optional[AdapterConfig] = None,
)
```

Bases: BaseAdapter

Adapter for the Anthropic Python client.

Wraps an Anthropic client instance to automatically log OpenIntent events for all message creations, tool use, and streaming responses.

The adapter exposes the same interface as the Anthropic client, so you can use it as a drop-in replacement:

```python
adapter = AnthropicAdapter(anthropic_client, openintent, intent_id)
message = adapter.messages.create(...)
```

Events logged:

- LLM_REQUEST_STARTED: When a message request begins
- LLM_REQUEST_COMPLETED: When a message finishes successfully
- LLM_REQUEST_FAILED: When a message request fails
- TOOL_CALL_STARTED: When the model uses a tool
- STREAM_STARTED: When a streaming response begins
- STREAM_CHUNK: Periodically during streaming (if configured)
- STREAM_COMPLETED: When streaming finishes
- STREAM_CANCELLED: If streaming is interrupted

Initialize the Anthropic adapter.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `anthropic_client` | `Any` | The Anthropic client instance to wrap. | *required* |
| `openintent_client` | `OpenIntentClient` | The OpenIntent client for logging events. | *required* |
| `intent_id` | `str` | The intent ID to associate events with. | *required* |
| `config` | `Optional[AdapterConfig]` | Optional adapter configuration. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ImportError` | If the `anthropic` package is not installed. |
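A sketch combining this adapter with a custom `AdapterConfig`, e.g. to log every 5th stream chunk instead of none (the default). The model name is illustrative, and the import paths for `AnthropicAdapter` and `AdapterConfig` are left as a comment because they depend on your installation:

```python
# from <your_sdk_package> import AnthropicAdapter, AdapterConfig  # path depends on your installation

def traced_message(anthropic_client, openintent_client, intent_id: str):
    # Emit a STREAM_CHUNK event for every 5th chunk during streaming.
    cfg = AdapterConfig(log_stream_chunks=True, chunk_log_interval=5)
    adapter = AnthropicAdapter(
        anthropic_client, openintent_client, intent_id, config=cfg
    )
    # Emits LLM_REQUEST_* events; tool use in the response additionally
    # emits TOOL_CALL_STARTED.
    return adapter.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=256,
        messages=[{"role": "user", "content": "Hello"}],
    )
```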

## Azure OpenAI Adapter

### AzureOpenAIAdapter

```python
AzureOpenAIAdapter(
    azure_client: Any,
    openintent_client: OpenIntentClient,
    intent_id: str,
    config: Optional[AdapterConfig] = None,
)
```

Bases: BaseAdapter

Adapter for the Azure OpenAI Python client.

Wraps an AzureOpenAI client instance to automatically log OpenIntent events for all chat completions, tool calls, and streaming responses.

The adapter exposes the same interface as the OpenAI client, so you can use it as a drop-in replacement:

```python
adapter = AzureOpenAIAdapter(azure_client, openintent, intent_id)
response = adapter.chat.completions.create(...)
```

Events logged:

- LLM_REQUEST_STARTED: When a completion request begins
- LLM_REQUEST_COMPLETED: When a completion finishes successfully
- LLM_REQUEST_FAILED: When a completion fails
- TOOL_CALL_STARTED: When the model calls a tool
- STREAM_STARTED: When a streaming response begins
- STREAM_CHUNK: Periodically during streaming (if configured)
- STREAM_COMPLETED: When streaming finishes
- STREAM_CANCELLED: If streaming is interrupted

Streaming hooks (on_stream_start, on_token, on_stream_end, on_stream_error) are fully supported via AdapterConfig.

Initialize the Azure OpenAI adapter.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `azure_client` | `Any` | The AzureOpenAI client instance to wrap. | *required* |
| `openintent_client` | `OpenIntentClient` | The OpenIntent client for logging events. | *required* |
| `intent_id` | `str` | The intent ID to associate events with. | *required* |
| `config` | `Optional[AdapterConfig]` | Optional adapter configuration. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ImportError` | If the `openai` package is not installed. |

## OpenRouter Adapter

### OpenRouterAdapter

```python
OpenRouterAdapter(
    openrouter_client: Any,
    openintent_client: OpenIntentClient,
    intent_id: str,
    config: Optional[AdapterConfig] = None,
)
```

Bases: BaseAdapter

Adapter for the OpenRouter API (OpenAI-compatible client).

Wraps an OpenAI client configured for OpenRouter's API to automatically log OpenIntent events for all chat completions, tool calls, and streaming.

OpenRouter provides access to 200+ models from multiple providers through a single unified API. The adapter tracks which underlying model is used.

The adapter exposes the same interface as the OpenAI client:

```python
adapter = OpenRouterAdapter(openrouter_client, openintent, intent_id)
response = adapter.chat.completions.create(...)
```

Events logged:

- LLM_REQUEST_STARTED: When a completion request begins
- LLM_REQUEST_COMPLETED: When a completion finishes successfully
- LLM_REQUEST_FAILED: When a completion fails
- TOOL_CALL_STARTED: When the model calls a tool
- STREAM_STARTED: When a streaming response begins
- STREAM_CHUNK: Periodically during streaming (if configured)
- STREAM_COMPLETED: When streaming finishes
- STREAM_CANCELLED: If streaming is interrupted

Streaming hooks (on_stream_start, on_token, on_stream_end, on_stream_error) are fully supported via AdapterConfig.

Initialize the OpenRouter adapter.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `openrouter_client` | `Any` | The OpenAI client configured for OpenRouter's API. | *required* |
| `openintent_client` | `OpenIntentClient` | The OpenIntent client for logging events. | *required* |
| `intent_id` | `str` | The intent ID to associate events with. | *required* |
| `config` | `Optional[AdapterConfig]` | Optional adapter configuration. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ImportError` | If the `openai` package is not installed. |
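Since OpenRouter speaks the OpenAI wire protocol, the client to wrap is just an `OpenAI` client pointed at OpenRouter's base URL. A minimal sketch, assuming the `openai` package is installed and `OpenRouterAdapter` is importable from this SDK (its import path is left as a comment):

```python
# from <your_sdk_package> import OpenRouterAdapter  # path depends on your installation

def make_openrouter_adapter(api_key: str, openintent_client, intent_id: str):
    # OpenRouter is OpenAI-compatible: reuse the OpenAI client with a
    # different base URL and an OpenRouter API key.
    from openai import OpenAI  # requires the `openai` package
    client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key=api_key)
    return OpenRouterAdapter(client, openintent_client, intent_id)
```

The same base-URL pattern applies to the Grok and DeepSeek adapters below, with the respective provider endpoints and keys.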

## Gemini Adapter

### GeminiAdapter

```python
GeminiAdapter(
    gemini_model: Any,
    openintent_client: OpenIntentClient,
    intent_id: str,
    config: Optional[AdapterConfig] = None,
)
```

Bases: BaseAdapter

Adapter for the Google Generative AI (Gemini) Python client.

Wraps a GenerativeModel instance to automatically log OpenIntent events for all content generation, tool calls, and streaming responses.

The adapter exposes the same interface as the GenerativeModel, so you can use it as a drop-in replacement:

```python
adapter = GeminiAdapter(model, openintent, intent_id)
response = adapter.generate_content("Hello")
```

Events logged:

- LLM_REQUEST_STARTED: When a generation request begins
- LLM_REQUEST_COMPLETED: When generation finishes successfully
- LLM_REQUEST_FAILED: When generation fails
- TOOL_CALL_STARTED: When the model calls a function
- STREAM_STARTED: When a streaming response begins
- STREAM_CHUNK: Periodically during streaming (if configured)
- STREAM_COMPLETED: When streaming finishes
- STREAM_CANCELLED: If streaming is interrupted

Initialize the Gemini adapter.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `gemini_model` | `Any` | The GenerativeModel instance to wrap. | *required* |
| `openintent_client` | `OpenIntentClient` | The OpenIntent client for logging events. | *required* |
| `intent_id` | `str` | The intent ID to associate events with. | *required* |
| `config` | `Optional[AdapterConfig]` | Optional adapter configuration. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ImportError` | If the `google-generativeai` package is not installed. |

#### model property

`model: Any`

The wrapped GenerativeModel.

#### generate_content

```python
generate_content(
    contents: Any, *, stream: bool = False, **kwargs: Any
) -> Any
```

Generate content with automatic event logging.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `contents` | `Any` | The prompt or conversation contents. | *required* |
| `stream` | `bool` | Whether to stream the response. | `False` |
| `**kwargs` | `Any` | Additional arguments passed to `generate_content`. | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `Any` | `GenerateContentResponse`, or an iterator of chunks if streaming. |

#### start_chat

`start_chat(**kwargs: Any) -> GeminiChatSession`

Start a chat session with automatic event logging.

Returns a wrapped chat session that logs events for each message.
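A sketch of a traced chat turn. `send_message` follows the `google-generativeai` `ChatSession` interface, which the wrapped session is assumed to mirror; the import path for `GeminiAdapter` is left as a comment because it depends on your installation:

```python
# from <your_sdk_package> import GeminiAdapter  # path depends on your installation

def traced_chat_turn(gemini_model, openintent_client, intent_id: str):
    # Wrap the GenerativeModel, then chat as usual; each message sent
    # through the wrapped session logs its own LLM_REQUEST_* events.
    adapter = GeminiAdapter(gemini_model, openintent_client, intent_id)
    chat = adapter.start_chat()
    return chat.send_message("Hello")
```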

## Grok Adapter

### GrokAdapter

```python
GrokAdapter(
    grok_client: Any,
    openintent_client: OpenIntentClient,
    intent_id: str,
    config: Optional[AdapterConfig] = None,
)
```

Bases: BaseAdapter

Adapter for the xAI Grok API (OpenAI-compatible client).

Wraps an OpenAI client configured for xAI's API to automatically log OpenIntent events for all chat completions, tool calls, and streaming.

The adapter exposes the same interface as the OpenAI client:

```python
adapter = GrokAdapter(grok_client, openintent, intent_id)
response = adapter.chat.completions.create(...)
```

Events logged:

- LLM_REQUEST_STARTED: When a completion request begins
- LLM_REQUEST_COMPLETED: When a completion finishes successfully
- LLM_REQUEST_FAILED: When a completion fails
- TOOL_CALL_STARTED: When the model calls a tool
- STREAM_STARTED: When a streaming response begins
- STREAM_CHUNK: Periodically during streaming (if configured)
- STREAM_COMPLETED: When streaming finishes
- STREAM_CANCELLED: If streaming is interrupted

Initialize the Grok adapter.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `grok_client` | `Any` | The OpenAI client configured for xAI's API. | *required* |
| `openintent_client` | `OpenIntentClient` | The OpenIntent client for logging events. | *required* |
| `intent_id` | `str` | The intent ID to associate events with. | *required* |
| `config` | `Optional[AdapterConfig]` | Optional adapter configuration. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ImportError` | If the `openai` package is not installed. |

#### grok property

`grok: Any`

The wrapped Grok client.

## DeepSeek Adapter

### DeepSeekAdapter

```python
DeepSeekAdapter(
    deepseek_client: Any,
    openintent_client: OpenIntentClient,
    intent_id: str,
    config: Optional[AdapterConfig] = None,
)
```

Bases: BaseAdapter

Adapter for the DeepSeek API (OpenAI-compatible client).

Wraps an OpenAI client configured for DeepSeek's API to automatically log OpenIntent events for all chat completions, tool calls, and streaming.

The adapter exposes the same interface as the OpenAI client:

```python
adapter = DeepSeekAdapter(deepseek_client, openintent, intent_id)
response = adapter.chat.completions.create(...)
```

Events logged:

- LLM_REQUEST_STARTED: When a completion request begins
- LLM_REQUEST_COMPLETED: When a completion finishes successfully
- LLM_REQUEST_FAILED: When a completion fails
- TOOL_CALL_STARTED: When the model calls a tool
- STREAM_STARTED: When a streaming response begins
- STREAM_CHUNK: Periodically during streaming (if configured)
- STREAM_COMPLETED: When streaming finishes
- STREAM_CANCELLED: If streaming is interrupted

Initialize the DeepSeek adapter.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `deepseek_client` | `Any` | The OpenAI client configured for DeepSeek's API. | *required* |
| `openintent_client` | `OpenIntentClient` | The OpenIntent client for logging events. | *required* |
| `intent_id` | `str` | The intent ID to associate events with. | *required* |
| `config` | `Optional[AdapterConfig]` | Optional adapter configuration. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ImportError` | If the `openai` package is not installed. |

#### deepseek property

`deepseek: Any`

The wrapped DeepSeek client.