# InteractiveAI Router

The InteractiveAI Router provides a unified API for accessing language models across multiple providers. Instead of integrating each provider separately, you can route all your LLM calls through a single endpoint and capture traces automatically.

### Why Use the Router?

* **Single integration**: One setup works for OpenAI, Anthropic, Google, Mistral, DeepSeek, and 50+ other providers
* **Automatic fallbacks**: If a provider fails, requests route to alternatives automatically
* **Unified tracing**: All calls flow through InteractiveAI regardless of the underlying model
* **OpenAI-compatible**: Uses the standard OpenAI SDK format, so existing code works with minimal changes

### Prerequisites

* InteractiveAI account with API credentials
* LLM Router API key

{% hint style="info" %}
You can get your LLM Router API key on the InteractiveAI Platform under **Settings > API Keys > Router API Keys**.
{% endhint %}

***

### Installation

```bash
pip install interactiveai openai
```

***

### Configuration

Set your API credentials as environment variables:

```python
import os

# InteractiveAI credentials
os.environ["INTERACTIVEAI_PUBLIC_KEY"] = "pk-..."
os.environ["INTERACTIVEAI_SECRET_KEY"] = "sk-..."

# InteractiveAI LLM Router API key
os.environ["LLMROUTER_API_KEY"] = "sk-or-..."
```
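
A missing or empty credential typically surfaces later as an opaque authentication error, so it can help to verify the variables up front. Here is a minimal sketch; the `missing_credentials` helper is illustrative only, not part of any SDK:

```python
import os

REQUIRED_VARS = (
    "INTERACTIVEAI_PUBLIC_KEY",
    "INTERACTIVEAI_SECRET_KEY",
    "LLMROUTER_API_KEY",
)

def missing_credentials(env=None):
    """Return the names of required credentials that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: a partially configured environment
partial = {
    "INTERACTIVEAI_PUBLIC_KEY": "pk-...",
    "INTERACTIVEAI_SECRET_KEY": "sk-...",
}
print(missing_credentials(partial))  # ['LLMROUTER_API_KEY']
```

Calling this check once at startup fails fast with a clear message instead of a confusing error on the first API call.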

***

### Basic Usage

Point the OpenAI client to the Router endpoint. Every call is traced automatically:

<pre class="language-python"><code class="lang-python">from interactiveai import Interactive
from interactiveai.openai import OpenAI

interactiveai = Interactive(
    public_key=os.environ["INTERACTIVEAI_PUBLIC_KEY"],
    secret_key=os.environ["INTERACTIVEAI_SECRET_KEY"],
    host=os.environ.get("HOST", "https://dev.interactive.ai")
)

client = OpenAI(
    base_url="https://dev.interactive.ai/api/v1",
    api_key=os.environ.get("LLMROUTER_API_KEY"),
)

# Use any supported model by changing the model string
response = client.chat.completions.create(
<strong>    model="anthropic/claude-sonnet-4",
</strong>    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain containerization in one paragraph."}
    ],
)

print(response.choices[0].message.content)
</code></pre>

***

### Switching Providers

Change the `model` parameter to switch providers instantly. No other code changes required:

```python
# Anthropic
model = "anthropic/claude-sonnet-4"

# OpenAI
model = "openai/gpt-4o"

# Google
model = "google/gemini-2.5-flash"

# Mistral
model = "mistralai/mistral-large"

# DeepSeek
model = "deepseek/deepseek-v3.2-speciale"
```
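
Model identifiers follow a `provider/model-name` convention. If you need to inspect or log the provider programmatically, a small helper can split the identifier (`split_model_id` below is illustrative, not part of the Router API):

```python
def split_model_id(model: str) -> tuple[str, str]:
    """Split a router model string into (provider, model_name)."""
    provider, _, name = model.partition("/")
    return provider, name

print(split_model_id("anthropic/claude-sonnet-4"))  # ('anthropic', 'claude-sonnet-4')
print(split_model_id("openai/gpt-4o"))              # ('openai', 'gpt-4o')
```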

{% hint style="info" %}
For a comprehensive list of supported models, see the [Supported Models](https://app.gitbook.com/o/4wvvENzpjQET3VBQQN8K/s/USnAYIls8STzxCo7KIao/api-guides/supported-models) page in the InteractiveAI LLM Router documentation.
{% endhint %}

***

### Streaming

Enable streaming by setting `stream=True`:

```python
stream = client.chat.completions.create(
    model="nvidia/nemotron-nano-9b-v2",
    messages=[
        {"role": "user", "content": "Write a haiku about distributed systems."}
    ],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
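
If you also need the complete text once streaming finishes, accumulate the deltas as they arrive. A minimal sketch (`collect_stream` is a hypothetical helper, shown here against stubbed chunks so it runs standalone):

```python
from types import SimpleNamespace

def collect_stream(stream) -> str:
    """Join the text deltas from a chat-completions stream into one string."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks (e.g. the final one) carry no content
            parts.append(delta)
    return "".join(parts)

# Stubbed chunks mimicking the OpenAI streaming chunk shape
def _chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

fake_stream = [_chunk("Hello"), _chunk(", world"), _chunk(None)]
print(collect_stream(fake_stream))  # Hello, world
```

With a real stream, pass the object returned by `client.chat.completions.create(..., stream=True)` instead of the stubs.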

***

### Enriching Traces with Context

Attach custom identifiers and metadata to your traces:

<pre class="language-python"><code class="lang-python">from interactiveai import Interactive
from interactiveai.openai import OpenAI

interactiveai = Interactive(
    public_key=os.environ["INTERACTIVEAI_PUBLIC_KEY"],
    secret_key=os.environ["INTERACTIVEAI_SECRET_KEY"],
)

openai_client = OpenAI(
    base_url="https://app.interactive.ai/api/v1",
    api_key=os.environ.get("LLMROUTER_API_KEY"),
)

user_question = "What are the benefits of event-driven architecture?"

<strong>with interactiveai.start_as_current_span(name="architecture-question") as span:
</strong><strong>    interactiveai.update_current_trace(
</strong><strong>        user_id="dev_router",
</strong><strong>        session_id="session_router",
</strong><strong>        tags=["router", "architecture"],
</strong><strong>        metadata={"source": "documentation", "model_type": "chat"}
</strong><strong>    )
</strong>
    response = openai_client.chat.completions.create(
        model="openai/gpt-4o",
        messages=[
            {"role": "user", "content": user_question}
        ],
    )

    interactiveai.update_current_trace(
        input=user_question,
        output=response.choices[0].message.content
    )

interactiveai.flush()
</code></pre>

***

For advanced features like model fallbacks, limits, embeddings, a detailed API reference, and more, see the [InteractiveAI LLM Router Documentation](https://app.gitbook.com/o/4wvvENzpjQET3VBQQN8K/s/USnAYIls8STzxCo7KIao/).
