# Supported Models

The InteractiveAI Router provides access to over 200 language models across 50+ providers through a single, unified API endpoint. Rather than maintaining a static list that risks becoming outdated, the platform offers two always-current ways to explore the full model catalog: a programmatic API endpoint and an interactive UI page.

### Model Identifier Format

Every model in the Router follows a consistent naming convention:

```
{provider}/{model-name}
```

For example:

* `anthropic/claude-sonnet-4` — Anthropic's Claude Sonnet 4
* `openai/gpt-4o` — OpenAI's GPT-4o
* `google/gemini-2.5-flash` — Google's Gemini 2.5 Flash
* `deepseek/deepseek-v3.2-speciale` — DeepSeek V3.2 Speciale
* `mistralai/mistral-large` — Mistral's Large model

Use this identifier as the `model` parameter in your API requests. When the `model` parameter is omitted, the Router uses the default configured for your project.
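A request body using this identifier might be built as sketched below. Note the `messages` shape (OpenAI-style chat format) and the `build_chat_payload` helper are assumptions for illustration, not confirmed by this page:

```python
import json

def build_chat_payload(model_id: str, user_message: str) -> dict:
    """Build a minimal request body; the OpenAI-style `messages`
    shape is an assumption, not confirmed by this page."""
    return {
        # Model identifier in {provider}/{model-name} form;
        # omit this key to fall back to the project default.
        "model": model_id,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_payload("anthropic/claude-sonnet-4", "Hello!")
print(json.dumps(payload, indent=2))
```

Swapping in a different catalog identifier (e.g., `openai/gpt-4o`) is the only change needed to route the same request to another provider.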

### Browsing Models via the Platform UI

Navigate to **Orchestration → LLMs** in the sidebar to access the Router LLMs page:

```
https://app.interactive.ai/project/{your-project-id}/router-llms
```

The model catalog displays a searchable, sortable table where each row represents an available model. The table includes the following information for every model:

| Column       | Description                                                                      |
| ------------ | -------------------------------------------------------------------------------- |
| Model Name   | Human-readable name (e.g., "Anthropic: Claude Sonnet 4")                         |
| Model ID     | The identifier used in API calls (e.g., `interactive/anthropic/claude-sonnet-4`) |
| Provider     | The upstream provider (Anthropic, OpenAI, Google, DeepSeek, etc.)                |
| Best for     | Brief description of recommended use cases and model strengths                   |
| Capabilities | Supported modalities such as Text, Vision, Tools, Files, and Video               |
| Context      | Maximum context window size in tokens (e.g., 200k, 1M)                           |
| Cost         | Number of pricing rules configured. Click to see the full pricing breakdown.     |

Click any model row to open its detail view, which displays complete pricing per million tokens, the match pattern used for trace identification, and a filterable history of all generations that used that model.

{% hint style="info" %}
Use the search bar and column sorting to quickly filter models by provider, capability, or context window size. This is the fastest way to compare models for a specific use case.
{% endhint %}

### Retrieving Models via API

For programmatic access, query the models endpoint to retrieve the full, up-to-date catalog:

```
GET https://api.interactive.ai/api/v1/models
```

The response returns a JSON array where each model object includes:

```json
{
  "id": "interactive/anthropic/claude-sonnet-4",
  "object": "model",
  "created": 1770311934,
  "provider": "Anthropic",
  "marketing_name": "Anthropic: Claude Sonnet 4",
  "description": "Claude Sonnet 4 is Anthropic's...",
  "prices": {
    "input": "0.000003",
    "output": "0.000015",
    "output_reasoning": "0.000015"
  }
}
```

| Field            | Description                                                        |
| ---------------- | ------------------------------------------------------------------ |
| `id`             | The model identifier to use in the `model` parameter of API calls  |
| `provider`       | The upstream provider name                                         |
| `marketing_name` | Human-readable display name                                        |
| `description`    | Detailed description of the model's architecture and strengths     |
| `prices`         | Token pricing broken down by usage type (input, output, reasoning) |

This endpoint requires no authentication and returns the complete catalog in a single request.
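A minimal client can fetch the catalog with the standard library and index it by `id` for quick lookup. The sketch below demonstrates the parsing step against the sample object shown above rather than a live call; `fetch_models` and `index_by_id` are illustrative helper names, not part of the API:

```python
import json
from urllib.request import urlopen

MODELS_URL = "https://api.interactive.ai/api/v1/models"

def fetch_models() -> list[dict]:
    """Fetch the full catalog in one unauthenticated request."""
    with urlopen(MODELS_URL) as resp:
        return json.load(resp)

def index_by_id(models: list[dict]) -> dict[str, dict]:
    """Index catalog entries by their `id` field."""
    return {m["id"]: m for m in models}

# Demonstrated on the sample object from the response above:
sample = [{
    "id": "interactive/anthropic/claude-sonnet-4",
    "provider": "Anthropic",
    "prices": {"input": "0.000003", "output": "0.000015"},
}]
catalog = index_by_id(sample)
print(catalog["interactive/anthropic/claude-sonnet-4"]["provider"])
```

In production you would call `index_by_id(fetch_models())` and cache the result, refreshing periodically to pick up newly added models.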

### Supported Providers

The Router integrates with a wide range of providers, including but not limited to:

| Provider            | Example Models                   | Capabilities        |
| ------------------- | -------------------------------- | ------------------- |
| Anthropic           | Claude Sonnet 4, Claude Haiku    | Text, Vision        |
| OpenAI              | GPT-4o, GPT-4o Mini, o1          | Text, Vision, Tools |
| Google              | Gemini 2.5 Flash, Gemini 2.5 Pro | Text, Vision        |
| DeepSeek            | DeepSeek V3.2, DeepSeek R1       | Text                |
| Mistral             | Mistral Large, Mistral Small     | Text, Tools         |
| Meta (via partners) | Llama 3.1, Llama 3.3             | Text                |
| AI21                | Jamba Large 1.7, Jamba Mini 1.7  | Text                |
| Aion Labs           | Aion-1.0, Aion-1.0-Mini          | Text                |
| AllenAI             | OLMo 3, Molmo2                   | Text, Vision        |

New providers and models are added regularly.

### Choosing the Right Model

When selecting a model, consider these factors:

**Task complexity vs. cost:** Larger models like Claude Sonnet 4 or GPT-4o deliver stronger reasoning and instruction-following but cost more per token. For simpler tasks like classification or extraction, smaller models such as GPT-4o Mini or Mistral Small offer significant cost savings with adequate performance.
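One way to compare cost programmatically is to convert the per-token price strings from the `/models` endpoint into per-million-token figures. This sketch assumes, based on the magnitudes in the sample response, that `prices` values are USD per token:

```python
from decimal import Decimal

def price_per_million(prices: dict[str, str]) -> dict[str, Decimal]:
    """Convert per-token price strings into USD per 1M tokens.
    Assumes values are USD per token, e.g. "0.000003" -> $3/M."""
    return {kind: Decimal(p) * 1_000_000 for kind, p in prices.items()}

# Sample `prices` object from the catalog response shown earlier:
sonnet_prices = {"input": "0.000003", "output": "0.000015"}
print(price_per_million(sonnet_prices))
```

Using `Decimal` instead of `float` avoids binary rounding artifacts when multiplying the small per-token price strings.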

**Context window requirements:** If your application processes long documents or maintains extended conversation histories, prioritize models with larger context windows. Models range from 4K to over 1M tokens depending on the provider.

**Capability requirements:** Not all models support every modality. If your application requires vision (image understanding), tool use (function calling), or file processing, filter models by the Capabilities column in the Router LLMs page to ensure compatibility.

**Latency sensitivity:** Smaller models generally respond faster. For real-time applications where response speed matters, benchmark latency using the model's generation history available in each model's detail view on the platform.
