Router LLMs

The Router LLMs page displays the catalog of pre-configured models available through the InteractiveAI Router. These model definitions are managed by InteractiveAI and include pricing, capabilities, and context window information for hundreds of models from providers such as Anthropic, OpenAI, Google, Tongyi, and many others.

When you route requests through the InteractiveAI Router, cost tracking happens automatically using these definitions. You don't need to configure pricing or tokenization because the platform handles it for you. This page serves as a reference for exploring available models, comparing their capabilities, and understanding the pricing you'll incur when using each one.

Note: To learn more about the InteractiveAI Router, refer to the LLM Router Documentation.

Why Router LLMs Matter

The InteractiveAI Router consolidates model access through a unified endpoint, eliminating the need to maintain parallel integrations. The Router provides:

  • Centralized cost tracking: All usage flows through a single API, consolidating spend visibility across providers without reconciling multiple invoices or dashboards.

  • Native platform integration: The Router operates as part of the InteractiveAI infrastructure. Generate a Router API Key and requests automatically capture traces, costs, and observability data with no additional instrumentation required.

  • Provider-agnostic flexibility: Access over 200 open-source and proprietary models from providers including Anthropic, OpenAI, Google, Mistral, and DeepSeek, among many others. Swap models or configure fallbacks without changing application code.

  • Built-in resilience: Configure automatic failover to backup providers when primary models become unavailable, ensuring continuous operation during provider outages. For detailed configuration of fallback behavior and load balancing, see Load Balancing & Model Fallback.
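The failover idea in the last bullet can be sketched in client code. Note that the Router performs this server-side once fallbacks are configured; the snippet below only illustrates the logic, and the model IDs and `call_model` stub are hypothetical examples, not real InteractiveAI APIs.

```python
# Sketch of failover: try the primary model, then each backup in order.
# The InteractiveAI Router does this server-side; this client-side version
# exists purely to illustrate the behavior. Model IDs are hypothetical.

def complete_with_fallback(call_model, prompt, model_ids):
    """Try each model ID in order until one succeeds; raise if all fail."""
    last_error = None
    for model_id in model_ids:
        try:
            return model_id, call_model(model_id, prompt)
        except RuntimeError as err:  # e.g., provider outage or rate limit
            last_error = err
    raise RuntimeError(f"all models failed: {last_error}")


if __name__ == "__main__":
    def flaky(model_id, prompt):
        # Simulate the primary provider being down.
        if "claude" in model_id:
            raise RuntimeError("primary provider unavailable")
        return f"[{model_id}] echo: {prompt}"

    used, reply = complete_with_fallback(
        flaky,
        "Hello",
        ["interactive/anthropic/claude-3-opus", "interactive/openai/gpt-4o"],
    )
    print(used)  # the fallback model that actually answered
```

The key design point is that callers never hard-code a single provider: the ordered list of model IDs is configuration, so swapping providers requires no code changes.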


Browsing the Model Catalog

The main view displays all available models in a searchable, sortable table. Each row represents a model you can access through the Router.

| Column | Description |
| --- | --- |
| Model ID | The identifier used for routing requests (e.g., `interactive/anthropic/claude-3-opus`). Use this value in your API calls. |
| Cost | Number of pricing rules configured (e.g., "5 prices set"). Click to see the full pricing breakdown. |
| Model Name | Human-readable name (e.g., "Anthropic: Claude 3.5 Sonnet"). |
| Provider | The upstream provider (Anthropic, OpenAI, Google, DeepSeek, etc.). |
| Best for | Brief description of recommended use cases and model strengths. |
| Capabilities | Supported modalities such as Text, Vision, Files, etc. |
| Context | Maximum context window size in tokens (e.g., 200k, 1M). |
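The Model ID column is the value you pass in requests. As a minimal sketch, the payload below assumes an OpenAI-compatible chat schema, which is common for routers but is an assumption here; consult the LLM Router Documentation for the actual endpoint and field names.

```python
# Hypothetical sketch: the Model ID from the catalog goes in the request body.
# The payload shape assumes an OpenAI-compatible schema; the real field names
# may differ, so treat this as an illustration, not the Router's actual API.

def build_router_request(model_id: str, user_message: str) -> dict:
    """Assemble a minimal chat-completion payload for the Router."""
    return {
        "model": model_id,  # value from the "Model ID" column
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_router_request(
    "interactive/anthropic/claude-3-opus",  # example ID from the catalog
    "Summarize this document.",
)
print(payload["model"])  # interactive/anthropic/claude-3-opus
```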


Model Detail View

Click any model row to open its detail view. This page provides complete information about the model's configuration and usage history.

Model Configuration

The upper-left panel displays technical details for this model. The Match Pattern shows the regex used to identify this model in your traces. All Router LLM definitions are maintained by InteractiveAI, so pricing and configuration updates happen automatically as providers change their rates.
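To illustrate how a Match Pattern works, the snippet below runs an invented regex against a few observed model names. The actual patterns are maintained by InteractiveAI; this one is purely a hypothetical example of the matching mechanism.

```python
import re

# Hypothetical Match Pattern: matches "claude-3-opus" with an optional
# provider prefix and an optional 8-digit date suffix, case-insensitively.
# Real patterns are maintained by InteractiveAI and may look different.
match_pattern = r"(?i)^(anthropic/)?claude-3-opus(-\d{8})?$"

for observed in ["claude-3-opus", "anthropic/claude-3-opus-20240229", "gpt-4o"]:
    matched = re.match(match_pattern, observed) is not None
    print(f"{observed} -> {matched}")
```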

Pricing

The upper-right panel shows the cost structure for this model. Prices display per 1 million units for each usage type, typically input and output tokens. Some models include additional tiers for cached tokens or reasoning tokens depending on how the provider structures billing.
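The per-1-million-unit pricing translates into a per-generation cost by scaling each usage type's token count. The rates below are hypothetical placeholders, not real prices; the Router applies the actual pre-configured rates automatically, so you never run this arithmetic yourself.

```python
# Cost of a single generation from per-1M-token prices. Prices here are
# invented placeholders; the Router tracks real costs automatically using
# its maintained pricing definitions.

def generation_cost(usage: dict, prices_per_million: dict) -> float:
    """Sum cost across usage types (input, output, cached, reasoning, ...)."""
    return sum(
        tokens * prices_per_million.get(usage_type, 0.0) / 1_000_000
        for usage_type, tokens in usage.items()
    )

usage = {"input": 12_000, "output": 800}        # tokens consumed
prices = {"input": 3.00, "output": 15.00}       # hypothetical USD per 1M tokens
print(round(generation_cost(usage, prices), 6))  # 0.048
```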

Model Observations

The bottom section displays a filterable history of all generations that used this model. You can search by ID, name, or trace, filter by time range and environment, and customize visible columns. This helps you understand how a specific model is being used across your project and identify patterns in latency, cost, or token consumption.


Router LLMs vs Custom LLMs

InteractiveAI provides two ways of working with models:

| Feature | Router LLMs | Custom LLMs |
| --- | --- | --- |
| Purpose | Models accessed through the InteractiveAI Router | Models accessed through your own API keys |
| Pricing | Pre-configured and maintained by InteractiveAI | You define pricing manually |
| Management | Automatic updates as providers change rates | You maintain definitions yourself |
| Location | Orchestration → LLMs | Settings → Custom LLMs |

Use Router LLMs when you want a managed experience with automatic cost tracking. Use Custom LLMs when you're connecting directly to providers with your own credentials and need to define pricing for accurate cost reporting.
