# Custom LLMs

The Custom LLMs section lets you **connect** your own language model providers to InteractiveAI. Once connected, these models become available in the **Playground** for prompt testing and in **Evaluators** for automated quality assessment. Your provider bills you directly for usage; InteractiveAI simply routes requests through your credentials.

This section serves two purposes. **Connections** store your API keys for providers like OpenAI, Anthropic, and Google AI Studio. **Configurations** define pricing and tokenization settings so InteractiveAI can accurately calculate costs when you use models through direct API integrations rather than the InteractiveAI Router.

***

### Connections

Connections store the **credentials** InteractiveAI uses to communicate with external LLM providers. Each connection links a provider name to an API key and endpoint configuration.

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FawMl2lQ3J7gjne03yVmy%2FScreenshot%202026-03-11%20at%2012.37.34.png?alt=media&#x26;token=6d0460ce-ce33-4672-92aa-d19c42f648bb" alt=""><figcaption></figcaption></figure></div>

#### Viewing Connections

The Connections tab displays all configured providers in a table showing:

| Column       | Description                                                    |
| ------------ | -------------------------------------------------------------- |
| **Provider** | Display name you assigned to this connection                   |
| **Adapter**  | The provider type (e.g., `openai`, `google-ai-studio`)         |
| **Base URL** | API endpoint (`default` uses the provider's standard endpoint) |
| **API Key**  | Masked key showing the last few characters                     |

#### Adding a Connection

Click the **+** button in the top-right corner to open the Add LLM Connection modal.

| Field             | Description                                                                                                               |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------- |
| **Provider name** | A display name to identify this connection within InteractiveAI (e.g., "OpenAI Production", "Gemini")                     |
| **LLM adapter**   | The provider type that determines the API schema. Options include `openai`, `google-ai-studio`, `anthropic`, and others   |
| **API Base URL**  | Leave as `default` to use the provider's standard endpoint, or enter a custom URL for self-hosted or proxy configurations |
| **API Key**       | Your provider's API key. Stored encrypted in the database                                                                 |
| **Extra Headers** | Optional HTTP headers to include with requests (also stored encrypted)                                                    |

{% hint style="info" %}
You can create multiple connections for the same provider. This is useful for separating production and development keys, or for connecting to different accounts.
{% endhint %}

***

### Configurations

Configurations define model metadata for **cost tracking** and **tokenization**. When you use models through direct API integrations (not through the InteractiveAI Router), these definitions tell the platform how to calculate costs based on token usage.

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FB67fHLECnLCflnNFn1gb%2Fimage.png?alt=media&#x26;token=dcd25ea6-5f5c-4cc5-a85d-cc2ae0671964" alt=""><figcaption></figcaption></figure></div>

#### Viewing Configurations

The Configurations tab displays all model definitions in a table showing:

| Column              | Description                                                |
| ------------------- | ---------------------------------------------------------- |
| **Model Name**      | Identifier for the model (e.g., `gemini-2.5-pro`, `gpt-5`) |
| **Prices per unit** | Number of pricing rules configured                         |
| **Provider**        | Who maintains this definition (User or System)             |
| **Match Pattern**   | Regex pattern used to identify this model in traces        |
| **Tokenizer**       | Tokenization method for counting tokens                    |
| **Created**         | When the definition was added                              |
| **Last Used**       | Most recent usage of this model                            |

#### Adding a Model Definition

Click **Add Model Definition** to open the configuration modal.

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FA96viYhWmb8lelY1VXtx%2Fimage.png?alt=media&#x26;token=de4f542e-f348-4414-ad0c-d45ad322b761" alt=""><figcaption></figcaption></figure></div>

**Model Details**

| Field             | Description                                                                                                         |
| ----------------- | ------------------------------------------------------------------------------------------------------------------- |
| **Model Name**    | The model identifier as it appears in API calls (e.g., `gpt-4-turbo`, `claude-3-opus`)                              |
| **Match Pattern** | A regex pattern to match this model in your traces. For example, `(?i)^(gpt-5)$` matches "gpt-5" case-insensitively |
| **Tokenizer**     | The tokenization method used to count tokens. Select the appropriate tokenizer for accurate cost calculation        |
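The match pattern from the table above is a standard regular expression, so it can be sanity-checked before saving. This is a standalone sketch using Python's `re` module and the example pattern from the table; the model names are purely illustrative:

```python
import re

# The example match pattern from above: case-insensitive exact match on "gpt-5".
pattern = re.compile(r"(?i)^(gpt-5)$")

print(bool(pattern.match("gpt-5")))       # True: exact name matches
print(bool(pattern.match("GPT-5")))       # True: (?i) makes the match case-insensitive
print(bool(pattern.match("gpt-5-mini")))  # False: ^...$ anchors reject longer names
```

Anchoring with `^` and `$` matters: without them, a pattern like `gpt-5` would also match `gpt-5-mini` and attribute its costs to the wrong definition.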

**Prices**

Set prices per usage type. Usage types must exactly match the keys in your ingested usage details.

* For OpenAI and compatible providers, typical usage types are:
  * `input` — Price per input token
  * `output` — Price per output token
* For Anthropic models, you may also configure:
  * `input` — Price per input token
  * `output` — Price per output token
  * `cache_read` — Price per cached input token

Click **+ Add Price** to add additional usage types as needed.

**Price Preview**

The modal displays a live preview showing your configured prices at different scales:

| Usage Type | Per Unit  | Per 1K | Per 1M |
| ---------- | --------- | ------ | ------ |
| input      | $0.000001 | $0.001 | $1     |
| output     | $0.000002 | $0.002 | $2     |

This helps you verify that pricing is configured correctly before saving.
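The preview values follow directly from the per-unit price: multiplying by 1,000 and 1,000,000 yields the per-1K and per-1M figures. A quick sketch using the example prices from the table above, useful for cross-checking against a rate card quoted in dollars per million tokens:

```python
# Per-unit prices from the example preview above (illustrative values).
per_unit = {"input": 0.000001, "output": 0.000002}

for usage_type, price in per_unit.items():
    # Per-1K and per-1M are straight multiples of the per-unit price.
    print(f"{usage_type}: per unit ${price:.6f}, "
          f"per 1K ${price * 1_000:.3f}, per 1M ${price * 1_000_000:.0f}")
```

Providers usually publish rates per million tokens, so dividing a published rate by 1,000,000 gives the per-unit price to enter in the modal.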

***

### When to Use Custom LLMs

Custom LLMs are essential when you want to:

* **Use the Playground:** Test prompts interactively against your preferred models
* **Run Evaluators:** Power LLM-as-a-Judge evaluations with your own model credentials
* **Track costs accurately:** Define pricing for models used through direct integrations so dashboards reflect actual spending

{% hint style="warning" %}
If you're using the **InteractiveAI Router** for model access, you don't need to configure Custom LLMs; the Router handles provider connections and cost tracking automatically.
{% endhint %}
