# Errors and Debugging

### Error Response Structure

The InteractiveAI Router returns errors in a consistent JSON format:

```typescript
type ErrorResponse = {
  error: {
    code: number;
    message: string;
    metadata?: Record<string, unknown>;
  };
};
```

The HTTP status code matches `error.code` when the error stems from:

* An invalid request
* Insufficient credits on your API key or account

Otherwise, the HTTP status is `200 OK`, and any error during generation appears in the response body or as an SSE data event.

#### Handling Errors in Code

{% code overflow="wrap" %}

```typescript
const response = await fetch('https://app.interactive.ai/...');
console.log(response.status); // Will be an error code unless the model started processing your request
const body = await response.json();
console.error(body.error?.code); // Will be an error code
console.error(body.error?.message);
```

{% endcode %}

### Error Codes

| Code | Description                                                      |
| ---- | ---------------------------------------------------------------- |
| 400  | Bad Request: invalid or missing parameters, CORS issues          |
| 401  | Unauthorized: expired OAuth session, disabled or invalid API key |
| 402  | Payment Required: insufficient credits. Add funds and retry.     |
| 403  | Forbidden: input flagged by moderation                           |
| 408  | Request Timeout: request exceeded time limit                     |
| 429  | Too Many Requests: rate limit exceeded                           |
| 502  | Bad Gateway: model unavailable or returned invalid response      |
| 503  | Service Unavailable: no provider meets your routing requirements |
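
As a rule of thumb, `408`, `429`, `502`, and `503` are transient and worth retrying, while `400`, `401`, `402`, and `403` require fixing the request, key, or balance first. Below is a minimal retry sketch along those lines; the retryable status set and backoff schedule are our assumptions, not Router-mandated behavior:

```typescript
// Minimal retry helper with exponential backoff.
// The retryable status set and delays are assumptions, not Router requirements.
const RETRYABLE_STATUSES = new Set([408, 429, 502, 503]);

async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxRetries = 3,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.ok || !RETRYABLE_STATUSES.has(res.status) || attempt === maxRetries) {
      return res; // Success, a non-retryable error, or retries exhausted
    }
    // Back off 1s, 2s, 4s, ... before the next attempt
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
  }
}
```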

### Moderation Errors

When content is flagged, `error.metadata` provides details:

```typescript
type ModerationErrorMetadata = {
  reasons: string[]; // Why your input was flagged
  // The flagged text segment, limited to 100 characters; longer inputs are
  // truncated in the middle and replaced with ...
  flagged_input: string;
  provider_name: string; // The name of the provider that requested moderation
  model_slug: string;
};
```

### Provider Errors

When a provider encounters an error, `error.metadata` contains:

```typescript
type ProviderErrorMetadata = {
  provider_name: string; // The name of the provider that encountered the error
  raw: unknown; // The raw error from the provider
};
```
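
Because `error.metadata` is typed as `Record<string, unknown>`, callers must narrow it before reading fields. A sketch of one way to do that, assuming a `403` carries `ModerationErrorMetadata` and a `502` carries `ProviderErrorMetadata` as described above:

```typescript
function describeError(body: ErrorResponse): string {
  const { code, message, metadata } = body.error;

  if (code === 403 && metadata) {
    // Moderation error: metadata carries ModerationErrorMetadata
    const m = metadata as unknown as ModerationErrorMetadata;
    return `Flagged by ${m.provider_name} (${m.reasons.join(', ')}): "${m.flagged_input}"`;
  }

  if (code === 502 && metadata) {
    // Provider error: metadata carries ProviderErrorMetadata
    const p = metadata as unknown as ProviderErrorMetadata;
    return `Provider ${p.provider_name} failed: ${JSON.stringify(p.raw)}`;
  }

  return `Error ${code}: ${message}`;
}
```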

### Empty Responses

The model may occasionally return no content. Typical causes:

* Cold start initialization periods
* Infrastructure scaling to handle load

Warm-up times range from seconds to several minutes depending on the model and provider.

For persistent issues, implement retry logic or switch to a different provider or model with recent activity.

{% hint style="info" %}
Upstream providers may charge for prompt processing even when no content is generated.
{% endhint %}
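
One pragmatic approach is to treat an empty completion as retryable and fall back to another model. A sketch for a non-streaming request; the fallback model slug is a hypothetical placeholder, not a recommendation:

```typescript
// Retry an empty completion against a fallback model.
// 'openai/gpt-4o-mini' is a hypothetical placeholder; substitute any model slug.
async function completeWithFallback(body: Record<string, unknown>) {
  for (const model of [body.model as string, 'openai/gpt-4o-mini']) {
    const res = await fetch('https://app.interactive.ai/api/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: 'Bearer <LLMROUTER_API_KEY>',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ ...body, model }),
    });
    const json = await res.json();
    if (json.choices?.[0]?.message?.content) return json; // Non-empty: done
    // Empty content: fall through to the next model
  }
  throw new Error('All models returned empty responses');
}
```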

### Streaming Error Formats

Streaming mode (`stream: true`) handles errors differently based on timing.

#### Pre-Stream Errors

Errors occurring before any tokens are sent follow the standard format with appropriate HTTP status codes.

#### Mid-Stream Errors

Errors after streaming begins arrive as SSE events with a unified structure:

```typescript
type MidStreamError = {
  id: string;
  object: 'chat.completion.chunk';
  created: number;
  model: string;
  provider: string;
  error: {
    code: string | number;
    message: string;
  };
  choices: [{
    index: 0;
    delta: { content: '' };
    finish_reason: 'error';
    native_finish_reason?: string;
  }];
};
```

Example SSE data:

```
data: {"id":"cmpl-abc123","object":"chat.completion.chunk","created":1234567890,"model":"anthropic/claude-3-sonnet","provider":"anthropic","error":{"code":"server_error","message":"Provider disconnected"},"choices":[{"index":0,"delta":{"content":""},"finish_reason":"error"}]}
```

Key characteristics:

* The error appears at the top level alongside standard chunk fields
* `choices` contains a single entry with `finish_reason: "error"`
* The HTTP status remains `200 OK` since headers were already sent
* The stream terminates after this event
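
A sketch of a client loop that prints streamed tokens and surfaces a mid-stream error; it assumes the standard `fetch` ReadableStream API and the chunk shape above:

```typescript
// Print streamed tokens and surface a mid-stream error if one arrives.
const res = await fetch(url, init); // A streaming chat completions request
const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  // SSE lines are newline-delimited; keep any trailing partial line
  const lines = buffer.split('\n');
  buffer = lines.pop() ?? '';

  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const data = line.slice(6);
    if (data === '[DONE]') continue;

    const chunk = JSON.parse(data);
    if (chunk.error) {
      // Mid-stream failure: HTTP status was already 200, so this is the only signal
      throw new Error(`${chunk.error.code}: ${chunk.error.message}`);
    }
    process.stdout.write(chunk.choices?.[0]?.delta?.content ?? '');
  }
}
```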

### Responses API Error Events

The Responses API (`/api/alpha/responses`) uses typed events for streaming errors:

1. **`response.failed`** - Official failure event

   ```json
   {
     "type": "response.failed",
     "response": {
       "id": "resp_abc123",
       "status": "failed",
       "error": {
         "code": "server_error",
         "message": "Internal server error"
       }
     }
   }
   ```
2. **`response.error`** - Error during response generation

   ```json
   {
     "type": "response.error",
     "error": {
       "code": "rate_limit_exceeded",
       "message": "Rate limit exceeded"
     }
   }
   ```
3. **`error`** - Plain error event (undocumented but sent by OpenAI)

   ```json
   {
     "type": "error",
     "error": {
       "code": "invalid_api_key",
       "message": "Invalid API key provided"
     }
   }
   ```
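
Since failure can arrive in three shapes, clients should branch on `type` to extract the error consistently. A sketch using the `ResponsesAPIErrorEvent` interface defined later in this section:

```typescript
// Pull the error out of any of the three event shapes.
function extractError(
  event: ResponsesAPIErrorEvent,
): { code: string; message: string } | undefined {
  switch (event.type) {
    case 'response.failed':
      return event.response?.error; // Error lives on the nested response object
    case 'response.error':
    case 'error':
      return event.error; // Error sits at the top level
  }
}
```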

#### Error Code Transformations

The Responses API converts certain errors into successful completions:

| Error Code                | Transformed To | Finish Reason |
| ------------------------- | -------------- | ------------- |
| `context_length_exceeded` | Success        | `length`      |
| `max_tokens_exceeded`     | Success        | `length`      |
| `token_limit_exceeded`    | Success        | `length`      |
| `string_too_long`         | Success        | `length`      |

This allows graceful handling of limit-based errors without treating them as failures.
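
In practice this means checking `finish_reason` rather than the error path for limit-related cases. A brief sketch, where `completion` is a parsed non-streaming response body:

```typescript
// Distinguish truncation from failure on a completed response.
const choice = completion.choices[0];
if (choice.finish_reason === 'length') {
  // A token or context limit was hit; the partial output is still usable
  console.warn('Output truncated at token limit');
} else if (choice.finish_reason === 'error') {
  throw new Error('Generation failed');
}
```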

### API-Specific Error Handling

#### Chat Completions API (`/api/v1/chat/completions`)

* **No tokens sent**: Returns standalone `ErrorResponse`
* **Some tokens sent**: Embeds error in the final response's `choices` array
* **Streaming**: Errors delivered as SSE events with top-level `error` field

#### Responses API (`/api/alpha/responses`)

* **Error transformations**: Certain errors become successful responses with appropriate finish reasons
* **Streaming events**: Uses typed events (`response.failed`, `response.error`, `error`)
* **Graceful degradation**: Handles provider-specific errors with fallback behavior

#### Error Type Definitions

```typescript
// Standard error response
interface ErrorResponse {
  error: {
    code: number;
    message: string;
    metadata?: Record<string, unknown>;
  };
}

// Mid-stream error with completion data
interface StreamErrorChunk {
  error: {
    code: string | number;
    message: string;
  };
  choices: Array<{
    delta: { content: string };
    finish_reason: 'error';
    native_finish_reason?: string;
  }>;
}

// Responses API error event
interface ResponsesAPIErrorEvent {
  type: 'response.failed' | 'response.error' | 'error';
  error?: {
    code: string;
    message: string;
  };
  response?: {
    id: string;
    status: 'failed';
    error: {
      code: string;
      message: string;
    };
  };
}
```

### Debugging

The InteractiveAI Router provides a debug option that reveals the exact request body sent to the upstream provider. This helps you understand how your parameters are transformed for different providers.

#### Debug Option Schema

```typescript
type DebugOptions = {
  echo_upstream_body?: boolean; // If true, returns the transformed request body sent to the provider
};
```

#### Enabling Debug Output

Add the `debug` parameter to your request:

{% tabs %}
{% tab title="TypeScript" %}

```typescript
const response = await fetch('https://app.interactive.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <LLMROUTER_API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'anthropic/claude-3-haiku',
    stream: true, // Debug only works with streaming
    messages: [
      { role: 'system', content: 'You are a data analysis assistant.' },
      { role: 'user', content: 'Parse this CSV and identify anomalies.' },
    ],
    debug: {
      echo_upstream_body: true,
    },
  }),
});

const text = await response.text();

for (const line of text.split('\n')) {
  if (!line.startsWith('data: ')) continue;

  const data = line.slice(6);
  if (data === '[DONE]') break;

  const parsed = JSON.parse(data);

  if (parsed.debug?.echo_upstream_body) {
    console.log('\nDebug:', JSON.stringify(parsed.debug.echo_upstream_body, null, 2));
  }

  process.stdout.write(parsed.choices?.[0]?.delta?.content ?? '');
}
```

{% endtab %}

{% tab title="Python" %}

```python
import requests
import json

response = requests.post(
  url="https://app.interactive.ai/api/v1/chat/completions",
  headers={
    "Authorization": "Bearer <LLMROUTER_API_KEY>",
    "Content-Type": "application/json",
  },
  data=json.dumps({
    "model": "anthropic/claude-3-haiku",
    "stream": True,
    "messages": [
            {"role": "system", "content": "You are a data analysis assistant."},
            {"role": "user", "content": "Parse this CSV and identify anomalies."}
        ],
    "debug": {
      "echo_upstream_body": True
    }
  }),
  stream=True
)

for line in response.iter_lines():
  if line:
    text = line.decode('utf-8')
    if 'echo_upstream_body' in text:
      print(text)
```

{% endtab %}
{% endtabs %}

#### Debug Response Format

With `debug.echo_upstream_body` enabled, the first streaming chunk contains an empty `choices` array and a `debug` field with the transformed request:

```json
{
  "id": "gen-xxxxx",
  "provider": "Anthropic",
  "model": "anthropic/claude-3-haiku",
  "object": "chat.completion.chunk",
  "created": 1234567890,
  "choices": [],
  "debug": {
    "echo_upstream_body": {
      "system": [
        { "type": "text", "text": "You are a helpful assistant." }
      ],
      "messages": [
        { "role": "user", "content": "Parse this CSV and identify anomalies." }
      ],
      "model": "claude-3-haiku-20240307",
      "stream": true,
      "max_tokens": 64000,
      "temperature": 1
    }
  }
}
```

#### Constraints

**Streaming Only**: Debug output works exclusively with streaming requests (`stream: true`) on the Chat Completions API. Non-streaming requests and Responses API requests ignore the debug parameter.

**Development Use Only**: Do not enable debug mode in production. It may expose sensitive request data that should remain private.

#### Use Cases

Debug output helps with:

1. **Inspecting Parameter Transformations**: Observe how the Router converts your parameters into provider-specific formats, including `max_tokens` handling and `temperature` mapping.
2. **Validating Message Formatting**: Review how the Router structures and combines messages for each provider, such as system message concatenation or user message merging.
3. **Identifying Applied Defaults**: Discover which default values the Router injects when you omit parameters from your request.
4. **Troubleshooting Provider Fallbacks**: When fallbacks are configured, a debug chunk is emitted for **each provider attempt**, letting you trace which providers were contacted and what payload each received (see the sketch below).
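
For example, reusing the parsing loop from the TypeScript tab above, a small collector can record each upstream payload keyed by the provider that emitted it:

```typescript
// Collect one upstream payload per provider attempt, reusing `text` from
// the TypeScript tab above (the full streaming response body).
const upstreamBodies: Record<string, unknown> = {};

for (const line of text.split('\n')) {
  if (!line.startsWith('data: ')) continue;
  const data = line.slice(6);
  if (data === '[DONE]') break;

  const chunk = JSON.parse(data);
  if (chunk.debug?.echo_upstream_body) {
    // With fallbacks, each provider attempt emits its own debug chunk
    upstreamBodies[chunk.provider] = chunk.debug.echo_upstream_body;
  }
}

console.log(Object.keys(upstreamBodies)); // e.g. ["Anthropic"] plus any fallbacks
```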
