# Parameters

Sampling parameters shape how the model selects tokens during generation. Include any of the parameters below in your requests to the InteractiveAI Router.

When a parameter is omitted, the Router applies default values (for instance, `temperature` defaults to `1.0`). Provider-specific parameters like `safe_prompt` for Mistral or `raw_mode` for Hyperbolic pass directly to those providers when included.

Consult the model's provider documentation to verify which parameters are supported.
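
For illustration, a minimal request might look like the sketch below. It assumes the Router exposes an OpenAI-compatible chat completions endpoint; the URL, model name, API key handling, and response shape shown here are placeholders for that assumption, not the Router's confirmed interface.

```python
import os
import requests

# Hypothetical endpoint and model name -- substitute the Router's actual values.
API_URL = "https://api.example-interactiveai-router.com/v1/chat/completions"

payload = {
    "model": "example-org/example-model",
    "messages": [{"role": "user", "content": "Write a haiku about routers."}],
    # Sampling parameters; any omitted parameter falls back to its default.
    "temperature": 0.7,   # defaults to 1.0 when omitted
    "top_p": 0.9,
    "top_k": 40,
    "max_tokens": 256,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['ROUTER_API_KEY']}"},
    json=payload,
    timeout=60,
)
# Assumes an OpenAI-style response body.
print(response.json()["choices"][0]["message"]["content"])
```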

### Temperature

| Property | Value         |
| -------- | ------------- |
| Key      | `temperature` |
| Type     | float         |
| Range    | 0.0 to 2.0    |
| Default  | 1.0           |

Governs randomness in model output. Lower values yield more predictable, consistent responses. Higher values produce more varied and creative output. Setting this to `0` makes responses deterministic for identical inputs.

### Top P

| Property | Value      |
| -------- | ---------- |
| Key      | `top_p`    |
| Type     | float      |
| Range    | 0.0 to 1.0 |
| Default  | 1.0        |

Limits token selection to a cumulative probability threshold. The model considers only tokens whose combined probabilities reach this value. Lower settings narrow the output distribution; the default includes all tokens. Acts as a dynamic alternative to Top K.

### Top K

| Property | Value      |
| -------- | ---------- |
| Key      | `top_k`    |
| Type     | integer    |
| Range    | 0 or above |
| Default  | 0          |

Restricts token selection to a fixed number of candidates at each step. A value of `1` forces deterministic output by always selecting the highest-probability token. The default (`0`) disables this constraint entirely.

### Frequency Penalty

| Property | Value               |
| -------- | ------------------- |
| Key      | `frequency_penalty` |
| Type     | float               |
| Range    | -2.0 to 2.0         |
| Default  | 0.0                 |

Penalizes tokens proportionally to how often they appear in the input. Higher values discourage repetition of frequent terms. Negative values encourage their reuse.

### Presence Penalty

| Property | Value              |
| -------- | ------------------ |
| Key      | `presence_penalty` |
| Type     | float              |
| Range    | -2.0 to 2.0        |
| Default  | 0.0                |

Adjusts the likelihood of repeating any token that has appeared in the input. Unlike frequency penalty, this applies equally regardless of occurrence count. Higher values reduce repetition; negative values encourage it.
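
To make the distinction between the two penalties concrete, the sketch below adjusts a toy logit vector using the additive form OpenAI documents for these parameters; whether every provider behind the Router implements exactly this formula is an assumption.

```python
from collections import Counter

def apply_penalties(logits, prior_tokens, frequency_penalty=0.0, presence_penalty=0.0):
    """Sketch: frequency penalty scales with occurrence count, presence penalty is flat."""
    counts = Counter(prior_tokens)
    adjusted = dict(logits)
    for token_id, count in counts.items():
        if token_id in adjusted:
            adjusted[token_id] -= count * frequency_penalty  # grows with repetition
            adjusted[token_id] -= presence_penalty           # applied once per seen token
    return adjusted

# Token 7 appeared three times and token 9 once: the frequency penalty hits 7
# harder, while the presence penalty lowers both by the same amount.
print(apply_penalties({7: 2.0, 9: 2.0, 11: 2.0}, [7, 7, 7, 9],
                      frequency_penalty=0.5, presence_penalty=0.5))
```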

### Repetition Penalty

| Property | Value                |
| -------- | -------------------- |
| Key      | `repetition_penalty` |
| Type     | float                |
| Range    | 0.0 to 2.0           |
| Default  | 1.0                  |

Discourages the model from reusing tokens from the input. Excessive values can degrade output quality, often resulting in fragmented sentences missing connector words. The penalty scales with the original token's probability.
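
One common formulation, used by several open-source samplers and assumed here rather than confirmed for any particular provider, divides a previously seen token's logit by the penalty when the logit is positive and multiplies it when negative, so tokens with stronger original probabilities are penalized more:

```python
def apply_repetition_penalty(logits, prior_tokens, penalty=1.1):
    """Multiplicative repetition-penalty sketch over tokens already seen."""
    adjusted = dict(logits)
    for token_id in set(prior_tokens):
        if token_id in adjusted:
            if adjusted[token_id] > 0:
                adjusted[token_id] /= penalty   # shrink positive logits
            else:
                adjusted[token_id] *= penalty   # push negative logits further down
    return adjusted

print(apply_repetition_penalty({7: 3.0, 9: -1.0, 11: 2.0}, [7, 9], penalty=1.2))
```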

### Min P

| Property | Value      |
| -------- | ---------- |
| Key      | `min_p`    |
| Type     | float      |
| Range    | 0.0 to 1.0 |
| Default  | 0.0        |

Establishes a minimum probability threshold relative to the top token. Tokens falling below this fraction of the highest-probability token are excluded. A value of `0.1` means only tokens with at least 10% of the top token's probability are considered.
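
As a worked example of that rule, the sketch below filters a toy probability distribution with `min_p = 0.1`; it illustrates the thresholding described above, not any provider's exact implementation.

```python
def min_p_filter(probs, min_p=0.1):
    """Keep only tokens whose probability is at least min_p * the top probability."""
    threshold = min_p * max(probs.values())
    return {tok: p for tok, p in probs.items() if p >= threshold}

probs = {"cat": 0.60, "dog": 0.25, "fish": 0.10, "newt": 0.05}
# Threshold = 0.1 * 0.60 = 0.06, so "newt" (0.05) is excluded.
print(min_p_filter(probs, min_p=0.1))
```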

### Top A

| Property | Value      |
| -------- | ---------- |
| Key      | `top_a`    |
| Type     | float      |
| Range    | 0.0 to 1.0 |
| Default  | 0.0        |

Filters candidates based on their probability relative to the leading token. Functions as a dynamic Top P. Lower values narrow selection toward high-confidence tokens without directly affecting creativity.

### Seed

| Property | Value   |
| -------- | ------- |
| Key      | `seed`  |
| Type     | integer |
| Required | No      |

Enables deterministic sampling. Requests with identical seeds and parameters should return identical results. Not all models guarantee determinism.
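
The snippet below sketches the idea, reusing the hypothetical OpenAI-compatible endpoint from the earlier example; with a fixed `seed` and otherwise identical parameters, the two responses should match where the model supports determinism.

```python
import os
import requests

API_URL = "https://api.example-interactiveai-router.com/v1/chat/completions"  # placeholder
payload = {
    "model": "example-org/example-model",  # placeholder
    "messages": [{"role": "user", "content": "Name three primary colors."}],
    "temperature": 0.0,
    "seed": 42,  # identical seed + parameters -> identical output, where supported
}
headers = {"Authorization": f"Bearer {os.environ['ROUTER_API_KEY']}"}

first = requests.post(API_URL, headers=headers, json=payload, timeout=60).json()
second = requests.post(API_URL, headers=headers, json=payload, timeout=60).json()
# Not all models guarantee this comparison returns True.
print(first["choices"][0]["message"]["content"] == second["choices"][0]["message"]["content"])
```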

### Max Tokens

| Property | Value        |
| -------- | ------------ |
| Key      | `max_tokens` |
| Type     | integer      |
| Range    | 1 or above   |

Caps the number of tokens the model can generate in a single response. The maximum allowable value equals the model's context length minus the prompt length.

### Logit Bias

| Property | Value        |
| -------- | ------------ |
| Key      | `logit_bias` |
| Type     | object       |

A JSON object mapping token IDs to bias values between `-100` and `100`. These values modify the model's logits before sampling. Values near `-1` or `1` subtly shift selection probability. Extreme values (`-100` or `100`) effectively ban or guarantee selection of specific tokens.
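
For example, the payload fragment below biases two token IDs; real IDs depend on the serving model's tokenizer, so the numbers shown are placeholders only.

```python
payload = {
    "model": "example-org/example-model",  # placeholder
    "messages": [{"role": "user", "content": "Continue the story."}],
    "logit_bias": {
        "50256": -100,  # effectively ban this token ID
        "1234": 5,      # mildly favor this token ID
    },
}
```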

### Logprobs

| Property | Value      |
| -------- | ---------- |
| Key      | `logprobs` |
| Type     | boolean    |

When `true`, the response includes log probabilities for each generated token.

### Top Logprobs

| Property | Value          |
| -------- | -------------- |
| Key      | `top_logprobs` |
| Type     | integer        |
| Range    | 0 to 20        |

Specifies how many top-probability tokens to return at each position, along with their log probabilities. Requires `logprobs` to be `true`.
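
A request combining the two might look like the sketch below; the response shape described in the comments follows the OpenAI-style `logprobs` structure and is an assumption about the Router's output rather than a guarantee.

```python
payload = {
    "model": "example-org/example-model",  # placeholder
    "messages": [{"role": "user", "content": "Say hello."}],
    "logprobs": True,
    "top_logprobs": 5,  # requires logprobs to be true
}
# In an OpenAI-style response, per-token data would appear under
# choices[0]["logprobs"]["content"], each entry carrying its own logprob
# plus the five most likely alternatives at that position.
```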

### Response Format

| Property | Value             |
| -------- | ----------------- |
| Key      | `response_format` |
| Type     | object            |

Forces the model to produce output in a specific format. Setting `{ "type": "json_object" }` activates JSON mode, ensuring valid JSON output.

{% hint style="info" %}
When using JSON mode, include an instruction in your system or user message directing the model to produce JSON.
{% endhint %}
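
A minimal JSON-mode request might look like the following sketch (the endpoint and model name remain placeholders, as before); note the explicit instruction to respond in JSON, per the hint above.

```python
payload = {
    "model": "example-org/example-model",  # placeholder
    "messages": [
        {"role": "system", "content": "Respond only with a valid JSON object."},
        {"role": "user", "content": "List two colors with their hex codes."},
    ],
    "response_format": {"type": "json_object"},
}
```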

### Structured Outputs

| Property | Value                |
| -------- | -------------------- |
| Key      | `structured_outputs` |
| Type     | boolean              |

Indicates whether the model supports structured output via `response_format` with `json_schema`.
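
When a model does support it, a `json_schema` request typically follows the OpenAI structured-output shape sketched below; the schema name and fields are illustrative assumptions.

```python
payload = {
    "model": "example-org/example-model",  # placeholder
    "messages": [{"role": "user", "content": "Extract the city and country from: 'I live in Kyoto, Japan.'"}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "location",          # hypothetical schema name
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
                "additionalProperties": False,
            },
        },
    },
}
```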

### Stop

| Property | Value  |
| -------- | ------ |
| Key      | `stop` |
| Type     | array  |

Terminates generation immediately when the model produces any token in this array.
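
For instance, a request that cuts generation off at a blank line or a speaker label could look like this sketch:

```python
payload = {
    "model": "example-org/example-model",  # placeholder
    "messages": [{"role": "user", "content": "Write one paragraph about routers."}],
    "stop": ["\n\n", "User:"],  # generation halts as soon as either sequence is produced
}
```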

### Tools

| Property | Value   |
| -------- | ------- |
| Key      | `tools` |
| Type     | array   |

Defines available tools using the OpenAI tool calling format. The Router transforms this format as needed for non-OpenAI providers.
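
A single tool definition in that format might look like the sketch below; the function name and schema are illustrative.

```python
payload = {
    "model": "example-org/example-model",  # placeholder
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"},
                    },
                    "required": ["city"],
                },
            },
        }
    ],
}
```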

### Tool Choice

| Property | Value            |
| -------- | ---------------- |
| Key      | `tool_choice`    |
| Type     | string or object |

Controls tool invocation behavior:

* `none`: Prevents tool calls; generates a message instead
* `auto`: Model decides whether to call tools or generate a message
* `required`: Forces at least one tool call
* `{"type": "function", "function": {"name": "my_function"}}`: Forces a specific tool call

### Parallel Tool Calls

| Property | Value                 |
| -------- | --------------------- |
| Key      | `parallel_tool_calls` |
| Type     | boolean               |
| Default  | true                  |

Controls whether multiple tools can execute simultaneously. When `false`, tools execute sequentially. Only applies when tools are provided.

### Verbosity

| Property | Value             |
| -------- | ----------------- |
| Key      | `verbosity`       |
| Type     | enum              |
| Options  | low, medium, high |
| Default  | medium            |

Adjusts response length and detail. Lower values produce concise output; higher values generate more comprehensive responses.
