# Playground

The Playground is an interactive environment for **testing prompts** before committing changes to your Prompt Repository. Rather than deploying untested prompts to production and hoping for the best, you can experiment with different templates, adjust model parameters, and see results immediately, all without touching your application code.

### Why the Playground Matters

Deploying untested prompts to production is risky. The Playground provides a safe space to:

* **Iterate rapidly**: Test prompt variations and see results in seconds
* **Validate before shipping**: Catch issues before they affect users
* **Experiment with models**: Compare how different providers and models respond to the same prompt
* **Test variable combinations**: Verify your prompt handles different input values correctly
* **Debug production issues**: Load prompts from traces to reproduce and diagnose problems

{% hint style="info" %}
For the full Prompts API reference including caching, fallbacks, and deletion, see the [SDK Documentation](https://app.gitbook.com/s/jHEEbkpMbUW2x51XS8Ez/prompts).
{% endhint %}

***

### Accessing the Playground

Navigate to **Build → Playground** to open a fresh session. You can also **load existing prompts** or trace data directly into the Playground:

**From the Prompt Repository:**

1. Open a prompt detail view
2. Click the **Test in LLM playground** button (terminal icon in the top-right)
3. The Playground opens with your prompt pre-loaded

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2Fs7M5DxMRbS1RN8Bc4aBn%2FClipboard-20260204-173616-147.gif?alt=media&#x26;token=f08327ad-f3be-4506-82f2-b2b42de720e6" alt=""><figcaption></figcaption></figure></div>

**From a Trace:**

1. Open any trace detail view
2. Click on any generation
3. Click the **>\_ Playground** button
4. The Playground opens with the exact prompt that produced that output

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FuubSre2m8Bot3mKQ1GjB%2FClipboard-20260311-123649-818.gif?alt=media&#x26;token=dc5d46e3-8ceb-46f9-a1c4-eefa83dcd189" alt=""><figcaption></figcaption></figure></div>

{% hint style="info" %}
This trace-to-playground workflow is particularly useful for debugging: when a production generation produces unexpected results, you can load it directly into the Playground to experiment with fixes.
{% endhint %}

***

### Interface Overview

The Playground interface is divided into three main areas that work together:

{% tabs %}
{% tab title="Prompt Area" %}
The Prompt Area is the main **editing space** where you compose your instructions. The Playground uses a chat-based format with message rows for different roles such as System, User, and Assistant. If you need a simpler text-based format, delete the User and Assistant rows and work with just the System message.

Use `{{variable_name}}` syntax to define placeholders that you'll fill in from the Variables panel before running.
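
As an illustration of how `{{variable_name}}` placeholders behave (a minimal sketch, not the Playground's internal implementation), substitution works like a simple template render:

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values from `variables`.

    Illustrative only -- the Playground performs this substitution for you
    when you fill in the Variables panel before running.
    """
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"No value provided for variable '{name}'")
        return str(variables[name])

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

prompt = "Summarize the following text in {{language}}: {{text}}"
print(render_prompt(prompt, {"language": "French", "text": "Hello world"}))
# Summarize the following text in French: Hello world
```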
{% endtab %}

{% tab title="Configuration Panel" %}
The Configuration Panel is located on the right side of the screen and contains several sections:

| Section               | Purpose                                                                               |
| --------------------- | ------------------------------------------------------------------------------------- |
| **Model**             | Select the InteractiveAI Router and a specific model, or connect an external provider |
| **Tools**             | Define function-calling tools for the LLM to invoke                                   |
| **Structured Output** | Constrain the model's response to a specific JSON schema                              |
| **Variables**         | Enter test values for placeholders detected in your prompt                            |
{% endtab %}

{% tab title="Output Panel" %}
The Output Panel, located at the bottom of the screen, displays **the model's response** after execution. Results include the complete response structure along with the generated content.
{% endtab %}
{% endtabs %}
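
To make the **Structured Output** section concrete, here is a sketch of the kind of JSON schema you might supply and a check that a response conforms to it. The field names (`sentiment`, `confidence`) are illustrative assumptions, not a required format:

```python
import json

# An illustrative JSON Schema -- field names are examples only.
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}

def check_response(raw: str) -> dict:
    """Parse a model response and verify the schema's required keys are present."""
    data = json.loads(raw)
    missing = [key for key in schema["required"] if key not in data]
    if missing:
        raise ValueError(f"Response missing required keys: {missing}")
    return data

result = check_response('{"sentiment": "positive", "confidence": 0.92}')
print(result["sentiment"])  # positive
```

Constraining the response this way makes Playground runs directly comparable: every output has the same shape, so you can spot regressions between prompt versions at a glance.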

***

### Executing a Prompt

1. **Compose your prompt** by entering messages in the prompt area, using `{{variables}}` for dynamic content
2. **Select the LLM Router** as the provider and **choose your preferred model**
3. **Fill in variables** with test values in the Variables section
4. (Optional) **Configure tools or schemas** if your prompt requires them
5. Click **Submit**

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FvP1chkzod4jnpHXmbCBr%2FClipboard-20260205-111911-760.gif?alt=media&#x26;token=3b2fca9f-bec8-4cc1-84d5-619bd4324ab6" alt=""><figcaption></figcaption></figure></div>

The model's response appears in the output panel. Review the result, adjust your prompt as needed, and run again.

### Translating Content

Click the translate icon in the top-right corner of the Playground to translate message inputs and model outputs into your preferred language. Variables are not translated.

***

### Saving Prompts

Once you're satisfied with your prompt, save it to the Prompt Repository:

1. Click **Save as prompt** in the top-right corner
2. Choose one of:
   * **Save as new prompt**: Creates a new prompt in the repository. Enter a name for the prompt.
   * **Save as new prompt version**: Adds a new version to an existing prompt. Search for the prompt by name.

This workflow enables a natural iteration cycle: test in the Playground until the prompt behaves correctly, save it to the repository, then promote it to production using labels when you're ready.


***

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.interactive.ai/build/playground.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
