Playground
The Playground is an interactive environment for testing prompts before committing changes to your Prompt Repository. Rather than deploying untested prompts to production and hoping for the best, you can experiment with different templates, adjust model parameters, and see results immediately, all without touching your application code.
Why the Playground Matters
Deploying untested prompts to production is risky. The Playground provides a safe space to:
Iterate rapidly: Test prompt variations and see results in seconds
Validate before shipping: Catch issues before they affect users
Experiment with models: Compare how different providers and models respond to the same prompt
Test variable combinations: Verify your prompt handles different input values correctly
Debug production issues: Load prompts from traces to reproduce and diagnose problems
Accessing the Playground
Navigate to Orchestration → Playground to open a fresh session. You can also load existing prompts or trace data directly into the Playground:
From the Prompt Repository:
Open a prompt detail view
Click the Test in LLM playground button (terminal icon in the top-right)
The Playground opens with your prompt pre-loaded

From a Trace:
Open any trace detail view
Click on any generation
Click the >_ Playground button
The Playground opens with the exact prompt that produced that output

This trace-to-playground workflow is particularly useful for debugging: when a production generation produces unexpected results, you can load it directly into the Playground to experiment with fixes.
Interface Overview
The Playground interface is divided into three main areas that work together:
The Prompt Area is the main editing space where you compose your instructions. The Playground uses a chat-based format with message rows for different roles such as System, User, and Assistant. If you need a simpler text-based format, delete the User and Assistant rows and work with just the System message.
Use {{variable_name}} syntax to define placeholders that you'll fill in from the Variables panel before running.
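To illustrate the convention, here is a minimal sketch of how double-brace placeholders are typically substituted before a prompt is sent to the model. The helper name and regex are illustrative, not part of the product:

```python
import re


def render_prompt(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values from `variables`."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"No value provided for placeholder: {name}")
        return str(variables[name])

    # Match {{name}}, tolerating whitespace inside the braces.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)


template = "Summarize the following {{doc_type}} in {{word_limit}} words."
print(render_prompt(template, {"doc_type": "support ticket", "word_limit": 50}))
# → Summarize the following support ticket in 50 words.
```

The Variables panel plays the role of the `variables` dict here: every placeholder it detects must receive a test value before the prompt can run.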
The Configuration Panel sits on the right side of the screen and contains several sections:
Model
Select the InteractiveAI Router and a specific model, or connect an external provider
Tools
Define function-calling tools for the LLM to invoke
Structured Output
Constrain the model's response to a specific JSON schema
Variables
Enter test values for placeholders detected in your prompt
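Tool and structured-output definitions generally follow JSON-schema conventions. As a sketch, assuming an OpenAI-style function-calling format (the exact shape the Playground expects may vary by provider, and the names below are hypothetical), a tool and a response schema might look like:

```python
# Hypothetical tool definition: lets the model request an order lookup.
lookup_order_tool = {
    "type": "function",
    "function": {
        "name": "lookup_order",
        "description": "Fetch an order's status by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}

# Hypothetical structured-output schema: constrains the response
# to a fixed JSON shape instead of free-form text.
response_schema = {
    "type": "object",
    "properties": {
        "sentiment": {
            "type": "string",
            "enum": ["positive", "neutral", "negative"],
        },
        "summary": {"type": "string"},
    },
    "required": ["sentiment", "summary"],
}
```

Constraining output this way is useful when downstream code parses the model's response rather than showing it to a user directly.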
The Output Panel sits at the bottom of the screen and displays the model's response after execution. Results return the complete response structure, including the generated content.
Executing a Prompt
Compose your prompt by entering messages in the prompt area, using {{variables}} for dynamic content
Select the LLM Router as a provider and choose your preferred model
Fill in variables with test values in the Variables section
(Optional) Configure tools or schemas if your prompt requires them
Click Submit

The model's response appears in the output panel. Review the result, adjust your prompt as needed, and run again.
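Conceptually, a run combines the rendered messages with the model configuration into a single request. A minimal sketch, using hypothetical names and an OpenAI-style message format (the actual payload the Router sends may differ):

```python
def build_request(system_template, user_template, variables,
                  model="gpt-4o", temperature=0.7):
    """Assemble a chat request from templates, test values, and model settings."""
    def render(template):
        # Simple double-brace substitution, mirroring the Variables panel.
        for name, value in variables.items():
            template = template.replace("{{" + name + "}}", str(value))
        return template

    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": render(system_template)},
            {"role": "user", "content": render(user_template)},
        ],
    }


req = build_request(
    "You are a {{tone}} support assistant.",
    "Customer message: {{message}}",
    {"tone": "friendly", "message": "Where is my order?"},
)
```

Each iteration in the Playground effectively rebuilds this request with your latest edits and re-submits it, which is what makes the adjust-and-rerun loop so fast.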
Saving Prompts
Once you're satisfied with your prompt, save it to the Prompt Repository:
Click Save as prompt in the top-right corner
Choose one of:
Save as new prompt: Creates a new prompt in the repository. Enter a name for the prompt.
Save as new prompt version: Adds a new version to an existing prompt. Search for the prompt by name.
This workflow enables a natural iteration cycle: test in the Playground until the prompt behaves correctly, save it to the repository, then promote it to production using labels when you're ready.