Playground

The Playground is an interactive environment for testing prompts before committing changes to your Prompt Repository. Rather than deploying untested prompts to production and hoping for the best, you can experiment with different templates, adjust model parameters, and see results immediately, all without touching your application code.

Why the Playground Matters

Deploying untested prompts to production is risky. The Playground provides a safe space to:

  • Iterate rapidly: Test prompt variations and see results in seconds

  • Validate before shipping: Catch issues before they affect users

  • Experiment with models: Compare how different providers and models respond to the same prompt

  • Test variable combinations: Verify your prompt handles different input values correctly

  • Debug production issues: Load prompts from traces to reproduce and diagnose problems


Accessing the Playground

Navigate to Orchestration → Playground to open a fresh session. You can also load existing prompts or trace data directly into the Playground:

From the Prompt Repository:

  1. Open a prompt detail view

  2. Click the Test in LLM playground button (terminal icon in the top-right)

  3. The Playground opens with your prompt pre-loaded

From a Trace:

  1. Open any trace detail view

  2. Click on any generation

  3. Click the >_ Playground button

  4. The Playground opens with the exact prompt that produced that output

This trace-to-playground workflow is particularly useful for debugging: when a production generation produces unexpected results, you can load it directly into the Playground to experiment with fixes.


Interface Overview

The Playground interface is divided into three main areas that work together:

The Prompt Area is the main editing space where you compose your instructions. The Playground uses a chat-based format with message rows for different roles such as System, User, and Assistant. If you need a simpler text-based format, delete the User and Assistant rows and work with just the System message.

Use {{variable_name}} syntax to define placeholders that you'll fill in from the Variables panel before running.
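The substitution behaves like standard mustache-style templating. As an illustration (this is a sketch of the behavior, not the Playground's actual implementation), the following Python function renders a template the same way the Variables panel does, raising an error if a placeholder is left unfilled:

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{variable_name}} placeholder with its value.

    Illustrative only: the Playground performs this substitution
    for you when you fill in the Variables panel.
    """
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"No value provided for variable '{name}'")
        return str(variables[name])

    # Allow optional whitespace inside the braces: {{ name }}
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

template = "Summarize the following {{doc_type}} in {{word_limit}} words."
print(render_prompt(template, {"doc_type": "support ticket", "word_limit": 50}))
# → Summarize the following support ticket in 50 words.
```

Filling in every variable before running matters because an unfilled placeholder reaches the model as literal `{{variable_name}}` text, which usually degrades the response.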


Running a Prompt

  1. Compose your prompt by entering messages in the prompt area, using {{variables}} for dynamic content

  2. Select the LLM Router as a provider and choose your preferred model

  3. Fill in variables with test values in the Variables section

  4. (Optional) Configure tools or schemas if your prompt requires them

  5. Click Submit

The model's response appears in the output panel. Review the result, adjust your prompt as needed, and run again.


Saving Prompts

Once you're satisfied with your prompt, save it to the Prompt Repository:

  1. Click Save as prompt in the top-right corner

  2. Choose one of:

    • Save as new prompt: Creates a new prompt in the repository. Enter a name for the prompt.

    • Save as new prompt version: Adds a new version to an existing prompt. Search for the prompt by name.

This workflow enables a natural iteration cycle: test in the Playground until the prompt behaves correctly, save it to the repository, then promote it to production using labels when you're ready.
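To make the promotion step concrete, here is a minimal sketch of label-based version resolution, assuming a simple version/label data model (the actual repository schema may differ): a label such as `production` points at exactly one version, and promoting a new version means moving the label.

```python
# Hypothetical data model: each saved prompt version carries a list of labels.
prompt_versions = [
    {"version": 1, "labels": ["production"]},
    {"version": 2, "labels": []},          # saved from the Playground, not yet promoted
    {"version": 3, "labels": ["staging"]},
]

def resolve(versions, label):
    """Return the version currently carrying the given label, or None."""
    for v in versions:
        if label in v["labels"]:
            return v
    return None

def promote(versions, version_number, label):
    """Move a label to the given version, removing it from any other."""
    for v in versions:
        if label in v["labels"]:
            v["labels"].remove(label)
    for v in versions:
        if v["version"] == version_number:
            v["labels"].append(label)

promote(prompt_versions, 3, "production")
print(resolve(prompt_versions, "production")["version"])  # → 3
```

Because your application resolves prompts by label rather than by version number, promoting a tested version is an atomic switch and requires no code deployment.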
