# Prompts

Prompts are the **instructions** that drive your AI system's behavior. The Prompt Repository provides infrastructure for managing these instructions independently of your application code: version prompts, test changes, and make updates without redeploying your application.

### Why Prompt Management Matters

Hardcoding prompts in application code creates friction. Every change requires a code deployment, making iteration slow and risky. The Prompt Repository solves this by decoupling prompt content from application logic:

* **Iterate without deploying**: Update prompts instantly through the UI or SDK without touching your codebase.
* **Version everything**: Every change creates a new version with full history, enabling rollback and comparison.
* **Control releases with labels**: Use labels like `production` and `staging` to control which version your application fetches.
* **Track usage**: See exactly which prompts are being used in production through linked generations.
* **Test before shipping**: Use the [Playground](https://docs.interactive.ai/build/playground) to validate changes before promoting to production.

***

### Prompts Overview

The Prompt Repository is your central library for all prompts.

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FeWnDKEHcarOt2GmNJHHQ%2Fimage.png?alt=media&#x26;token=f1d40762-8a58-4043-a994-830b41dba5ac" alt=""><figcaption></figcaption></figure></div>

This view lists all the prompts in your project, along with the following properties:

| Property         | Description                                                                              |
| ---------------- | ---------------------------------------------------------------------------------------- |
| **Name**         | Prompt identifier. Use `/` notation for folder organization (e.g., `support/escalation`) |
| **Versions**     | Number of versions saved for this prompt                                                 |
| **Type**         | `text` (single string template) or `chat` (message sequence with roles)                  |
| **Created**      | Timestamp of the most recent version                                                     |
| **Observations** | Count of traced generations that reference this prompt                                   |
| **Tags**         | Keywords for filtering and categorization                                                |

***

### Prompt Types

InteractiveAI supports two prompt types, **Text Prompts** and **Chat Prompts**, each suited to different use cases.

{% tabs %}
{% tab title="Text Prompts" %}
Text prompts are **single-string templates** ideal for completion-style tasks, simple instructions, or any scenario where you need a single block of text with variable placeholders.

```python
interactiveai.create_prompt(
    name="policy-check",
    type="text",
    prompt=(
        "You are a compliance reviewer.\n\n"
        "Policy excerpt:\n"
        "{{policy}}\n\n"
        "Content to review:\n"
        "{{content}}\n\n"
        "Return:\n"
        "- Verdict: compliant | needs_changes | non_compliant\n"
        "- Issues: bullet list\n"
        "- Proposed edits: concrete rewrite suggestions"
    ),
    labels=["production"],
)
```

In the UI, text prompts are displayed as a single system message.

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FJU1Oi1vU2sKoQIYTs5tS%2Fimage.png?alt=media&#x26;token=4b31f91f-3f33-411d-bf20-284be3dc7667" alt=""><figcaption></figcaption></figure></div>
{% endtab %}

{% tab title="Chat Prompts" %}
Chat prompts define **an ordered sequence of messages** with roles (`system`, `user`, `assistant`). Use chat prompts when working with conversational models or when you need to establish context through multi-turn examples.

```python
interactiveai.create_prompt(
    name="compliance-review-chat",
    type="chat",
    prompt=[
        {"role": "system", "content": "You are a compliance reviewer."},
        {
            "role": "user",
            "content": (
                "Check this content against the policy excerpt.\n\n"
                "Policy excerpt: {{policy_excerpt}}\n"
                "Content: {{content}}\n\n"
                "Start your answer with one word: compliant, needs_changes, or non_compliant."
            ),
        },
    ],
    labels=["production"],
)
```

In the UI, chat prompts display each message in a structured view with the role clearly indicated, making it easy to review the conversation flow.

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FwNbSRfeaGM7a2dst6P5f%2Fimage.png?alt=media&#x26;token=fb7993b4-2cb1-406b-aa9c-413082ce46ae" alt=""><figcaption></figcaption></figure></div>
{% endtab %}
{% endtabs %}

### Variables

Use `{{variable_name}}` syntax to insert **dynamic values** into your prompts. Variables must contain only alphabetical characters and underscores.

When you define a prompt with variables, the InteractiveAI system automatically detects and displays them in the Variables section of the prompt detail view. At runtime, your application provides values for these variables when fetching and compiling the prompt.

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2F8btVbR1dp0dZJeqgnCj7%2Fimage.png?alt=media&#x26;token=a3b9cf40-07e8-45b8-84db-d3cc4629e56a" alt=""><figcaption></figcaption></figure></div>

**Variable naming rules:**

* Use only letters (a-z, A-Z) and underscores
* No numbers, spaces, or special characters
* Case-sensitive: `{{Policy}}` and `{{policy}}` are different variables
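
The detection and substitution behavior described above can be sketched in a few lines of standard-library Python. This is an illustration of the concept only, not the SDK's actual implementation:

```python
import re

# Matches {{name}} where name contains only letters and underscores,
# mirroring the variable naming rules above. Matching is case-sensitive.
VARIABLE_PATTERN = re.compile(r"\{\{([A-Za-z_]+)\}\}")

def extract_variables(template: str) -> list[str]:
    """Return the variable names found in a prompt template, in order."""
    return VARIABLE_PATTERN.findall(template)

def compile_template(template: str, **values: str) -> str:
    """Substitute the provided values for each {{variable}} placeholder."""
    return VARIABLE_PATTERN.sub(lambda m: values[m.group(1)], template)

template = "Policy excerpt:\n{{policy}}\n\nContent to review:\n{{content}}"
print(extract_variables(template))  # ['policy', 'content']
print(compile_template(template, policy="No PII in logs.", content="..."))
```

In the real workflow, detection happens when you save the prompt (the Variables section of the detail view) and substitution happens at runtime when your application fetches and compiles it.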

***

### Creating Prompts

{% tabs %}
{% tab title="Via InteractiveAI Platform" %}

1. Navigate to **Orchestration → Prompts**
2. Click **+ New Prompt**
3. Configure the prompt:
   * **Name**: Identifier for the prompt. Use `/` for folder organization.
   * **Type**: Select `Text` or `Chat`
   * **Prompt**: Define your template with `{{variable}}` placeholders
   * **Config**: Optional JSON for LLM parameters, function definitions, or metadata
   * **Labels**: Assign labels to control deployment
   * **Commit Message**: Describe the changes for version history
4. Click **Create**
   {% endtab %}

{% tab title="Via InteractiveAI SDK" %}
Use the `create_prompt()` method to create prompts programmatically:

```python
# Create a text prompt
interactiveai.create_prompt(
    name="document-summarizer",
    type="text",
    prompt=(
        "Summarize the following document in {{length}} sentences.\n\n"
        "Document:\n{{document}}\n\n"
        "Summary:"
    ),
    labels=["production"],
)

# Create a chat prompt
interactiveai.create_prompt(
    name="customer-support/greeting",
    type="chat",
    prompt=[
        {"role": "system", "content": "You are a helpful customer support agent for {{company_name}}."},
        {"role": "user", "content": "{{customer_message}}"},
    ],
    labels=["staging"],
)
```

If you call `create_prompt()` with a name that already exists, the SDK creates a new version of that prompt rather than failing. This lets you use the same code for both initial creation and subsequent updates: modify the `prompt` content and run it again to create a new version. Existing versions are immutable and cannot be modified.
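
The create-or-version semantics can be modeled with a minimal in-memory sketch (purely hypothetical, to illustrate the append-only behavior described above):

```python
# Hypothetical model of create_prompt() versioning: calling it with an
# existing name appends a new immutable version instead of raising.
store: dict[str, list[str]] = {}

def create_prompt(name: str, prompt: str) -> int:
    versions = store.setdefault(name, [])
    versions.append(prompt)   # versions are append-only, never edited
    return len(versions)      # version numbers start at 1

assert create_prompt("document-summarizer", "Summarize in 3 sentences.") == 1
assert create_prompt("document-summarizer", "Summarize in 5 sentences.") == 2
```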
{% endtab %}
{% endtabs %}

***

### Labels

Labels control which version your application fetches at runtime. They act as **pointers** that can be moved between versions without changing your application code.

#### Default Labels

| Label        | Description                                                    |
| ------------ | -------------------------------------------------------------- |
| `production` | The live version fetched by default when no label is specified |
| `latest`     | Automatically assigned to the most recent version              |

#### Custom Labels

Create custom labels for your workflow needs. Common patterns include:

* **Environment-based**: `default`, `development`, `qa`
* **Feature-based**: `experiment-a`, `new-tone`, `v2-test`
* **Team-based**: `review-pending`, `approved`

To assign labels to a version:

{% tabs %}
{% tab title="Via InteractiveAI Platform" %}

1. Open the prompt detail view
2. Click the label icon next to the version
3. Select existing labels or click **+ Add custom label** to create new ones
4. Click **Save**

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2F61IyeJCaDNL1qBEQxazf%2FClipboard-20260204-171209-164.gif?alt=media&#x26;token=ccec98f7-e3d6-4b1b-a71f-f36cbada8373" alt=""><figcaption></figcaption></figure></div>
{% endtab %}

{% tab title="Via InteractiveAI SDK" %}
Use `update_prompt()` to assign labels programmatically:

```python
interactiveai.update_prompt(
    name="policy-check",
    version=1,
    new_labels=["label-1", "label-2"],  # Assign these labels to version 1
)
```

{% endtab %}
{% endtabs %}

{% hint style="info" %}
Labels can be marked as **Protected Prompt Labels** to prevent accidental changes to critical prompts. For details, see [Protected Prompt Labels](https://docs.interactive.ai/settings/protected-prompt-labels).
{% endhint %}

***

### Fetching Prompts

Retrieve prompts in your application using `get_prompt()`:

```python
# Get production version (default)
prompt = interactiveai.get_prompt("policy-check")

# Get specific version by number
prompt = interactiveai.get_prompt("policy-check", version=1)

# Get specific label
prompt = interactiveai.get_prompt("policy-check", label="staging")

# Get a chat prompt (must specify type="chat")
prompt = interactiveai.get_prompt("compliance-review-chat", type="chat")
```

When fetching a chat prompt, you must specify `type="chat"`. If omitted, the SDK defaults to `type="text"` and returns a `TextPromptClient`.
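
When a chat prompt is compiled, variable substitution applies to each message's content independently. A standalone sketch of that behavior (an illustration, not the SDK's internal code):

```python
import re

VARIABLE_PATTERN = re.compile(r"\{\{([A-Za-z_]+)\}\}")

def compile_chat(messages: list[dict], **values: str) -> list[dict]:
    """Substitute {{variable}} placeholders in every message's content."""
    return [
        {
            "role": m["role"],
            "content": VARIABLE_PATTERN.sub(lambda x: values[x.group(1)], m["content"]),
        }
        for m in messages
    ]

messages = [
    {"role": "system", "content": "You are a compliance reviewer."},
    {"role": "user", "content": "Policy excerpt: {{policy_excerpt}}\nContent: {{content}}"},
]
compiled = compile_chat(messages, policy_excerpt="No PII.", content="Email: x@y.z")
print(compiled[1]["content"])
```

The compiled message list is in the `role`/`content` shape most chat completion APIs expect, so it can typically be passed straight to your model client.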

***

### Prompt References

Link prompts together using the **+ Add prompt reference** feature. This allows you to compose complex prompts from reusable components.

When you reference another prompt:

* The referenced prompt's content is included at that position
* Changes to the referenced prompt automatically propagate
* Only text prompts can be referenced

This is useful for maintaining consistent instructions (e.g., output format, tone guidelines) across multiple prompts.
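
The inclusion-and-propagation behavior can be illustrated with a small resolver sketch. The `@@ref:...@@` marker below is purely hypothetical (the actual syntax is managed by the **+ Add prompt reference** feature); the point is that referenced content is expanded at fetch time, so edits to the shared prompt propagate automatically:

```python
import re

# Hypothetical marker syntax, for illustration only.
REF_PATTERN = re.compile(r"@@ref:([\w/-]+)@@")

prompts = {
    "shared/output-format": "Respond in JSON with keys verdict and issues.",
    "policy-check": "You are a compliance reviewer.\n@@ref:shared/output-format@@",
}

def resolve(name: str) -> str:
    """Expand references recursively (only text prompts can be referenced)."""
    return REF_PATTERN.sub(lambda m: resolve(m.group(1)), prompts[name])

print(resolve("policy-check"))
# Editing "shared/output-format" changes the resolved text of every prompt
# that references it, without touching "policy-check" itself.
```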

***

### Prompt Detail View

Click any prompt to open its detail view, which contains four main tabs:

#### Prompt Tab

Displays the full prompt content with syntax highlighting for variables. For chat prompts, each message appears in a structured card showing the role and content.

#### Config Tab

Shows the arbitrary JSON configuration attached to the prompt version. Use this for:

* LLM parameters (temperature, max\_tokens)
* Function/tool definitions
* Custom metadata for your application
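
A config payload might look like the following. The field names inside the JSON are up to your application; `temperature`, `max_tokens`, and the `tools` shape below are illustrative, not a schema the platform enforces:

```python
import json

# Illustrative config for a prompt version: model parameters, a tool
# definition, and application-specific metadata in one JSON object.
config = {
    "temperature": 0.2,
    "max_tokens": 512,
    "tools": [
        {"name": "lookup_policy", "description": "Fetch a policy excerpt by ID."}
    ],
    "metadata": {"owner": "compliance-team"},
}
print(json.dumps(config, indent=2))
```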

#### Linked Generations Tab

Displays all observations (LLM generations) that used this prompt. Linked generations are tracked automatically when your SDK calls reference the prompt, making this tab the place to see how each prompt version behaves in production.

#### Use Prompt Tab

Provides ready-to-use code snippets for fetching the prompt in your application:

```python
from interactiveai import Interactive

# Initialize Interactive client
interactiveai = Interactive()

# Get production prompt
prompt = interactiveai.get_prompt("compliance-review-chat")

# Get by label
# You can use as many labels as you'd like to identify different deployment targets
prompt = interactiveai.get_prompt("compliance-review-chat", label="production")
prompt = interactiveai.get_prompt("compliance-review-chat", label="latest")

# Get by version number; usually not recommended, as it requires a code change to deploy new prompt versions
prompt = interactiveai.get_prompt("compliance-review-chat", version=1)
```

### Translating Content

Click the translate icon in the top-right corner of the prompt detail view to translate the prompt content into your preferred language. Each section displays a "Translated to \[language]" indicator when translated. Click the icon again to revert.

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FSCqBAodJ2sgtquk39hhc%2Fimage.png?alt=media&#x26;token=f9cb6f87-9b0b-4b42-bb54-f6e69bcf1056" alt=""><figcaption></figcaption></figure></div>

***

### Metrics View

Click **Metrics** in the top-right corner of the prompt detail view to see performance data across all versions:

| Metric                   | Description                                    |
| ------------------------ | ---------------------------------------------- |
| **Version**              | Version number                                 |
| **Labels**               | Labels assigned to this version                |
| **Median Latency**       | Typical response time                          |
| **Median Input Tokens**  | Typical input token count                      |
| **Median Output Tokens** | Typical output token count                     |
| **Median Cost**          | Typical cost per generation                    |
| **Generations**          | Total number of times this version was used    |
| **Last Used**            | Most recent generation timestamp               |
| **First Used**           | When this version was first used in production |

Use this view to compare performance across versions and identify regressions before promoting to production.
