Prompts
Prompts are the instructions that drive your AI system's behavior. The Prompt Repository provides infrastructure for managing these instructions independently of your application code: version prompts, test changes, and make updates without redeploying your application.
Why Prompt Management Matters
Hardcoding prompts in application code creates friction. Every change requires a code deployment, making iteration slow and risky. The Prompt Repository solves this by decoupling prompt content from application logic:
Iterate without deploying: Update prompts instantly through the UI or SDK without touching your codebase.
Version everything: Every change creates a new version with full history, enabling rollback and comparison.
Control releases with labels: Use labels like production and staging to control which version your application fetches.
Track usage: See exactly which prompts are being used in production through linked generations.
Test before shipping: Use the Playground to validate changes before promoting to production.
Prompts Overview
The Prompt Repository is your central library for all prompts.

This view lists all prompts in your project with the following properties:
Name
Prompt identifier. Use / notation for folder organization (e.g., support/escalation)
Versions
Number of versions saved for this prompt
Type
text (single string template) or chat (message sequence with roles)
Created
Timestamp of the most recent version
Observations
Count of traced generations that reference this prompt
Tags
Keywords for filtering and categorization
Prompt Types
InteractiveAI supports two prompt types: Text Prompts and Chat Prompts, each suited to different use cases.
Text prompts are single-string templates ideal for completion-style tasks, simple instructions, or any scenario where you need a single block of text with variable placeholders.
In the UI, text prompts are displayed as a single system message.

Chat prompts define an ordered sequence of messages with roles (system, user, assistant). Use chat prompts when working with conversational models or when you need to establish context through multi-turn examples.
In the UI, chat prompts display each message in a structured view with the role clearly indicated, making it easy to review the conversation flow.
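As a sketch, a chat prompt's message sequence can be represented as a list of role/content pairs. The exact schema used by the InteractiveAI SDK is an assumption here; this is illustrative only:

```python
# Illustrative only: the message schema is assumed, not taken from
# the official InteractiveAI SDK reference.
chat_prompt = [
    {"role": "system", "content": "You are a support agent for {{product}}."},
    {"role": "user", "content": "Example question about {{product}}."},
    {"role": "assistant", "content": "Example answer establishing the desired tone."},
]

# Each message carries a role (system, user, assistant) and a content template.
roles = [m["role"] for m in chat_prompt]
print(roles)  # ['system', 'user', 'assistant']
```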

Variables
Use {{variable_name}} syntax to insert dynamic values into your prompts. Variables must contain only alphabetical characters and underscores.
When you define a prompt with variables, the InteractiveAI system automatically detects and displays them in the Variables section of the prompt detail view. At runtime, your application provides values for these variables when fetching and compiling the prompt.

Variable naming rules:
Use only letters (a-z, A-Z) and underscores
No numbers, spaces, or special characters
Case-sensitive:
{{Policy}} and {{policy}} are different variables
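The variable syntax and naming rule above can be sketched with a small regex. This is an illustrative re-implementation, not the SDK's actual parser:

```python
import re

# Variables: letters and underscores only, wrapped in double braces.
VAR_PATTERN = re.compile(r"\{\{([A-Za-z_]+)\}\}")

def detect_variables(template: str) -> list[str]:
    """Return variable names in order of first appearance."""
    seen = []
    for name in VAR_PATTERN.findall(template):
        if name not in seen:
            seen.append(name)
    return seen

def compile_prompt(template: str, **values: str) -> str:
    """Substitute provided values; unknown variables are left untouched."""
    return VAR_PATTERN.sub(lambda m: values.get(m.group(1), m.group(0)), template)

template = "Summarize the {{policy}} policy for {{audience}}."
print(detect_variables(template))  # ['policy', 'audience']
print(compile_prompt(template, policy="refund", audience="new hires"))
```

Note that because variable names are case-sensitive, `{{Policy}}` would be detected as a separate variable from `{{policy}}`.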
Creating Prompts
Navigate to Orchestration → Prompts
Click + New Prompt
Configure the prompt:
Name: Identifier for the prompt. Use / for folder organization.
Type: Select Text or Chat
Prompt: Define your template with {{variable}} placeholders
Config: Optional JSON for LLM parameters, function definitions, or metadata
Labels: Assign labels to control deployment
Commit Message: Describe the changes for version history
Click Create

Use the create_prompt() method to create prompts programmatically:
If you call create_prompt() with a name that already exists, the SDK creates a new version of that prompt rather than failing. This lets you use the same code for both initial creation and subsequent updates: modify the prompt content and run it again to create a new version. Existing versions are immutable and cannot be modified.
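The upsert behavior described above can be modeled with a minimal in-memory stand-in. The field and parameter names below are illustrative, not the documented create_prompt() signature:

```python
# Minimal in-memory stand-in mimicking the documented behavior:
# calling create_prompt() with an existing name appends a new
# immutable version instead of failing.
class PromptStore:
    def __init__(self):
        self._prompts = {}  # name -> list of version records

    def create_prompt(self, name, prompt, type="text",
                      labels=None, commit_message=""):
        versions = self._prompts.setdefault(name, [])
        versions.append({
            "version": len(versions) + 1,
            "prompt": prompt,
            "type": type,
            "labels": list(labels or []),
            "commit_message": commit_message,
        })
        return versions[-1]

store = PromptStore()
v1 = store.create_prompt("support/escalation",
                         "Escalate if {{severity}} is high.",
                         labels=["staging"], commit_message="initial draft")
v2 = store.create_prompt("support/escalation",
                         "Escalate when {{severity}} is high or critical.",
                         commit_message="tighten wording")
print(v1["version"], v2["version"])  # 1 2
```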
Labels
Labels control which version your application fetches at runtime. They act as pointers that can be moved between versions without changing your application code.
Default Labels
production
The live version fetched by default when no label is specified
latest
Automatically assigned to the most recent version
Custom Labels
Create custom labels for your workflow needs. Common patterns include:
Environment-based: staging, development, qa
Feature-based: experiment-a, new-tone, v2-test
Team-based: review-pending, approved
To assign labels to a version:
Open the prompt detail view
Click the label icon next to the version
Select existing labels or click + Add custom label to create new ones
Click Save

Use update_prompt() to assign labels programmatically:
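Conceptually, labels are movable pointers from a label name to a version number. The sketch below models that behavior with plain data structures; the real update_prompt() signature is an assumption and not shown here:

```python
# Labels as movable pointers: reassigning a label points it at a
# different version without touching application code. This is an
# illustrative stand-in, not the SDK's update_prompt() implementation.
labels = {"production": 3, "latest": 5}

def assign_label(labels, label, version):
    """Point `label` at `version`, moving it if it already exists."""
    labels[label] = version
    return labels

assign_label(labels, "production", 5)  # promote version 5 to production
assign_label(labels, "staging", 6)     # create a new custom label
print(labels)  # {'production': 5, 'latest': 5, 'staging': 6}
```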
Labels can be marked as Protected Prompt Labels to prevent accidental changes to critical prompts. For more details, refer to Protected Prompt Labels.
Fetching Prompts
Retrieve prompts in your application using get_prompt():
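A typical fetch-and-compile flow resolves a label to a version, then fills in variables. The stand-in below illustrates those semantics; the real get_prompt() and compile signatures are assumptions:

```python
# Illustrative stand-in for the fetch path: resolve a label to a
# version, then compile variables. Not the actual SDK implementation.
PROMPTS = {
    "support/escalation": {
        "versions": {1: "Escalate if {{severity}} is high.",
                     2: "Escalate when {{severity}} is high or critical."},
        "labels": {"production": 1, "latest": 2},
    }
}

def get_prompt(name, label="production"):
    """Fetch the prompt version the given label points at."""
    entry = PROMPTS[name]
    return entry["versions"][entry["labels"][label]]

def compile_template(template, **values):
    """Substitute provided values into {{variable}} placeholders."""
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", value)
    return template

prompt = get_prompt("support/escalation")  # label defaults to production
print(compile_template(prompt, severity="critical"))
```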
Prompt References
Link prompts together using the + Add prompt reference feature. This allows you to compose complex prompts from reusable components.
When you reference another prompt:
The referenced prompt's content is included at that position
Changes to the referenced prompt automatically propagate
Only text prompts can be referenced
This is useful for maintaining consistent instructions (e.g., output format, tone guidelines) across multiple prompts.
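The composition semantics can be sketched as follows. The reference token used here (`<ref:name>`) is invented for this sketch; the actual internal representation is created through the UI and is not documented above:

```python
# Conceptual sketch of prompt references: a referenced text prompt's
# content is spliced in at its position at fetch time, so edits to a
# shared component propagate to every prompt that references it.
# The <ref:name> token is a hypothetical placeholder for illustration.
components = {"tone": "Answer concisely and cite the policy name."}

def resolve(template, components):
    """Expand each reference token with the referenced prompt's content."""
    for name, content in components.items():
        template = template.replace(f"<ref:{name}>", content)
    return template

escalation = "You handle refund escalations. <ref:tone>"
print(resolve(escalation, components))

# Updating the shared component changes every prompt that references it.
components["tone"] = "Answer concisely, cite the policy, and stay neutral."
print(resolve(escalation, components))
```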
Prompt Detail View
Click any prompt to open its detail view, which contains four main tabs:
Prompt Tab
Displays the full prompt content with syntax highlighting for variables. For chat prompts, each message appears in a structured card showing the role and content.
Config Tab
Shows the arbitrary JSON configuration attached to the prompt version. Use this for:
LLM parameters (temperature, max_tokens)
Function/tool definitions
Custom metadata for your application
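For example, a config attached to a prompt version might look like the fragment below. The field names shown are common LLM parameters; the schema is arbitrary JSON and entirely up to your application:

```json
{
  "model": "gpt-4o",
  "temperature": 0.2,
  "max_tokens": 512,
  "tools": [
    {
      "name": "lookup_policy",
      "description": "Fetch a policy document by name",
      "parameters": {
        "type": "object",
        "properties": {"policy": {"type": "string"}}
      }
    }
  ],
  "metadata": {"owner": "support-team"}
}
```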
Linked Generations Tab
Displays all observations (LLM generations) that used this prompt. Linked generations are tracked automatically when your SDK calls reference the prompt. Use this tab to understand how prompts perform in production.
Use Prompt Tab
Provides ready-to-use code snippets for fetching the prompt in your application:
Metrics View
Click Metrics in the top-right corner of the prompt detail view to see performance data across all versions:
Version
Version number
Labels
Labels assigned to this version
Median Latency
Typical response time
Median Input Tokens
Typical input token count
Median Output Tokens
Typical output token count
Median Cost
Typical cost per generation
Generations
Total number of times this version was used
Last Used
Most recent generation timestamp
First Used
When this version was first used in production
Use this view to compare performance across versions and identify regressions before promoting to production.