# Prompts

## Overview

Fetch, create, and manage prompts with built-in caching.

`get_prompt` and `create_prompt` are the generic methods for `text` and `chat` prompts. For structured prompt types, use the dedicated convenience methods instead: [Routines](https://docs.interactive.ai/sdk/routines), [Policies](https://docs.interactive.ai/sdk/policies), [Variables](https://docs.interactive.ai/sdk/variables), [Glossary](https://docs.interactive.ai/sdk/glossary), and [Macros](https://docs.interactive.ai/sdk/macros).

***

## `get_prompt` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L3048)

Get a prompt.

This method first attempts to fetch the requested prompt from the local cache. If the prompt is not found in the cache, or if the cached prompt has expired, it fetches the prompt from the server and updates the cache. If that fetch fails and an expired prompt exists in the cache, the expired prompt is returned as a fallback.

```python
get_prompt(
    name: str,
    *,
    version: int | None = None,
    label: str | None = None,
    type: Literal['chat', 'text', 'routine', 'policy', 'variable', 'glossary', 'macro'] = 'text',
    cache_ttl_seconds: int | None = None,
    fallback: Union[List[ChatMessageDict], None, str] = None,
    max_retries: int | None = None,
    fetch_timeout_seconds: int | None = None,
) -> Union[TextPromptClient, ChatPromptClient, RoutinePromptClient, PolicyPromptClient, VariablePromptClient, GlossaryPromptClient, MacroPromptClient]
```

**Parameters**

* `name` — The name of the prompt to retrieve.
* `version` — The version of the prompt to retrieve. If neither version nor label is specified, the version carrying the `production` label is returned. Specify either version or label, not both.
* `label` — The label of the prompt to retrieve. If neither version nor label is specified, the version carrying the `production` label is returned. Specify either version or label, not both.
* `type` — The type of the prompt to retrieve. Defaults to `'text'`. Structured types: `'routine'`, `'policy'`, `'variable'`, `'glossary'`, `'macro'`.
* `cache_ttl_seconds` — Time-to-live in seconds for caching the prompt. Must be specified as a keyword argument.
* `fallback` — The fallback content to return if fetching the prompt fails: a string for `text` prompts, or a list of `ChatMessageDict` for `chat` prompts. Important on the first call, when no cached prompt is available yet. Follows InteractiveAI prompt formatting with double curly braces for variables. Defaults to None.
* `max_retries` — The maximum number of retries in case of API/network errors. Defaults to 2, with a maximum of 4. Retries use exponential backoff with a maximum delay of 10 seconds.
* `fetch_timeout_seconds` — The timeout in seconds for fetching the prompt. Defaults to the timeout configured on the SDK, which is 5 seconds by default.

**Returns**

The prompt object, retrieved from the cache or fetched from the server if not cached or expired. The return type depends on the `type` argument:

* TextPromptClient, if type argument is 'text'.
* ChatPromptClient, if type argument is 'chat'.
* RoutinePromptClient, if type argument is 'routine'.
* PolicyPromptClient, if type argument is 'policy'.
* VariablePromptClient, if type argument is 'variable'.
* GlossaryPromptClient, if type argument is 'glossary'.
* MacroPromptClient, if type argument is 'macro'.
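The lookup order described above (fresh cache hit, then server fetch, then an expired cache entry as a last resort) can be sketched in plain Python. The class below is an illustrative simplification, not the SDK's actual implementation:

```python
import time
from typing import Any, Callable, Dict, Optional, Tuple


class PromptCache:
    """Illustrative TTL cache with expired-entry fallback (not the SDK's real code)."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store: Dict[str, Tuple[Any, float]] = {}  # name -> (prompt, fetched_at)

    def get(self, name: str, fetch: Callable[[], Any]) -> Optional[Any]:
        entry = self._store.get(name)
        now = time.monotonic()
        # 1. Fresh cache hit: return immediately, no network call.
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]
        # 2. Cache miss or expired entry: try the server.
        try:
            prompt = fetch()
            self._store[name] = (prompt, now)
            return prompt
        except Exception:
            # 3. Fetch failed: fall back to the expired entry, if one exists.
            return entry[0] if entry is not None else None


cache = PromptCache(ttl_seconds=0.0)  # ttl 0 so every entry expires immediately
cache.get("greeting", lambda: "Hello {{name}}")  # fetches and caches
stale = cache.get("greeting", lambda: 1 / 0)     # fetch fails, expired entry returned
```

This is also why `fallback` matters on the very first call: at that point step 3 has no expired entry to fall back to.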

***

## `list_prompts` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L3275)

List prompts with their metadata, versions, and labels.

```python
list_prompts(
    *,
    name: str | None = None,
    label: str | None = None,
    tag: str | None = None,
    page: int | None = None,
    limit: int | None = None,
    type: str | None = None,
) -> PromptMetaListResponse
```

**Parameters**

* `name` — Optional name filter to narrow results.
* `label` — Optional label filter (e.g. `"production"`).
* `tag` — Optional tag filter.
* `page` — Page number, starts at 1.
* `limit` — Number of items per page.
* `type` — Optional type filter (e.g. `"routine"`, `"policy"`, `"text"`, `"chat"`).

**Returns**

`PromptMetaListResponse` containing:

* `data` — `List[PromptMeta]`, each with: `name: str`, `type: str` (`"text"`, `"chat"`, `"routine"`, `"policy"`, `"variable"`, `"glossary"`, or `"macro"`), `versions: List[int]` (version numbers, e.g. `[1, 2, 3]`, not objects), `labels: List[str]`, `tags: List[str]`, `last_updated_at: datetime`, `last_config: Any` (config from the most recent version matching the filters).
* `meta` — `MetaResponse` with `page`, `limit`, `total_items`, `total_pages`.

**Example**

```python
result = client.list_prompts(label="production", limit=20)
for prompt in result.data:
    print(prompt.name, prompt.type, prompt.versions)
```

**Fetching content from list results**

`list_prompts` returns metadata only. To get the actual prompt content, use `get_prompt` with the `type` from the metadata:

```python
result = client.list_prompts()
for meta in result.data:
    prompt = client.get_prompt(meta.name, type=meta.type)
```

***

## `create_prompt` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L3196)

Create a new prompt in InteractiveAI.

```python
create_prompt(
    *,
    name: str,
    prompt: Union[str, List[Union[ChatMessageDict, ChatMessageWithPlaceholdersDict_Message, ChatMessageWithPlaceholdersDict_Placeholder]]],
    labels: List[str] | None = None,
    tags: List[str] | None = None,
    type: Literal['chat', 'text', 'routine', 'policy', 'variable', 'glossary', 'macro'] | None = 'text',
    config: Any | None = None,
    commit_message: str | None = None,
) -> Union[TextPromptClient, ChatPromptClient, RoutinePromptClient, PolicyPromptClient, VariablePromptClient, GlossaryPromptClient, MacroPromptClient]
```

**Parameters**

* `name` — The name of the prompt to be created.
* `prompt` — The content of the prompt to be created: a string for `text` prompts, or a list of message dicts for `chat` prompts.
* `labels` — The labels of the prompt. Defaults to None. To create a default-served prompt, add the `production` label.
* `tags` — The tags of the prompt. Applied to all versions of the prompt. Defaults to None.
* `type` — The type of the prompt to be created: `'chat'`, `'text'`, `'routine'`, `'policy'`, `'variable'`, `'glossary'`, or `'macro'`. Defaults to `'text'`.
* `config` — Additional structured data to be saved with the prompt. Defaults to None.
* `commit_message` — Optional string describing the change.

**Returns**

The newly created prompt object of type

* TextPromptClient, if type argument is 'text'.
* ChatPromptClient, if type argument is 'chat'.
* RoutinePromptClient, if type argument is 'routine'.
* PolicyPromptClient, if type argument is 'policy'.
* VariablePromptClient, if type argument is 'variable'.
* GlossaryPromptClient, if type argument is 'glossary'.
* MacroPromptClient, if type argument is 'macro'.
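For `chat` prompts, the `prompt` argument is a list of message dicts using InteractiveAI's double-curly-brace variable syntax. The sketch below builds such a payload and substitutes variables locally; the `compile_messages` helper is purely illustrative (it is not part of the SDK, which performs compilation via the returned prompt client), and the `role`/`content` keys are assumed to match `ChatMessageDict`:

```python
import re
from typing import Dict, List

# A chat prompt payload: plain dicts with role/content, variables in {{...}}.
chat_messages: List[Dict[str, str]] = [
    {"role": "system", "content": "You are a {{tone}} support agent."},
    {"role": "user", "content": "{{question}}"},
]


def compile_messages(messages: List[Dict[str, str]], variables: Dict[str, str]) -> List[Dict[str, str]]:
    """Substitute {{name}} placeholders in message contents; illustrative only."""
    def sub(text: str) -> str:
        # Unknown variables are left as-is rather than raising.
        return re.sub(r"\{\{(\w+)\}\}", lambda m: variables.get(m.group(1), m.group(0)), text)
    return [{**m, "content": sub(m["content"])} for m in messages]


compiled = compile_messages(
    chat_messages, {"tone": "friendly", "question": "Where is my order?"}
)
```

A payload shaped like `chat_messages` is what you would pass as `prompt=` together with `type='chat'`.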

***

## `update_prompt` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L3239)

Update an existing prompt version in InteractiveAI. The InteractiveAI SDK prompt cache is invalidated for all prompts with the specified name.

```python
update_prompt(
    *,
    name: str,
    version: int,
    new_labels: List[str] | None = None,
    type: Literal['chat', 'text', 'routine', 'policy', 'variable', 'glossary', 'macro'] | None = None,
) -> Any
```

**Parameters**

* `name` — The name of the prompt to update.
* `version` — The version number of the prompt to update.
* `new_labels` — New labels to assign to the prompt version. Labels are unique across versions: assigning a label to this version removes it from any other version that carries it. The "latest" label is reserved and managed by InteractiveAI. Defaults to None.
* `type` — Prompt type (e.g. `"routine"`, `"policy"`). When set, routes through the type-specific endpoint.

**Returns**

The updated prompt object returned by the InteractiveAI API.
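Because labels are unique across versions, promoting a version by assigning it an existing label implicitly removes that label from the version that held it before. A minimal sketch of that reassignment rule (illustrative, not the server's implementation):

```python
from typing import Dict, List


def assign_labels(
    labels_by_version: Dict[int, List[str]], version: int, new_labels: List[str]
) -> Dict[int, List[str]]:
    """Move each label in new_labels onto `version`, removing it everywhere else."""
    # Strip the incoming labels from every version first (uniqueness rule).
    result = {v: [l for l in ls if l not in new_labels] for v, ls in labels_by_version.items()}
    result.setdefault(version, [])
    result[version] = sorted(set(result[version]) | set(new_labels))
    return result


# Promote version 3 to production; version 2 loses the label.
state = {1: ["staging"], 2: ["production"]}
promoted = assign_labels(state, 3, ["production"])
```

This is why a single `update_prompt` call is enough to switch which version is default-served: no second call is needed to un-label the previous production version.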

***

## `delete_prompt` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L3322)

Delete prompt versions and invalidate the local prompt cache.

If neither `version` nor `label` is specified, all versions of the prompt are deleted.

```python
delete_prompt(
    name: str,
    *,
    label: str | None = None,
    version: int | None = None,
) -> None
```

**Parameters**

* `name` — The name of the prompt to delete.
* `label` — If set, deletes all versions carrying this label.
* `version` — If set, deletes only this specific version.

**Example**

```python
# Delete one version
client.delete_prompt("my-prompt", version=3)

# Delete all versions labelled "staging"
client.delete_prompt("my-prompt", label="staging")

# Delete the entire prompt
client.delete_prompt("my-prompt")
```

***

## `clear_prompt_cache` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L3266)

Clear the entire prompt cache, removing all cached prompts.

This method is useful when you want to force a complete refresh of all cached prompts, for example after major updates or when you need to ensure the latest versions are fetched from the server.

```python
clear_prompt_cache() -> None
```

***

## `validate_prompt` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L4050)

Validate prompt content against the server.

```python
validate_prompt(
    content: str,
    prompt_type: str,
    *,
    skip_schema: bool = False,
) -> Dict[str, Any]
```

**Parameters**

* `content` — The prompt content string (YAML or JSON).
* `prompt_type` — The prompt type (e.g. `"routine"`, `"policy"`).
* `skip_schema` — If `True`, skip JSON-schema validation.

**Returns**

Validation result dict from the server.
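Since `content` is a YAML or JSON string, a cheap local parse before the server round-trip can catch plain syntax errors early. The helper below is a hypothetical pre-check using only the standard library (so JSON only; YAML would need a third-party parser), and the server's schema validation still applies afterwards:

```python
import json
from typing import Any, Dict


def precheck_json_content(content: str) -> Dict[str, Any]:
    """Local syntax check for JSON prompt content; illustrative, not part of the SDK."""
    try:
        json.loads(content)
        return {"valid": True, "errors": []}
    except json.JSONDecodeError as exc:
        return {"valid": False, "errors": [f"line {exc.lineno}: {exc.msg}"]}


ok = precheck_json_content('{"title": "Refund routine", "steps": []}')
bad = precheck_json_content('{"title": }')
```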

***

## Structured prompt types

Each structured type has dedicated convenience methods (e.g. `get_routine`, `create_policy`). Use those instead of passing `type=` to the generic methods.

| Type       | Convenience methods                                                                        | Docs                                                   | Content accessors                                 |
| ---------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------ | ------------------------------------------------- |
| `routine`  | `get_routine`, `create_routine`, `update_routine`, `list_routines`, `delete_routine`       | [Routines](https://docs.interactive.ai/sdk/routines)   | `.title`, `.description`, `.conditions`, `.steps` |
| `policy`   | `get_policy`, `create_policy`, `update_policy`, `list_policies`, `delete_policy`           | [Policies](https://docs.interactive.ai/sdk/policies)   | `.entries` → `List[PolicyEntry]`                  |
| `variable` | `get_variable`, `create_variable`, `update_variable`, `list_variables`, `delete_variable`  | [Variables](https://docs.interactive.ai/sdk/variables) | `.entries` → `List[VariableEntry]`                |
| `glossary` | `get_glossary`, `create_glossary`, `update_glossary`, `list_glossaries`, `delete_glossary` | [Glossary](https://docs.interactive.ai/sdk/glossary)   | `.entries` → `List[GlossaryEntry]`                |
| `macro`    | `get_macro`, `create_macro`, `update_macro`, `list_macros`, `delete_macro`                 | [Macros](https://docs.interactive.ai/sdk/macros)       | `.raw_text` → `str`                               |

All structured types share the same base attributes (`name`, `version`, `labels`, `tags`, `config`, `is_fallback`, `raw_content`) and have no template variables (`.variables` returns `[]`).
