# Tracing

## Overview

Create and manage traces, spans, observations, generations, and events.

The `InteractiveAI` client wraps OpenTelemetry spans with InteractiveAI-specific metadata. Use `start_as_current_span` / `start_as_current_observation` for context-manager-based tracing and `start_span` / `start_observation` when you need manual `span.end()` control.
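
For example (a minimal sketch; `process_data` stands in for your own work):

```python
# Context-manager style: the span is ended automatically on exit
with interactiveai.start_as_current_span(name="job") as span:
    span.update(output=process_data())

# Manual style: you decide when the span ends
span = interactiveai.start_span(name="job")
try:
    span.update(output=process_data())
finally:
    span.end()
```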

***

## `start_span` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L361)

Create a new span for tracing a unit of work.

This method creates a new span but does not set it as the current span in the context. To create and use a span within a context, use start\_as\_current\_span().

The created span will be the child of the current span in the context.

```python
start_span(
    *,
    trace_context: TraceContext | None = None,
    name: str,
    input: Any | None = None,
    output: Any | None = None,
    metadata: Any | None = None,
    version: str | None = None,
    level: Literal['DEBUG', 'DEFAULT', 'WARNING', 'ERROR'] | None = None,
    status_message: str | None = None,
) -> InteractiveAISpan
```

**Parameters**

* `trace_context` — Optional context for connecting to an existing trace
* `name` — Name of the span (e.g., function or operation name)
* `input` — Input data for the operation (can be any JSON-serializable object)
* `output` — Output data from the operation (can be any JSON-serializable object)
* `metadata` — Additional metadata to associate with the span
* `version` — Version identifier for the code or component
* `level` — Importance level of the span (DEBUG, DEFAULT, WARNING, ERROR)
* `status_message` — Optional status message for the span

**Returns**

An InteractiveAISpan object that must be ended with .end() when the operation completes

**Example**

```python
span = interactiveai.start_span(name="process-data")
try:
    # Do work
    span.update(output="result")
finally:
    span.end()
```

***

## `start_as_current_span` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L415)

Create a new span and set it as the current span in a context manager.

This method creates a new span and sets it as the current span within a context manager. Use this method with a 'with' statement to automatically handle span lifecycle within a code block.

The created span will be the child of the current span in the context.

```python
start_as_current_span(
    *,
    trace_context: TraceContext | None = None,
    name: str,
    input: Any | None = None,
    output: Any | None = None,
    metadata: Any | None = None,
    version: str | None = None,
    level: Literal['DEBUG', 'DEFAULT', 'WARNING', 'ERROR'] | None = None,
    status_message: str | None = None,
    end_on_exit: bool | None = None,
) -> _AgnosticContextManager[InteractiveAISpan]
```

**Parameters**

* `trace_context` — Optional context for connecting to an existing trace
* `name` — Name of the span (e.g., function or operation name)
* `input` — Input data for the operation (can be any JSON-serializable object)
* `output` — Output data from the operation (can be any JSON-serializable object)
* `metadata` — Additional metadata to associate with the span
* `version` — Version identifier for the code or component
* `level` — Importance level of the span (DEBUG, DEFAULT, WARNING, ERROR)
* `status_message` — Optional status message for the span
* `end_on_exit` — Whether to end the span automatically when leaving the context manager. If False, the span must be manually ended to avoid memory leaks.

**Returns**

A context manager that yields an InteractiveAISpan

**Example**

```python
with interactiveai.start_as_current_span(name="process-query") as span:
    # Do work
    result = process_data()
    span.update(output=result)

    # Create a child span automatically
    with span.start_as_current_span(name="sub-operation") as child_span:
        # Do sub-operation work
        child_span.update(output="sub-result")
```

***

## `start_observation` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L623)

Create a new observation of the specified type.

This method creates a new observation but does not set it as the current span in the context. To create and use an observation within a context, use start\_as\_current\_observation().

```python
start_observation(
    *,
    trace_context: TraceContext | None = None,
    name: str,
    as_type: Union[Literal['generation', 'embedding'], Literal['span', 'agent', 'tool', 'chain', 'retriever', 'evaluator', 'guardrail']] = 'span',
    input: Any | None = None,
    output: Any | None = None,
    metadata: Any | None = None,
    version: str | None = None,
    level: Literal['DEBUG', 'DEFAULT', 'WARNING', 'ERROR'] | None = None,
    status_message: str | None = None,
    completion_start_time: datetime | None = None,
    model: str | None = None,
    model_parameters: Dict[str, Union[str, None, int, bool, List[str]]] | None = None,
    usage_details: Dict[str, int] | None = None,
    cost_details: Dict[str, float] | None = None,
    prompt: Union[TextPromptClient, ChatPromptClient, RoutinePromptClient, PolicyPromptClient, VariablePromptClient, GlossaryPromptClient, MacroPromptClient, None] = None,
) -> Union[InteractiveAISpan, InteractiveAIGeneration, InteractiveAIAgent, InteractiveAITool, InteractiveAIChain, InteractiveAIRetriever, InteractiveAIEvaluator, InteractiveAIEmbedding, InteractiveAIGuardrail]
```

**Parameters**

* `trace_context` — Optional context for connecting to an existing trace
* `name` — Name of the observation
* `as_type` — Type of observation to create (defaults to "span")
* `input` — Input data for the operation
* `output` — Output data from the operation
* `metadata` — Additional metadata to associate with the observation
* `version` — Version identifier for the code or component
* `level` — Importance level of the observation (DEBUG, DEFAULT, WARNING, ERROR)
* `status_message` — Optional status message for the observation
* `completion_start_time` — When the model started generating (for generation types)
* `model` — Name/identifier of the AI model used (for generation types)
* `model_parameters` — Parameters used for the model (for generation types)
* `usage_details` — Token usage information (for generation types)
* `cost_details` — Cost information (for generation types)
* `prompt` — Associated prompt template (for generation types)

**Returns**

An observation object of the appropriate type that must be ended with .end()
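
**Example**

A minimal sketch of manual observation handling; `llm.generate` stands in for your own model call.

```python
generation = interactiveai.start_observation(
    name="answer-generation",
    as_type="generation",
    model="gpt-4",
    input={"prompt": "Explain quantum computing"},
)
try:
    # Call model API
    response = llm.generate(...)
    generation.update(output=response.text)
finally:
    generation.end()
```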

***

## `start_as_current_observation` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L1146)

Create a new observation and set it as the current span in a context manager.

This method creates a new observation of the specified type and sets it as the current span within a context manager. Use this method with a 'with' statement to automatically handle the observation lifecycle within a code block.

The created observation will be the child of the current span in the context.

```python
start_as_current_observation(
    *,
    trace_context: TraceContext | None = None,
    name: str,
    as_type: Union[Literal['generation', 'embedding'], Literal['span', 'agent', 'tool', 'chain', 'retriever', 'evaluator', 'guardrail']] = 'span',
    input: Any | None = None,
    output: Any | None = None,
    metadata: Any | None = None,
    version: str | None = None,
    level: Literal['DEBUG', 'DEFAULT', 'WARNING', 'ERROR'] | None = None,
    status_message: str | None = None,
    completion_start_time: datetime | None = None,
    model: str | None = None,
    model_parameters: Dict[str, Union[str, None, int, bool, List[str]]] | None = None,
    usage_details: Dict[str, int] | None = None,
    cost_details: Dict[str, float] | None = None,
    prompt: Union[TextPromptClient, ChatPromptClient, RoutinePromptClient, PolicyPromptClient, VariablePromptClient, GlossaryPromptClient, MacroPromptClient, None] = None,
    end_on_exit: bool | None = None,
) -> Union[_AgnosticContextManager[InteractiveAIGeneration], _AgnosticContextManager[InteractiveAISpan], _AgnosticContextManager[InteractiveAIAgent], _AgnosticContextManager[InteractiveAITool], _AgnosticContextManager[InteractiveAIChain], _AgnosticContextManager[InteractiveAIRetriever], _AgnosticContextManager[InteractiveAIEvaluator], _AgnosticContextManager[InteractiveAIEmbedding], _AgnosticContextManager[InteractiveAIGuardrail]]
```

**Parameters**

* `trace_context` — Optional context for connecting to an existing trace
* `name` — Name of the observation (e.g., function or operation name)
* `as_type` — Type of observation to create (defaults to "span")
* `input` — Input data for the operation (can be any JSON-serializable object)
* `output` — Output data from the operation (can be any JSON-serializable object)
* `metadata` — Additional metadata to associate with the observation
* `version` — Version identifier for the code or component
* `level` — Importance level of the observation (DEBUG, DEFAULT, WARNING, ERROR)
* `status_message` — Optional status message for the observation
* `end_on_exit` — Whether to end the span automatically when leaving the context manager. If False, the span must be manually ended to avoid memory leaks.
* `completion_start_time` — When the model started generating the response
* `model` — Name/identifier of the AI model used (e.g., "gpt-4")
* `model_parameters` — Parameters used for the model (e.g., temperature, max\_tokens)
* `usage_details` — Token usage information (e.g., prompt\_tokens, completion\_tokens)
* `cost_details` — Cost information for the model call
* `prompt` — Associated prompt template from InteractiveAI prompt management

**Returns**

A context manager that yields the appropriate observation type based on as\_type

**Example**

```python
# Create a span
with interactiveai.start_as_current_observation(name="process-query", as_type="span") as span:
    # Do work
    result = process_data()
    span.update(output=result)

    # Create a child span automatically
    with span.start_as_current_span(name="sub-operation") as child_span:
        # Do sub-operation work
        child_span.update(output="sub-result")

# Create a tool observation
with interactiveai.start_as_current_observation(name="web-search", as_type="tool") as tool:
    # Do tool work
    results = search_web(query)
    tool.update(output=results)

# Create a generation observation
with interactiveai.start_as_current_observation(
    name="answer-generation",
    as_type="generation",
    model="gpt-4"
) as generation:
    # Generate answer
    response = llm.generate(...)
    generation.update(output=response)
```

***

## `start_generation` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L800)

> **Deprecated:** This method is deprecated and will be removed in a future version. Use start\_observation(as\_type='generation') instead.

Create a new generation span for model generations.

This method creates a specialized span for tracking model generations. It includes additional fields specific to model generations such as model name, token usage, and cost details.

The created generation span will be the child of the current span in the context.

```python
start_generation(
    *,
    trace_context: TraceContext | None = None,
    name: str,
    input: Any | None = None,
    output: Any | None = None,
    metadata: Any | None = None,
    version: str | None = None,
    level: Literal['DEBUG', 'DEFAULT', 'WARNING', 'ERROR'] | None = None,
    status_message: str | None = None,
    completion_start_time: datetime | None = None,
    model: str | None = None,
    model_parameters: Dict[str, Union[str, None, int, bool, List[str]]] | None = None,
    usage_details: Dict[str, int] | None = None,
    cost_details: Dict[str, float] | None = None,
    prompt: Union[TextPromptClient, ChatPromptClient, RoutinePromptClient, PolicyPromptClient, VariablePromptClient, GlossaryPromptClient, MacroPromptClient, None] = None,
) -> InteractiveAIGeneration
```

**Parameters**

* `trace_context` — Optional context for connecting to an existing trace
* `name` — Name of the generation operation
* `input` — Input data for the model (e.g., prompts)
* `output` — Output from the model (e.g., completions)
* `metadata` — Additional metadata to associate with the generation
* `version` — Version identifier for the model or component
* `level` — Importance level of the generation (DEBUG, DEFAULT, WARNING, ERROR)
* `status_message` — Optional status message for the generation
* `completion_start_time` — When the model started generating the response
* `model` — Name/identifier of the AI model used (e.g., "gpt-4")
* `model_parameters` — Parameters used for the model (e.g., temperature, max\_tokens)
* `usage_details` — Token usage information (e.g., prompt\_tokens, completion\_tokens)
* `cost_details` — Cost information for the model call
* `prompt` — Associated prompt template from InteractiveAI prompt management

**Returns**

An InteractiveAIGeneration object that must be ended with .end() when complete

**Example**

```python
generation = interactiveai.start_generation(
    name="answer-generation",
    model="gpt-4",
    input={"prompt": "Explain quantum computing"},
    model_parameters={"temperature": 0.7}
)
try:
    # Call model API
    response = llm.generate(...)

    generation.update(
        output=response.text,
        usage_details={
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens
        }
    )
finally:
    generation.end()
```

***

## `start_as_current_generation` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L895)

> **Deprecated:** This method is deprecated and will be removed in a future version. Use start\_as\_current\_observation(as\_type='generation') instead.

Create a new generation span and set it as the current span in a context manager.

This method creates a specialized span for model generations and sets it as the current span within a context manager. Use this method with a 'with' statement to automatically handle the generation span lifecycle within a code block.

The created generation span will be the child of the current span in the context.

```python
start_as_current_generation(
    *,
    trace_context: TraceContext | None = None,
    name: str,
    input: Any | None = None,
    output: Any | None = None,
    metadata: Any | None = None,
    version: str | None = None,
    level: Literal['DEBUG', 'DEFAULT', 'WARNING', 'ERROR'] | None = None,
    status_message: str | None = None,
    completion_start_time: datetime | None = None,
    model: str | None = None,
    model_parameters: Dict[str, Union[str, None, int, bool, List[str]]] | None = None,
    usage_details: Dict[str, int] | None = None,
    cost_details: Dict[str, float] | None = None,
    prompt: Union[TextPromptClient, ChatPromptClient, RoutinePromptClient, PolicyPromptClient, VariablePromptClient, GlossaryPromptClient, MacroPromptClient, None] = None,
    end_on_exit: bool | None = None,
) -> _AgnosticContextManager[InteractiveAIGeneration]
```

**Parameters**

* `trace_context` — Optional context for connecting to an existing trace
* `name` — Name of the generation operation
* `input` — Input data for the model (e.g., prompts)
* `output` — Output from the model (e.g., completions)
* `metadata` — Additional metadata to associate with the generation
* `version` — Version identifier for the model or component
* `level` — Importance level of the generation (DEBUG, DEFAULT, WARNING, ERROR)
* `status_message` — Optional status message for the generation
* `completion_start_time` — When the model started generating the response
* `model` — Name/identifier of the AI model used (e.g., "gpt-4")
* `model_parameters` — Parameters used for the model (e.g., temperature, max\_tokens)
* `usage_details` — Token usage information (e.g., prompt\_tokens, completion\_tokens)
* `cost_details` — Cost information for the model call
* `prompt` — Associated prompt template from InteractiveAI prompt management
* `end_on_exit` — Whether to end the span automatically when leaving the context manager. If False, the span must be manually ended to avoid memory leaks.

**Returns**

A context manager that yields an InteractiveAIGeneration

**Example**

```python
with interactiveai.start_as_current_generation(
    name="answer-generation",
    model="gpt-4",
    input={"prompt": "Explain quantum computing"}
) as generation:
    # Call model API
    response = llm.generate(...)

    # Update with results
    generation.update(
        output=response.text,
        usage_details={
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens
        }
    )
```

***

## `update_current_generation` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L1518)

Update the current active generation span with new information.

This method updates the current generation span in the active context with additional information. It's useful for adding output, usage stats, or other details that become available during or after model generation.

```python
update_current_generation(
    *,
    name: str | None = None,
    input: Any | None = None,
    output: Any | None = None,
    metadata: Any | None = None,
    version: str | None = None,
    level: Literal['DEBUG', 'DEFAULT', 'WARNING', 'ERROR'] | None = None,
    status_message: str | None = None,
    completion_start_time: datetime | None = None,
    model: str | None = None,
    model_parameters: Dict[str, Union[str, None, int, bool, List[str]]] | None = None,
    usage_details: Dict[str, int] | None = None,
    cost_details: Dict[str, float] | None = None,
    prompt: Union[TextPromptClient, ChatPromptClient, RoutinePromptClient, PolicyPromptClient, VariablePromptClient, GlossaryPromptClient, MacroPromptClient, None] = None,
) -> None
```

**Parameters**

* `name` — The generation name
* `input` — Updated input data for the model
* `output` — Output from the model (e.g., completions)
* `metadata` — Additional metadata to associate with the generation
* `version` — Version identifier for the model or component
* `level` — Importance level of the generation (DEBUG, DEFAULT, WARNING, ERROR)
* `status_message` — Optional status message for the generation
* `completion_start_time` — When the model started generating the response
* `model` — Name/identifier of the AI model used (e.g., "gpt-4")
* `model_parameters` — Parameters used for the model (e.g., temperature, max\_tokens)
* `usage_details` — Token usage information (e.g., prompt\_tokens, completion\_tokens)
* `cost_details` — Cost information for the model call
* `prompt` — Associated prompt template from InteractiveAI prompt management

**Example**

```python
with interactiveai.start_as_current_generation(name="answer-query") as generation:
    # Initial setup and API call
    response = llm.generate(...)

    # Update with results that weren't available at creation time
    interactiveai.update_current_generation(
        output=response.text,
        usage_details={
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens
        }
    )
```

***

## `update_current_span` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L1603)

Update the current active span with new information.

This method updates the current span in the active context with additional information. It's useful for adding outputs or metadata that become available during execution.

```python
update_current_span(
    *,
    name: str | None = None,
    input: Any | None = None,
    output: Any | None = None,
    metadata: Any | None = None,
    version: str | None = None,
    level: Literal['DEBUG', 'DEFAULT', 'WARNING', 'ERROR'] | None = None,
    status_message: str | None = None,
) -> None
```

**Parameters**

* `name` — The span name
* `input` — Updated input data for the operation
* `output` — Output data from the operation
* `metadata` — Additional metadata to associate with the span
* `version` — Version identifier for the code or component
* `level` — Importance level of the span (DEBUG, DEFAULT, WARNING, ERROR)
* `status_message` — Optional status message for the span

**Example**

```python
with interactiveai.start_as_current_span(name="process-data") as span:
    # Initial processing
    result = process_first_part()

    # Update with intermediate results
    interactiveai.update_current_span(metadata={"intermediate_result": result})

    # Continue processing
    final_result = process_second_part(result)

    # Final update
    interactiveai.update_current_span(output=final_result)
```

***

## `update_current_trace` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L1672)

Update the current trace with additional information.

```python
update_current_trace(
    *,
    name: str | None = None,
    user_id: str | None = None,
    session_id: str | None = None,
    version: str | None = None,
    input: Any | None = None,
    output: Any | None = None,
    metadata: Any | None = None,
    tags: List[str] | None = None,
    public: bool | None = None,
) -> None
```

**Parameters**

* `name` — Updated name for the InteractiveAI trace
* `user_id` — ID of the user who initiated the InteractiveAI trace
* `session_id` — Session identifier for grouping related InteractiveAI traces
* `version` — Version identifier for the application or service
* `input` — Input data for the overall InteractiveAI trace
* `output` — Output data from the overall InteractiveAI trace
* `metadata` — Additional metadata to associate with the InteractiveAI trace
* `tags` — List of tags to categorize the InteractiveAI trace
* `public` — Whether the InteractiveAI trace should be publicly accessible
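
**Example**

A minimal sketch; the user and session identifiers are placeholders, and `process_data` stands in for your own work.

```python
with interactiveai.start_as_current_span(name="handle-request") as span:
    # Attach trace-level attributes from within the active span
    interactiveai.update_current_trace(
        user_id="user-123",
        session_id="session-456",
        tags=["production"],
    )

    # Continue processing within the same trace
    result = process_data()
    span.update(output=result)
```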

***

## `create_event` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L1733)

Create a new InteractiveAI observation of type 'EVENT'.

The created InteractiveAI Event observation will be the child of the current span in the context.

```python
create_event(
    *,
    trace_context: TraceContext | None = None,
    name: str,
    input: Any | None = None,
    output: Any | None = None,
    metadata: Any | None = None,
    version: str | None = None,
    level: Literal['DEBUG', 'DEFAULT', 'WARNING', 'ERROR'] | None = None,
    status_message: str | None = None,
) -> InteractiveAIEvent
```

**Parameters**

* `trace_context` — Optional context for connecting to an existing trace
* `name` — Name of the event (e.g., function or operation name)
* `input` — Input data for the operation (can be any JSON-serializable object)
* `output` — Output data from the operation (can be any JSON-serializable object)
* `metadata` — Additional metadata to associate with the event
* `version` — Version identifier for the code or component
* `level` — Importance level of the event (DEBUG, DEFAULT, WARNING, ERROR)
* `status_message` — Optional status message for the event

**Returns**

The InteractiveAI Event object

**Example**

```python
event = interactiveai.create_event(name="process-event")
```

***

## `create_trace_id` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L1849)

Create a unique trace ID for use with InteractiveAI.

This method generates a unique trace ID for use with various InteractiveAI APIs. It can either generate a random ID or create a deterministic ID based on a seed string.

Trace IDs must be 32 lowercase hexadecimal characters, representing 16 bytes. This method ensures the generated ID meets this requirement. If you need to correlate an external ID with an InteractiveAI trace ID, use the external ID as the seed to get a valid, deterministic InteractiveAI trace ID.

```python
create_trace_id(
    *,
    seed: str | None = None,
) -> str
```

**Parameters**

* `seed` — Optional string to use as a seed for deterministic ID generation. If provided, the same seed will always produce the same ID. If not provided, a random ID will be generated.

**Returns**

A 32-character lowercase hexadecimal string representing the InteractiveAI trace ID.

**Example**

```python
# Generate a random trace ID
trace_id = interactiveai.create_trace_id()

# Generate a deterministic ID based on a seed
session_trace_id = interactiveai.create_trace_id(seed="session-456")

# Correlate an external ID with an InteractiveAI trace ID
external_id = "external-system-123456"
correlated_trace_id = interactiveai.create_trace_id(seed=external_id)

# Use the ID with trace context
with interactiveai.start_as_current_span(
    name="process-request",
    trace_context={"trace_id": trace_id}
) as span:
    # Operation will be part of the specific trace
    pass
```

***

## `get_current_trace_id` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L2202)

Get the trace ID of the current active span.

This method retrieves the trace ID from the currently active span in the context. It can be used to get the trace ID for referencing in logs, external systems, or for creating related operations.

```python
get_current_trace_id() -> str | None
```

**Returns**

The current trace ID as a 32-character lowercase hexadecimal string, or None if there is no active span.

**Example**

```python
with interactiveai.start_as_current_span(name="process-request") as span:
    # Get the current trace ID for reference
    trace_id = interactiveai.get_current_trace_id()

    # Use it for external correlation
    log.info(f"Processing request with trace_id: {trace_id}")

    # Or pass to another system
    external_system.process(data, trace_id=trace_id)
```

***

## `get_current_observation_id` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L2236)

Get the observation ID (span ID) of the current active span.

This method retrieves the observation ID from the currently active span in the context. It can be used to get the observation ID for referencing in logs, external systems, or for creating scores or other related operations.

```python
get_current_observation_id() -> str | None
```

**Returns**

The current observation ID as a 16-character lowercase hexadecimal string, or None if there is no active span.

**Example**

```python
with interactiveai.start_as_current_span(name="process-user-query") as span:
    # Get the current observation ID
    observation_id = interactiveai.get_current_observation_id()

    # Store it for later reference
    cache.set(f"query_{query_id}_observation", observation_id)

    # Process the query...
```

***

## `get_trace_url` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L2269)

Get the URL to view a trace in the InteractiveAI UI.

This method generates a URL that links directly to a trace in the InteractiveAI UI. It's useful for providing links in logs, notifications, or debugging tools.

```python
get_trace_url(
    *,
    trace_id: str | None = None,
) -> str | None
```

**Parameters**

* `trace_id` — Optional trace ID to generate a URL for. If not provided, the trace ID of the current active span will be used.

**Returns**

A URL string pointing to the trace in the InteractiveAI UI, or None if the project ID couldn't be retrieved or no trace ID is available.

**Example**

```python
# Get URL for the current trace
with interactiveai.start_as_current_span(name="process-request") as span:
    trace_url = interactiveai.get_trace_url()
    log.info(f"Processing trace: {trace_url}")

# Get URL for a specific trace
specific_trace_url = interactiveai.get_trace_url(trace_id="1234567890abcdef1234567890abcdef")
send_notification(f"Review needed for trace: {specific_trace_url}")
```
