# Traces

A trace is the fundamental **unit of observability**: a complete, structured record of a single AI operation **from start to finish**. Every trace captures what went in, what came out, how long it took, what it cost, and every intermediate step along the way.

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FjFh7q3RbIdguAWqpjA5o%2FClipboard-20260311-125245-264.gif?alt=media&#x26;token=91552011-5f98-4eb2-ad0c-85c89ec0893c" alt=""><figcaption></figcaption></figure></div>

### Why Traces Matter

Traditional logging falls short for AI systems because LLM behavior is non-deterministic and multi-step: the same input can produce different outputs, and a single user request might trigger multiple model calls, tool invocations, and decision points. Traces provide the structured, hierarchical view needed to **understand and debug** this complexity.

With comprehensive tracing, you can:

* **Pinpoint** exactly where a request failed or produced unexpected results.
* **Understand** the sequence of operations that led to a specific output.
* **Measure latency** and **cost** at each step, not just overall.
* Build datasets from real production interactions for **testing and evaluation**.
* **Identify patterns** across thousands of requests that would be invisible in traditional logs.

Click any trace to open the **detail view**, which displays the complete input/output payloads, all nested observations, timing breakdown, cost details, and any attached scores.

{% hint style="info" %}
For the full Tracing API reference including all method signatures, parameters, and advanced options, see the [SDK Documentation](https://app.gitbook.com/s/jHEEbkpMbUW2x51XS8Ez/tracing).
{% endhint %}

***

### Creating a Trace

There are two ways to create traces: using the context manager directly, or using the `@observe` decorator, which automates most of the work.

{% tabs %}
{% tab title="Context Manager" %}
Use `start_as_current_observation` to create a trace explicitly:

```python
import interactiveai

with interactiveai.start_as_current_observation(
    as_type="span",
    name="your-trace-name",
    input={"user_query": "What's the weather in Madrid?"},
) as root_span:
    root_span.update(output={"final_response": "In Madrid, it is currently partly cloudy with a temperature of around 6°C (43°F)."})

interactiveai.flush() # Ensures all pending data is sent
```

{% endtab %}

{% tab title="@observe Decorator" %}
The `@observe` decorator automatically creates traces around function calls, captures inputs and outputs, and works with both sync and async functions.

```python
import interactiveai
from interactiveai import observe

@observe()
def get_weather_response(user_query):
    # Automatically traced with name "get_weather_response"
    # Function arguments are captured as input
    # Return value is captured as output
    weather_data = fetch_weather_api(user_query)
    return format_response(weather_data)

result = get_weather_response("What's the weather in Madrid?")
interactiveai.flush() # Ensures all pending data is sent
```

You can customize the observation name and type:

```python
@observe(name="weather-lookup")
def get_weather_response(user_query):
    return fetch_and_format(user_query)
```

Async functions are supported automatically:

```python
@observe(name="weather-lookup")
async def get_weather_response(user_query):
    return await async_fetch_and_format(user_query)
```

{% endtab %}
{% endtabs %}

***

### Updating a Trace

After creating a trace, you can update it with additional attributes like user ID, session ID, tags, and metadata.

{% tabs %}
{% tab title="Context Manager" %}
Use `update_trace()` on the span object or `update_current_trace()` on the client:

```python
import interactiveai

# Option 1: Using the span object
with interactiveai.start_as_current_observation(as_type="span", name="user-request") as span:
    span.update_trace(
        name="weather-query",
        user_id="user-123",
        session_id="session-abc",
        tags=["production", "weather"],
        metadata={"source": "web-app", "app_version": "2.1.0"},
        input={"query": "What's the weather?"},
    )
    # Your application logic
    span.update(output={"response": "..."})

interactiveai.flush() # Ensures all pending data is sent
```

```python
# Option 2: Using update_current_trace() from anywhere within the trace
with interactiveai.start_as_current_observation(as_type="span", name="user-request") as span:
    interactiveai.update_current_trace(
        name="weather-query",
        user_id="user-123",
        session_id="session-abc",
        tags=["production", "weather"],
        metadata={"source": "web-app"},
    )

interactiveai.flush() # Ensures all pending data is sent
```

{% endtab %}

{% tab title="@observe Decorator" %}
Inside a decorated function, use `update_current_trace()` to set trace-level attributes:

```python
import interactiveai
from interactiveai import observe

@observe()
def handle_request(user_id, query):
    interactiveai.update_current_trace(
        name="weather-query",
        user_id=user_id,
        session_id="session-abc",
        tags=["production", "weather"],
        metadata={"source": "web-app", "app_version": "2.1.0"},
    )
    return process_query(query)

handle_request("user-123", "What's the weather?")
interactiveai.flush() # Ensures all pending data is sent
```

Alternatively, use `propagate_attributes` to set attributes that automatically propagate to all child spans:

```python
import interactiveai
from interactiveai import observe, propagate_attributes

@observe()
def handle_request(user_id, session_id, query):
    with propagate_attributes(
        user_id=user_id,
        session_id=session_id,
        tags=["production", "weather"],
        metadata={"source": "web-app"},
    ):
        # All child spans created inside this block inherit these attributes
        return process_query(query)

@observe(as_type="generation")
def process_query(query):
    # This span automatically has user_id, session_id, tags, and metadata
    return llm.generate(query)

handle_request("user-123", "session-abc", "What's the weather?")
interactiveai.flush() # Ensures all pending data is sent
```

{% hint style="warning" %}
Only spans created **after** entering the `propagate_attributes` context will inherit the attributes. Pre-existing spans are not retroactively updated.
{% endhint %}
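
For instance, a minimal sketch of this pitfall (span names are illustrative):

```python
import interactiveai
from interactiveai import propagate_attributes

with interactiveai.start_as_current_observation(as_type="span", name="outer"):
    with propagate_attributes(user_id="user-123"):
        # The "outer" span was opened before entering propagate_attributes,
        # so it does not receive user_id retroactively.
        with interactiveai.start_as_current_observation(as_type="span", name="inner"):
            # The "inner" span is created inside the block and inherits user_id.
            pass

interactiveai.flush()
```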
{% endtab %}
{% endtabs %}

***

### Deterministic IDs

By default, the platform auto-generates trace IDs (32-character lowercase hexadecimal strings). However, for cases where you need **consistent identification across systems**, you can generate deterministic IDs using seed values.

Use cases for deterministic IDs:

* Deep linking to traces from your own UI or logs
* Adding scores and evaluations by referencing trace IDs
* Fetching specific traces programmatically via the SDK
* Connecting traces to external identifiers (support tickets, user requests, message IDs)

{% tabs %}
{% tab title="Context Manager" %}

```python
import interactiveai

external_request_id = "your-trace-id"
deterministic_trace_id = interactiveai.create_trace_id(seed=external_request_id)

with interactiveai.start_as_current_observation(
    as_type="span",
    name="your-trace-name",
    input={"request_id": external_request_id, "query": "What's the weather in Madrid?"},
    trace_context={"trace_id": deterministic_trace_id}
) as root_span:
    root_span.update(output={"response": "In Madrid, it is currently partly cloudy with a temperature of around 6°C (43°F)."})

interactiveai.flush() # Ensures all pending data is sent
```

{% endtab %}

{% tab title="@observe Decorator" %}
Pass the deterministic trace ID using the special `interactiveai_trace_id` keyword argument when calling the decorated function:

```python
import interactiveai
from interactiveai import observe

@observe()
def process_request(request_id, query):
    return get_response(query)

external_request_id = "your-trace-id"
deterministic_trace_id = interactiveai.create_trace_id(seed=external_request_id)

# Pass the trace ID as a special keyword argument
process_request(
    "your-trace-id",
    "What's the weather in Madrid?",
    interactiveai_trace_id=deterministic_trace_id
)

interactiveai.flush() # Ensures all pending data is sent
```

The `interactiveai_trace_id` argument is intercepted by the decorator and not passed to the actual function.
{% endtab %}
{% endtabs %}

{% hint style="warning" %}
Trace IDs must be unique within a project.
{% endhint %}
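
Because a deterministic ID is derived purely from its seed, you can recompute it at any time from the same external identifier instead of storing it. A minimal sketch of the deep-linking use case, using a hypothetical support-ticket ID as the seed and `get_trace_url` (covered under Trace Utilities below):

```python
import interactiveai

ticket_id = "TICKET-4711"  # Hypothetical external identifier

# The same seed always produces the same trace ID,
# so it can be recomputed instead of stored.
trace_id = interactiveai.create_trace_id(seed=ticket_id)
assert trace_id == interactiveai.create_trace_id(seed=ticket_id)

# Build a deep link to the trace in the UI
print(interactiveai.get_trace_url(trace_id=trace_id))
```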

***

### Distributed Tracing

Distributed tracing enables you to **correlate traces** across microservices by propagating shared trace IDs through OpenTelemetry context. This becomes essential when your LLM application spans multiple services: for example, a Python service handling user requests might call a Java service for database queries, which then routes back to Python for agent processing.

This unified visibility lets you:

* **Debug issues** that span service boundaries without manually correlating timestamps.
* Measure true **end-to-end latency**, not just per-service latency.
* Understand how a **single user request** propagates through your entire architecture.
* **Identify bottlenecks** that only emerge when services interact.
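
The examples below show the downstream side: continuing an existing trace. For context, a minimal sketch of the upstream side, which captures its current IDs with `get_current_trace_id()` and `get_current_observation_id()` (covered under Trace Utilities below) and hands them to the next service. The HTTP header names and endpoint here are illustrative assumptions, not a prescribed protocol:

```python
import interactiveai
import requests  # Assumes services communicate over HTTP

with interactiveai.start_as_current_observation(as_type="span", name="user-request"):
    # Capture the current trace context to pass to the downstream service
    headers = {
        "X-Trace-Id": interactiveai.get_current_trace_id(),            # Hypothetical header name
        "X-Parent-Observation-Id": interactiveai.get_current_observation_id(),  # Hypothetical header name
    }
    requests.post("https://downstream.example.com/query", headers=headers)
```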

{% tabs %}
{% tab title="Context Manager" %}

```python
import interactiveai

existing_trace_id = "your-trace-id"  # Trace ID to link this observation to
existing_parent_span_id = "your-observation-id"  # Observation ID of the parent span to link to

with interactiveai.start_as_current_observation(
    as_type="span",
    name="your-trace-name",
    trace_context={
        "trace_id": existing_trace_id,
        "parent_span_id": existing_parent_span_id,
    },
):
    pass  # Your downstream service logic runs here as part of the existing trace
```

{% endtab %}

{% tab title="@observe Decorator" %}
Pass the existing trace and parent span IDs using the special keyword arguments when calling the decorated function:

```python
import interactiveai
from interactiveai import observe

@observe(name="downstream-service")
def process_in_other_service(query):
    return run_query(query)

# Link to an existing trace from another service
process_in_other_service(
    "SELECT * FROM users",
    interactiveai_trace_id="your-trace-id",
    interactiveai_parent_observation_id="your-observation-id"
)

interactiveai.flush()
```

The `interactiveai_trace_id` and `interactiveai_parent_observation_id` arguments are intercepted by the decorator and not passed to the actual function.
{% endtab %}
{% endtabs %}

***

### Trace Utilities

#### Getting Trace URLs

Generate a direct link to a trace in the InteractiveAI UI, useful for logging, notifications, or debugging tools:

```python
import interactiveai

# Get URL for the current trace (works inside both context manager and @observe)
with interactiveai.start_as_current_observation(as_type="span", name="process-request"):
    trace_url = interactiveai.get_trace_url()
    print(f"View trace: {trace_url}")

# Get URL for a specific trace by ID
url = interactiveai.get_trace_url(trace_id="abc123def456...")
```

#### Getting Current Context

Retrieve the trace ID or observation ID of the currently active span. This works inside both context managers and `@observe` decorated functions:

```python
import interactiveai

# An observation must be active for these calls to return IDs
with interactiveai.start_as_current_observation(as_type="span", name="process-request"):
    trace_id = interactiveai.get_current_trace_id()
    observation_id = interactiveai.get_current_observation_id()
    print(f"Trace: {trace_id}, Observation: {observation_id}")
```

***

### Translating a Trace

Click the translate icon next to the Formatted / JSON toggle in the trace detail view to translate input and output content into your preferred language. Set your translation language from the user menu under Translation. A "Translated to \[language]" badge appears briefly to confirm the translation. Click the icon again to revert to the original content.

<div data-with-frame="true"><figure><img src="https://708770081-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F1ICwJbq7EJdn5kBgXnQu%2Fuploads%2FrHAF6TGPdwjtPGijaEGa%2FScreenshot%202026-03-24%20at%2013.38.14.png?alt=media&#x26;token=6468dc4b-e20b-40a9-9e99-8930eeef7608" alt=""><figcaption></figcaption></figure></div>

***

### Properties of a Trace

| Property                | Description                                                                                              |
| ----------------------- | -------------------------------------------------------------------------------------------------------- |
| **Id**                  | Unique identifier. Auto-generated or set manually for deterministic linking                              |
| **Trace Name**          | Human-readable label                                                                                     |
| **Timestamp**           | Creation time of the trace                                                                               |
| **Input/Output**        | JSON payloads capturing the request and response                                                         |
| **Observation Levels**  | Summary count of all the nested activities within that trace                                             |
| **Latency**             | End-to-end execution time of the trace                                                                   |
| **Tokens**              | Total token count (input + output) across all observations                                               |
| **Model Cost**          | Accumulated cost of all model calls within the trace                                                     |
| **Environment**         | Separates data from different deployment contexts, such as `production`, `staging`, or `development`      |
| **Tags**                | Array of strings for better categorization and filtering (e.g., \["prod", "rag", "v2"])                  |
| **Metadata**            | Free-form JSON for extra context (e.g., `run_name`, `dataset_item_id`)                                    |
| **Scores**              | Evaluation metrics attached to the trace (e.g., quality ratings, correctness checks, custom evaluations) |
| **Session**             | Groups multiple traces into a single conversation or interaction                                         |
| **User**                | Associates the trace with a specific end user                                                            |
| **Observations**        | The total number of observations in the trace                                                            |
| **Level**               | Severity or log level (e.g., DEBUG, DEFAULT, WARNING, ERROR)                                             |
| **Version**             | Logical version of your workflow                                                                         |
| **Release**             | Associates the trace with a specific deployment                                                          |
| **Input/Output Cost**   | Accumulated cost of processing input/output tokens across all observations in the trace                  |
| **Input/Output Tokens** | Total input/output token count across all observations in the trace                                      |
| **Total Tokens**        | Combined input and output token count across all observations in the trace                               |
