Traces
A trace is the fundamental unit of observability: a complete, structured record of a single AI operation from start to finish. Every trace captures what went in, what came out, how long it took, what it cost, and every intermediate step along the way.

Why Traces Matter
Traditional logging falls short for AI systems because LLM behavior is non-deterministic and multi-step: the same input can produce different outputs, and a single user request might trigger multiple model calls, tool invocations, and decision points. Traces provide the structured, hierarchical view needed to understand and debug this complexity.
With comprehensive tracing, you can:
Pinpoint exactly where a request failed or produced unexpected results.
Understand the sequence of operations that led to a specific output.
Measure latency and cost at each step, not just overall.
Build datasets from real production interactions for testing and evaluation.
Identify patterns across thousands of requests that would be invisible in traditional logs.
Click any trace to open the detail view, which displays the complete input/output payloads, all nested observations, timing breakdown, cost details, and any attached scores.
Creating a Trace
There are two ways to create traces: using the context manager directly, or using the @observe decorator, which automates most of the work.
Use start_as_current_observation to create a trace explicitly:
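A minimal sketch of the context-manager pattern. Only the method name start_as_current_observation comes from this page; the client class, the span's fields, and the update() call shape are illustrative assumptions:

```python
import contextlib
import time
import uuid

# Minimal stand-in for the SDK client; everything except the method name
# start_as_current_observation is an illustrative assumption.
class _Span:
    def __init__(self, name, as_type):
        self.name = name
        self.type = as_type
        self.trace_id = uuid.uuid4().hex  # 32-char lowercase hex string
        self.input = None
        self.output = None

    def update(self, **attrs):
        for key, value in attrs.items():
            setattr(self, key, value)

class _Client:
    @contextlib.contextmanager
    def start_as_current_observation(self, name, as_type="span"):
        span = _Span(name, as_type)
        start = time.monotonic()
        try:
            yield span
        finally:
            span.latency = time.monotonic() - start  # recorded on exit

client = _Client()
with client.start_as_current_observation("handle-request") as span:
    span.update(input={"query": "What is a trace?"})
    span.update(output={"answer": "A structured record of one AI operation."})
```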
The @observe decorator automatically creates traces around function calls, captures inputs and outputs, and works with both sync and async functions.
You can customize the observation name and type:
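A sketch of both forms, using a stand-in decorator. The parameter names (name, as_type) follow the parameter list below; the recording mechanism (a .last_observation attribute) is purely for demonstration:

```python
import functools

# Illustrative stand-in for the SDK's @observe decorator; the real one
# exports observations to the backend rather than storing them locally.
def observe(func=None, *, name=None, as_type="span"):
    def wrap(f):
        @functools.wraps(f)
        def inner(*args, **kwargs):
            record = {"name": name or f.__name__, "type": as_type,
                      "input": {"args": args, "kwargs": kwargs}}
            record["output"] = f(*args, **kwargs)
            inner.last_observation = record  # stand-in for export
            return record["output"]
        return inner
    return wrap if func is None else wrap(func)

@observe  # default: a span named after the function
def summarize(text):
    return text[:20]

@observe(name="llm-call", as_type="generation")  # customized name and type
def generate(prompt):
    return f"completion for {prompt!r}"

summarize("a long document...")
generate("hello")
```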
Async functions are supported automatically:
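A sketch of how a single decorator can support both kinds of function; the real SDK decorator is assumed to branch on a coroutine check in a similar way:

```python
import asyncio
import functools

# Stand-in decorator that detects coroutine functions and wraps them
# with an async wrapper; the sync path is handled separately.
def observe(f):
    if asyncio.iscoroutinefunction(f):
        @functools.wraps(f)
        async def async_inner(*args, **kwargs):
            result = await f(*args, **kwargs)
            async_inner.last_output = result
            return result
        return async_inner

    @functools.wraps(f)
    def inner(*args, **kwargs):
        result = f(*args, **kwargs)
        inner.last_output = result
        return result
    return inner

@observe
async def fetch_answer(question):
    await asyncio.sleep(0)  # stand-in for an awaited model call
    return f"answer to {question!r}"

result = asyncio.run(fetch_answer("what is tracing?"))
```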
@observe decorator parameters:
name (str, optional): Custom name for the span. Defaults to the function name.
as_type (str, optional): Observation type. One of: "span" (default), "generation", "agent", "tool", "chain", "retriever", "embedding", "evaluator", "guardrail".
capture_input (bool, optional): Whether to capture function arguments as input. Default: True.
capture_output (bool, optional): Whether to capture the return value as output. Default: True.
transform_to_string (Callable, optional): Custom function to convert generator/iterator outputs to a string.
To disable automatic capture of large inputs/outputs:
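A sketch of the effect of the two capture flags. Only capture_input and capture_output come from the parameter list above; the stand-in decorator and its recording attributes are illustrative:

```python
# Stand-in decorator: when a flag is False, the corresponding payload is
# simply not recorded, while the function itself still runs normally.
def observe(func=None, *, capture_input=True, capture_output=True, **_kwargs):
    def wrap(f):
        def inner(*args, **kwargs):
            inner.captured_input = (
                {"args": args, "kwargs": kwargs} if capture_input else None
            )
            result = f(*args, **kwargs)
            inner.captured_output = result if capture_output else None
            return result
        return inner
    return wrap if func is None else wrap(func)

@observe(capture_input=False, capture_output=False)
def process_document(doc):
    return doc.upper()  # runs normally; payloads are just not recorded

process_document("sensitive payload")
```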
You can also disable automatic I/O capture globally via environment variable:
Special keyword arguments: When calling a decorated function, you can pass special keyword arguments to control tracing behavior. These are intercepted by the decorator and not passed to the actual function:
interactiveai_trace_id: Explicitly set the trace ID for this function call.
interactiveai_parent_observation_id: Explicitly set the parent span ID.
interactiveai_public_key: Route to a specific InteractiveAI project (when multiple clients exist).
Updating a Trace
After creating a trace, you can update it with additional attributes like user ID, session ID, tags, and metadata.
Use update_trace() on the span object or update_current_trace() on the client:
Inside a decorated function, use update_current_trace() to set trace-level attributes:
Alternatively, use propagate_attributes to set attributes that automatically propagate to all child spans:
propagate_attributes parameters:
user_id: User identifier (US-ASCII, ≤200 chars).
session_id: Session identifier (US-ASCII, ≤200 chars).
trace_name: Name to assign to the trace.
metadata: Key-value metadata dict.
tags: List of tags to categorize observations.
version: Version identifier for your application.
as_baggage: If True, propagates via OpenTelemetry baggage for cross-service tracing. Default: False.
Only spans created after entering the propagate_attributes context will inherit the attributes. Pre-existing spans are not retroactively updated.
Deterministic IDs
By default, the platform auto-generates trace IDs (32-character lowercase hexadecimal strings). However, for cases where you need consistent identification across systems, you can generate deterministic IDs using seed values.
Use cases for deterministic IDs:
Deep linking to traces from your own UI or logs
Adding scores and evaluations by referencing trace IDs
Fetching specific traces programmatically via the SDK
Connecting traces to external identifiers (support tickets, user requests, message IDs)
Pass the deterministic trace ID using the special interactiveai_trace_id keyword argument when calling the decorated function:
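One way to derive such an ID. Only the format (32-character lowercase hexadecimal) is stated above; hashing a seed with SHA-256 and truncating is an assumed, but common, derivation:

```python
import hashlib

# Hypothetical helper: hash an external identifier (support ticket,
# message ID, ...) into a deterministic 32-char lowercase-hex trace ID.
def create_trace_id(seed: str) -> str:
    return hashlib.sha256(seed.encode("utf-8")).hexdigest()[:32]

trace_id = create_trace_id("support-ticket-1234")
same_id = create_trace_id("support-ticket-1234")  # same seed, same ID
# The result can then be passed as interactiveai_trace_id=trace_id.
```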
The interactiveai_trace_id argument is intercepted by the decorator and not passed to the actual function.
Trace IDs must be unique within a project.
Distributed Tracing
Distributed tracing enables you to correlate traces across microservices by propagating shared Trace IDs through OpenTelemetry context. This becomes essential when your LLM application spans multiple services; for example, a Python service handling user requests that calls a Java service for database queries, which then routes back to Python for agent processing.
This unified visibility lets you:
Debug issues that span service boundaries without manually correlating timestamps.
Measure true end-to-end latency, not just per-service latency.
Understand how a single user request propagates through your entire architecture.
Identify bottlenecks that only emerge when services interact.
Pass the existing trace and parent span IDs using the special keyword arguments when calling the decorated function:
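A sketch of handing trace context from one service to the next. The stand-in decorator intercepts the two special kwargs as described above; the transport (a plain dict standing in for HTTP headers or a queue message) is illustrative:

```python
# Stand-in decorator that pops the distributed-tracing kwargs before
# calling the wrapped function.
def observe(f):
    def inner(*args, **kwargs):
        inner.trace_id = kwargs.pop("interactiveai_trace_id", None)
        inner.parent_observation_id = kwargs.pop(
            "interactiveai_parent_observation_id", None
        )
        return f(*args, **kwargs)
    return inner

@observe
def agent_step(payload):  # e.g. the downstream Python agent service
    return f"processed {payload!r}"

# Context as received from the upstream service (headers, queue message, ...):
incoming = {"trace_id": "a" * 32, "parent_observation_id": "b" * 16}

agent_step("user query",
           interactiveai_trace_id=incoming["trace_id"],
           interactiveai_parent_observation_id=incoming["parent_observation_id"])
```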
The interactiveai_trace_id and interactiveai_parent_observation_id arguments are intercepted by the decorator and not passed to the actual function.
Trace Utilities
Getting Trace URLs
Generate a direct link to a trace in the InteractiveAI UI, useful for logging, notifications, or debugging tools:
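A hypothetical sketch of such a helper. The URL layout below is an assumption, not the documented scheme; only "a direct link to a trace in the UI" is stated above:

```python
# Hypothetical helper; host, project ID, and path layout are assumptions.
def get_trace_url(host: str, project_id: str, trace_id: str) -> str:
    return f"{host}/project/{project_id}/traces/{trace_id}"

url = get_trace_url("https://cloud.example.com", "proj-1", "f" * 32)
```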
Getting Current Context
Retrieve the trace ID or observation ID of the currently active span. This works inside both context managers and @observe decorated functions:
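A sketch of how such helpers can be backed by a context variable, which is why they work the same inside context managers and decorated functions. The helper names and span shape here are assumptions:

```python
import contextvars

# Stand-in for the SDK's notion of "the currently active span".
_current_span = contextvars.ContextVar("current_span", default=None)

def get_current_trace_id():
    span = _current_span.get()
    return span["trace_id"] if span else None

def get_current_observation_id():
    span = _current_span.get()
    return span["observation_id"] if span else None

# Simulate an active span being set by a context manager or decorator:
_current_span.set({"trace_id": "c" * 32, "observation_id": "d" * 16})
trace_id = get_current_trace_id()
```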
Properties of a Trace
Id: Unique identifier. Auto-generated or set manually for deterministic linking.
Trace Name: Human-readable label.
Timestamp: Creation time of the trace.
Input/Output: JSON payloads capturing the request and response.
Observation Levels: Summary count of all the nested activities within the trace.
Latency: End-to-end execution time of the trace.
Tokens: Total token count (input + output) across all observations.
Model Cost: Accumulated cost of all model calls within the trace.
Environment: Separates data from different deployment contexts like production, staging, or development.
Tags: Array of strings for better categorization and filtering (e.g., ["prod", "rag", "v2"]).
Metadata: Free-form JSON for extra context (e.g., run_name, dataset_item_id).
Scores: Evaluation metrics attached to the trace (e.g., quality ratings, correctness checks, custom evaluations).
Session: Groups multiple traces into a single conversation or interaction.
User: Associates the trace with a specific end user.
Observations: The total number of observations in the trace.
Level: Severity or log level (e.g., DEBUG, DEFAULT, WARNING, ERROR).
Version: Logical version of your workflow.
Release: Associates the trace with a specific deployment.
Input/Output Cost: Accumulated cost of processing input/output tokens across all observations in the trace.
Input/Output Tokens: Total input/output token count across all observations in the trace.
Total Tokens: Combined input and output token count across all observations in the trace.