# InteractiveAI SDK

The InteractiveAI SDK provides instrumentation for Python applications, enabling trace capture, scoring, dataset management, and asynchronous event processing with minimal performance overhead.

For languages beyond Python, use the OpenTelemetry endpoint to send traces from any runtime.

***

### Installation

```bash
pip install interactiveai
```

***

### Configuration

Set your API credentials as environment variables.

```bash
INTERACTIVEAI_PUBLIC_KEY="pk-..."
INTERACTIVEAI_SECRET_KEY="sk-..."
```

{% hint style="info" %}
Obtain your project keys from **Settings > API Keys** in the InteractiveAI Platform.
{% endhint %}
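If you prefer to fail fast when credentials are missing, you can validate the environment before constructing the client. The helper below is an illustrative sketch, not part of the SDK; the `setdefault` lines only stand in for values you would normally export in your shell:

```python
import os

def load_interactiveai_keys() -> tuple[str, str]:
    """Read both required keys, raising a clear error if either is absent."""
    try:
        return (
            os.environ["INTERACTIVEAI_PUBLIC_KEY"],
            os.environ["INTERACTIVEAI_SECRET_KEY"],
        )
    except KeyError as missing:
        raise RuntimeError(f"Missing required environment variable: {missing}") from None

# Placeholder values for illustration only; export real keys in your shell instead.
os.environ.setdefault("INTERACTIVEAI_PUBLIC_KEY", "pk-...")
os.environ.setdefault("INTERACTIVEAI_SECRET_KEY", "sk-...")

public_key, secret_key = load_interactiveai_keys()
```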

***

### Client Initialization

Initialize the InteractiveAI client and verify connectivity.

```python
import os

from interactiveai import Interactive

interactiveai = Interactive(
    public_key=os.environ["INTERACTIVEAI_PUBLIC_KEY"],
    secret_key=os.environ["INTERACTIVEAI_SECRET_KEY"],
)

if interactiveai.auth_check():
    print("Connection established")
else:
    print("Authentication failed - verify credentials")
```

***

### Basic Instrumentation

Create spans to capture operations within your application. Spans can be nested to represent hierarchical workflows:

```python
from interactiveai import Interactive

interactiveai = Interactive()

# Create a span using context manager
with interactiveai.start_as_current_span(name="handle-request") as span:
    # Application logic here
    span.update(output="Request handled")

    # Nested generation for LLM calls
    with interactiveai.start_as_current_observation(name="llm-call", as_type="generation", model="gpt-4") as generation:
        # LLM invocation here
        generation.update(output="Generated response")
```

***

### Flushing Events

The SDK processes events asynchronously to minimize latency impact. For short-lived applications such as scripts, serverless functions, or CLI tools, explicitly flush pending events before process termination.

```python
from interactiveai import Interactive

interactiveai = Interactive()

# ... application logic ...

# Ensure all events are sent before exit
interactiveai.flush()
```
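Why flushing matters can be seen in a minimal sketch of an asynchronous exporter (illustrative only, not the SDK's internals): events go onto a queue, a background thread drains it, and `flush()` blocks until the queue is empty. Without the final join, a short-lived process could exit with events still queued.

```python
import queue
import threading

events = queue.Queue()  # pending events, drained asynchronously
sent = []               # events that reached the "backend"

def _worker() -> None:
    # Simulates the SDK's background exporter thread.
    while True:
        event = events.get()
        sent.append(event)
        events.task_done()

threading.Thread(target=_worker, daemon=True).start()

def flush() -> None:
    # Block until every queued event has been processed.
    events.join()

for i in range(3):
    events.put(f"event-{i}")

flush()  # without this, the daemon thread may die before draining the queue
```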

***

### Scoring Traces

Attach evaluation scores to traces for quality tracking and analysis. Scores can represent any metric relevant to your application: accuracy, latency, user satisfaction, or custom evaluation criteria.

```python
import os

from interactiveai import Interactive
from openai import OpenAI

interactiveai = Interactive()

openai_client = OpenAI(
    base_url="https://dev.interactive.ai/api/v1",
    api_key=os.environ.get("LLMROUTER_API_KEY"),
)

input_data = "In transformer attention, what is the difference between causal attention and bidirectional attention?"

with interactiveai.start_as_current_span(name="evaluated-operation") as span:
    response = openai_client.chat.completions.create(
        model="openai/gpt-4o",
        messages=[
            {"role": "user", "content": input_data}
        ],
    )
    result = response.choices[0].message.content
    print(result)
    
    # Score the trace
    interactiveai.score_current_trace(
        name="accuracy",
        value=0.95,
        data_type="NUMERIC",
        comment="High confidence response"
    )
    
    # Multiple scores per trace are supported
    interactiveai.score_current_trace(
        name="relevance",
        value=1,
        data_type="NUMERIC",
        comment="Response directly addresses query"
    )

interactiveai.flush()
```

***

### Available Methods

| Description                           | Python                          |
| ------------------------------------- | ------------------------------- |
| Create a span for generic operations  | `start_as_current_span()`       |
| Create a span for LLM calls           | `start_as_current_generation()` |
| Modify the active span                | `update_current_span()`         |
| Add metadata to the current trace     | `update_current_trace()`        |
| Attach evaluation scores to the trace | `score_current_trace()`         |
| Record standalone scoring events      | `create_score()`                |
| Initialize a new dataset              | `create_dataset()`              |
| Retrieve an existing dataset          | `get_dataset()`                 |
| Add items to a dataset                | `create_dataset_item()`         |
| Send pending events immediately       | `flush()`                       |
| Flush events and close connections    | `shutdown()`                    |
| Verify API credential validity        | `auth_check()`                  |
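The dataset methods in the table follow a create/retrieve/append pattern. The in-memory class below sketches that flow using the table's method names; the item fields (`input`, `expected_output`) and return shapes are assumptions, not the SDK's actual signatures:

```python
class InMemoryDatasets:
    """Illustrative stand-in for the SDK's dataset methods."""

    def __init__(self) -> None:
        self._datasets = {}

    def create_dataset(self, name: str) -> None:
        # Initialize an empty dataset; creating twice is a no-op here.
        self._datasets.setdefault(name, [])

    def get_dataset(self, name: str) -> list:
        # Retrieve all items in an existing dataset.
        return self._datasets[name]

    def create_dataset_item(self, dataset_name: str, input: str, expected_output: str) -> None:
        # Append one evaluation item to the named dataset.
        self._datasets[dataset_name].append(
            {"input": input, "expected_output": expected_output}
        )

client = InMemoryDatasets()
client.create_dataset("qa-eval")
client.create_dataset_item("qa-eval", input="2+2?", expected_output="4")
items = client.get_dataset("qa-eval")
```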
