# Scoring

## Overview

Attach numeric, boolean, or categorical scores to traces and spans.

`create_score` posts a score to any trace or observation by ID, while `score_current_span` and `score_current_trace` resolve the target from the active OTel context automatically.

***

## `create_score` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L1934)

Create a score for a specific trace or observation.

This method creates a score for evaluating an InteractiveAI trace or observation. Scores can track quality metrics, user feedback, or automated evaluations.

```python
create_score(
    *,
    name: str,
    value: Union[float, str],
    session_id: str | None = None,
    dataset_run_id: str | None = None,
    trace_id: str | None = None,
    observation_id: str | None = None,
    score_id: str | None = None,
    data_type: Literal['NUMERIC', 'CATEGORICAL', 'BOOLEAN'] | None = None,
    comment: str | None = None,
    config_id: str | None = None,
    metadata: Any | None = None,
    timestamp: datetime | None = None,
) -> None
```

**Parameters**

* `name` — Name of the score (e.g., "relevance", "accuracy")
* `value` — Score value: numeric for NUMERIC and BOOLEAN types, a string for CATEGORICAL
* `session_id` — ID of the InteractiveAI session to associate the score with
* `dataset_run_id` — ID of the InteractiveAI dataset run to associate the score with
* `trace_id` — ID of the InteractiveAI trace to associate the score with
* `observation_id` — Optional ID of the specific observation to score; `trace_id` must also be provided
* `score_id` — Optional custom ID for the score (auto-generated if not provided)
* `data_type` — Type of score (NUMERIC, BOOLEAN, or CATEGORICAL)
* `comment` — Optional comment or explanation for the score
* `config_id` — Optional ID of a score config defined in InteractiveAI
* `metadata` — Optional metadata to be attached to the score
* `timestamp` — Optional timestamp for the score (defaults to current UTC time)

**Example**

```python
# Create a numeric score for accuracy
interactiveai.create_score(
    name="accuracy",
    value=0.92,
    trace_id="abcdef1234567890abcdef1234567890",
    data_type="NUMERIC",
    comment="High accuracy with minor irrelevant details"
)

# Create a categorical score for sentiment
interactiveai.create_score(
    name="sentiment",
    value="positive",
    trace_id="abcdef1234567890abcdef1234567890",
    observation_id="abcdef1234567890",
    data_type="CATEGORICAL"
)
```
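The examples above cover NUMERIC and CATEGORICAL scores; BOOLEAN scores and session-level targets work the same way. A minimal sketch, where `_StubClient` only mimics the `create_score` keyword interface so the snippet is self-contained (in real code you would call your configured client instead):

```python
# Stand-in with the same keyword interface as `create_score`, used here
# only so this sketch runs without a configured InteractiveAI client.
class _StubClient:
    def __init__(self) -> None:
        self.scores: list[dict] = []

    def create_score(self, **kwargs) -> None:
        self.scores.append(kwargs)

interactiveai = _StubClient()

# Map thumbs-up/down feedback to a BOOLEAN score. Per the signature
# above, BOOLEAN values are passed numerically (1.0 = True, 0.0 = False).
thumbs_up = True
interactiveai.create_score(
    name="user_feedback",
    value=float(thumbs_up),
    session_id="session-1234",
    data_type="BOOLEAN",
    comment="Thumbs-up from the chat UI",
)
```

Exactly one of `session_id`, `dataset_run_id`, or `trace_id` identifies the target; here the score attaches to the whole session.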

***

## `score_current_span` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L2029)

Create a score for the current active span.

This method scores the currently active span in the context. It's a convenient way to score the current operation without needing to know its trace and span IDs.

```python
score_current_span(
    *,
    name: str,
    value: Union[float, str],
    score_id: str | None = None,
    data_type: Literal['NUMERIC', 'CATEGORICAL', 'BOOLEAN'] | None = None,
    comment: str | None = None,
    config_id: str | None = None,
) -> None
```

**Parameters**

* `name` — Name of the score (e.g., "relevance", "accuracy")
* `value` — Score value: numeric for NUMERIC and BOOLEAN types, a string for CATEGORICAL
* `score_id` — Optional custom ID for the score (auto-generated if not provided)
* `data_type` — Type of score (NUMERIC, BOOLEAN, or CATEGORICAL)
* `comment` — Optional comment or explanation for the score
* `config_id` — Optional ID of a score config defined in InteractiveAI

**Example**

```python
with interactiveai.start_as_current_generation(name="answer-query") as generation:
    # Generate answer
    response = generate_answer(...)
    generation.update(output=response)

    # Score the generation
    interactiveai.score_current_span(
        name="relevance",
        value=0.85,
        data_type="NUMERIC",
        comment="Mostly relevant but contains some tangential information"
    )
```
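Judge-style evaluations often produce a rating on a small integer scale, and one way to feed that into `score_current_span` is to normalize it to [0, 1] first. `rating_to_unit` below is a hypothetical helper, not part of the SDK:

```python
def rating_to_unit(rating: int, low: int = 1, high: int = 5) -> float:
    """Linearly rescale an integer rating to the [0, 1] range."""
    if not low <= rating <= high:
        raise ValueError(f"rating {rating} outside [{low}, {high}]")
    return (rating - low) / (high - low)

value = rating_to_unit(4)  # 0.75
```

Inside an active span, pass the result as `value` with `data_type="NUMERIC"`, e.g. `interactiveai.score_current_span(name="judge_quality", value=value, data_type="NUMERIC")`.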

***

## `score_current_trace` [(source)](https://github.com/interactive-ai/interactiveai-python-sdk/blob/main/interactiveai/_client/client.py#L2101)

Create a score for the current trace.

This method scores the trace of the currently active span. Unlike `score_current_span`, it associates the score with the entire trace rather than a specific span, which is useful for rating the overall quality of an operation.

```python
score_current_trace(
    *,
    name: str,
    value: Union[float, str],
    score_id: str | None = None,
    data_type: Literal['NUMERIC', 'CATEGORICAL', 'BOOLEAN'] | None = None,
    comment: str | None = None,
    config_id: str | None = None,
) -> None
```

**Parameters**

* `name` — Name of the score (e.g., `user_satisfaction`, `overall_quality`)
* `value` — Score value: numeric for NUMERIC and BOOLEAN types, a string for CATEGORICAL
* `score_id` — Optional custom ID for the score (auto-generated if not provided)
* `data_type` — Type of score (NUMERIC, BOOLEAN, or CATEGORICAL)
* `comment` — Optional comment or explanation for the score
* `config_id` — Optional ID of a score config defined in InteractiveAI

**Example**

```python
with interactiveai.start_as_current_span(name="process-user-request") as span:
    # Process request
    result = process_complete_request()
    span.update(output=result)

    # Score the overall trace
    interactiveai.score_current_trace(
        name="overall_quality",
        value=0.95,
        data_type="NUMERIC",
        comment="High quality end-to-end response"
    )
```
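Trace-level scores can also be CATEGORICAL, derived from a measured property of the whole request. The `latency_bucket` helper below is hypothetical (not part of the SDK); its string result is a valid `value` for `score_current_trace` with `data_type="CATEGORICAL"`:

```python
def latency_bucket(seconds: float) -> str:
    """Bucket an end-to-end latency into a categorical label."""
    if seconds < 1.0:
        return "fast"
    if seconds < 5.0:
        return "acceptable"
    return "slow"

label = latency_bucket(2.3)  # "acceptable"
```

Inside the span context, call `interactiveai.score_current_trace(name="latency", value=label, data_type="CATEGORICAL")` to attach the label to the whole trace.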
