# LlamaIndex Workflows

LlamaIndex Workflows provides an event-driven architecture for constructing AI agents. The framework uses the `@step` decorator to define processing stages, where each step handles specific event types and can emit new events. This pattern supports orchestrating multi-step processes like agent collaboration, RAG pipelines, and structured data extraction.
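The dispatch loop behind this pattern can be sketched without the framework. This is a toy illustration only, not the LlamaIndex implementation; all names here are invented for the example:

```python
from dataclasses import dataclass

# Each "step" accepts one event type and may emit another; a dispatcher
# routes events to the matching handler until a stop event is produced.

@dataclass
class StartEvent:
    query: str

@dataclass
class DraftEvent:
    text: str

@dataclass
class StopEvent:
    result: str

def draft_step(ev: StartEvent) -> DraftEvent:
    return DraftEvent(text=f"draft for: {ev.query}")

def polish_step(ev: DraftEvent) -> StopEvent:
    return StopEvent(result=ev.text.upper())

def run(start: StartEvent) -> str:
    # Route each event to the handler registered for its type.
    handlers = {StartEvent: draft_step, DraftEvent: polish_step}
    event = start
    while not isinstance(event, StopEvent):
        event = handlers[type(event)](event)
    return event.result

print(run(StartEvent(query="hello")))  # DRAFT FOR: HELLO
```

In the real framework, the `@step` decorator and event type annotations play the role of this handler registry, and steps run asynchronously.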

This guide covers capturing telemetry from LlamaIndex Workflows applications using InteractiveAI.

### Prerequisites

* InteractiveAI account with API credentials
* LLM provider credentials (OpenAI, Ollama, or other supported provider)

***

### Installation

```bash
pip install interactiveai openai llama-index-workflows llama-index-core llama-index-llms-openai openinference-instrumentation-llama_index llama-index-instrumentation
```

***

### Configuration

Set your API credentials as environment variables:

```python
import os

# InteractiveAI credentials
# Obtain keys from Settings > API Keys in the dashboard
os.environ["INTERACTIVEAI_PUBLIC_KEY"] = "pk-..."
os.environ["INTERACTIVEAI_SECRET_KEY"] = "sk-..."

# Model provider credentials
os.environ["OPENAI_API_KEY"] = "sk-proj-..."
```

Initialize the client and confirm connectivity:

```python
from interactiveai import Interactive

interactiveai = Interactive(
    public_key=os.environ["INTERACTIVEAI_PUBLIC_KEY"],
    secret_key=os.environ["INTERACTIVEAI_SECRET_KEY"],
)

if interactiveai.auth_check():
    print("Connection established")
else:
    print("Authentication failed - verify credentials")
```

***

### Enabling Trace Capture

LlamaIndex Workflows uses the same OpenInference instrumentor as the core LlamaIndex library:

```python
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

LlamaIndexInstrumentor().instrument()
```

Once activated, workflow steps, LLM calls, and event processing generate spans that route to InteractiveAI.
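If you want to emit spans only in some environments, one option is to gate the instrumentor behind a flag. The environment variable name below is an illustrative convention, not part of either SDK:

```python
import os

def tracing_enabled() -> bool:
    # Hypothetical convention: opt out of tracing with an env var
    # (e.g. in local development or unit tests).
    return os.environ.get("INTERACTIVEAI_TRACING", "true").lower() in ("1", "true", "yes")

def maybe_instrument() -> bool:
    """Activate the OpenInference instrumentor only when tracing is
    enabled; returns whether instrumentation was applied."""
    if not tracing_enabled():
        return False
    # Import lazily so untraced runs don't need the package at all.
    from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
    LlamaIndexInstrumentor().instrument()
    return True
```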

***

### Running a Workflow Application

Here's a working example with a simple single-step workflow:

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI
from typing import Annotated
from workflows import Workflow, step
from workflows.events import StartEvent, StopEvent
from workflows.resource import Resource


def get_llm(**kwargs):
    return OpenAI(model="gpt-4.1-mini")


class SummaryWorkflow(Workflow):
    @step
    async def process_query(
        self, ev: StartEvent, llm: Annotated[OpenAI, Resource(get_llm)]
    ) -> StopEvent:
        msg = ChatMessage(role="user", content=ev.get("input"))
        response = await llm.achat([msg])
        return StopEvent(result=response.message.content)


workflow = SummaryWorkflow()

# Top-level await works in notebooks; in a plain Python script, call
# this from an async function and run it with asyncio.run().
response = await workflow.run(input="Summarize the key benefits of event-driven architectures.")
print(response)
```
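Outside a notebook, the same call needs an async entry point. The sketch below stubs out the workflow call so the pattern itself is self-contained; `run_workflow` stands in for `await workflow.run(input=...)`:

```python
import asyncio

async def run_workflow(prompt: str) -> str:
    # Stand-in for `await workflow.run(input=prompt)`; a real script
    # would construct SummaryWorkflow() and call it here.
    return f"summary of: {prompt}"

async def main() -> str:
    return await run_workflow("event-driven architectures")

# asyncio.run() creates the event loop a script needs for `await`.
result = asyncio.run(main())
print(result)  # summary of: event-driven architectures
```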

***

### Enriching Traces with Context

Combine OpenInference instrumentation with the InteractiveAI SDK to attach identifiers and metadata:

```python
import os
from typing import Annotated

from interactiveai import Interactive
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI
from workflows import Workflow, step
from workflows.events import StartEvent, StopEvent
from workflows.resource import Resource

interactiveai = Interactive(
    public_key=os.environ["INTERACTIVEAI_PUBLIC_KEY"],
    secret_key=os.environ["INTERACTIVEAI_SECRET_KEY"],
)


def get_llm(**kwargs):
    return OpenAI(model="gpt-4.1-mini")


class AnalysisWorkflow(Workflow):
    @step
    async def analyze(
        self, ev: StartEvent, llm: Annotated[OpenAI, Resource(get_llm)]
    ) -> StopEvent:
        msg = ChatMessage(role="user", content=ev.get("input"))
        response = await llm.achat([msg])
        return StopEvent(result=response.message.content)


async def run_with_context():
    with interactiveai.start_as_current_span(name="workflow-analysis-task") as span:
        interactiveai.update_current_trace(
            user_id="user_123",
            session_id="session_abc",
            tags=["llamaindex", "workflows"],
            metadata={"pipeline": "analysis", "environment": "production"}
        )

        workflow = AnalysisWorkflow()

        query = "What are the main considerations when designing a RAG pipeline?"
        response = await workflow.run(input=query)

        interactiveai.update_current_trace(
            input=query,
            output=str(response)
        )

    # Flush after the span closes so buffered events are sent before exit.
    # (Placing flush after a `return` would make it unreachable.)
    interactiveai.flush()
    return response


result = await run_with_context()
print(result)
```

***

### Trace Visibility

The InteractiveAI dashboard displays:

* Workflow execution with step-by-step breakdown
* Event emissions and processing chains
* LLM calls with prompts and completions
* Token consumption and latency per step

To attach additional trace attributes or combine LlamaIndex Workflows with other InteractiveAI features, see [this guide](https://docs.interactive.ai/integrations/ai-frameworks/llamaindex).
