# AutoGen

Microsoft's AutoGen framework enables developers to create LLM-powered agents that reason, collaborate, and execute tasks through structured conversations. The open-source project simplifies assembling multi-agent systems where individual agents can work together or independently toward defined objectives.

This guide demonstrates how to capture detailed telemetry from AutoGen applications using InteractiveAI and OpenLIT instrumentation.

### Prerequisites

* InteractiveAI account with API credentials
* OpenAI API key (or other supported LLM provider)

***

### Installation

```bash
pip install interactiveai openlit autogen-agentchat "autogen-ext[openai]"
```

***

### Configuration

Create a `.env` file with your credentials:

```bash
INTERACTIVEAI_PUBLIC_KEY=pk-...
INTERACTIVEAI_SECRET_KEY=sk-...
OPENAI_API_KEY=sk-proj-...
```
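Before initializing the client, it can help to fail fast when a credential is missing. The following is a minimal stdlib-only sketch; the `missing_credentials` helper and `REQUIRED_VARS` list are illustrative, not part of the InteractiveAI SDK:

```python
import os

# Credentials the examples in this guide rely on.
REQUIRED_VARS = (
    "INTERACTIVEAI_PUBLIC_KEY",
    "INTERACTIVEAI_SECRET_KEY",
    "OPENAI_API_KEY",
)


def missing_credentials(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]


missing = missing_credentials()
if missing:
    print("Missing credentials:", ", ".join(missing))
```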

Initialize the client and verify connectivity:

```python
import os
from interactiveai import Interactive
from dotenv import load_dotenv

load_dotenv()

interactiveai = Interactive(
    public_key=os.getenv("INTERACTIVEAI_PUBLIC_KEY"),
    secret_key=os.getenv("INTERACTIVEAI_SECRET_KEY"),
)

if interactiveai.auth_check():
    print("Connection established")
else:
    print("Authentication failed - verify credentials")
```

***

### Enabling Trace Capture

OpenLIT handles automatic instrumentation of AutoGen operations. Connect it to the InteractiveAI tracer:

```python
import openlit

openlit.init(tracer=interactiveai._otel_tracer, disable_batch=True)
```

The `disable_batch=True` parameter forces immediate trace processing rather than queuing spans for batch export.
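As a toy illustration of the trade-off (this models the general batch-versus-immediate pattern, not OpenLIT's actual exporter internals):

```python
class ImmediateExporter:
    """Ships each span as soon as it ends: lowest latency, more export calls."""

    def __init__(self):
        self.exported = []

    def on_span_end(self, span):
        self.exported.append(span)


class BatchingExporter:
    """Queues spans and ships them in groups: fewer calls, delayed visibility."""

    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.pending = []
        self.exported = []

    def on_span_end(self, span):
        self.pending.append(span)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        # Export everything queued so far, then clear the queue.
        self.exported.extend(self.pending)
        self.pending = []
```

Immediate export suits short-lived scripts and debugging, where the process may exit before a batch fills. Long-running services typically keep batching enabled and flush pending spans on shutdown.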

***

### Running an AutoGen Agent

With instrumentation active, all agent operations flow to InteractiveAI automatically:

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model = OpenAIChatCompletionClient(model="gpt-4o")
    assistant = AssistantAgent("assistant", model_client=model)

    response = await assistant.run(task="Explain the concept of observability in three sentences.")
    print(response)

    await model.close()


asyncio.run(main())
```

{% hint style="info" %}
This assumes InteractiveAI and OpenLIT have been initialized as shown in the previous steps.
{% endhint %}

The resulting trace captures the full execution path: task input, agent processing, model requests, token consumption, and final output.

***

### Enriching Traces with Context

Combine OpenLIT instrumentation with the InteractiveAI SDK to attach business context, user identifiers, and custom metadata:

```python
import asyncio
import os
from dotenv import load_dotenv
from interactiveai import Interactive
import openlit
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

load_dotenv()

interactiveai = Interactive(
    public_key=os.getenv("INTERACTIVEAI_PUBLIC_KEY"),
    secret_key=os.getenv("INTERACTIVEAI_SECRET_KEY"),
)

openlit.init(tracer=interactiveai._otel_tracer, disable_batch=True)


async def main():
    with interactiveai.start_as_current_span(name="autogen-task") as span:
        # Attach user, session, and environment context to the current trace
        interactiveai.update_current_trace(
            user_id="user_autogen",
            session_id="session_autogen",
            tags=["autogen", "production"],
            metadata={"environment": "production", "version": "1.0"},
        )

        model = OpenAIChatCompletionClient(model="gpt-4o")
        assistant = AssistantAgent("assistant", model_client=model)

        task = "What are the key benefits of multi-agent architectures?"
        response = await assistant.run(task=task)

        span.update(
            input={"task": task},
            output={"response": str(response)},
        )

        print(response)
        await model.close()

    interactiveai.flush()


asyncio.run(main())
```

***

### Trace Visibility

After execution, the InteractiveAI dashboard surfaces:

* Complete conversation flow between agents
* Individual model calls with latency and token counts
* Input prompts and generated responses
* Cost calculations per request
* Metadata and tags for filtering and analysis
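Cost figures like these are derived from token counts and per-model rates. A simplified sketch of the arithmetic follows; the rates are illustrative placeholders, not InteractiveAI's pricing table:

```python
# Illustrative rates in USD per 1M tokens; real pricing varies by model and provider.
RATES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}


def request_cost(model, input_tokens, output_tokens):
    """Estimate the cost of a single model call in USD."""
    rate = RATES[model]
    return (input_tokens * rate["input"] + output_tokens * rate["output"]) / 1_000_000
```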
