# OpenAI Agents

The OpenAI Agents SDK is a lightweight, open-source framework for building AI agents and coordinating multi-agent workflows. It provides building blocks such as tools, handoffs, and guardrails for configuring language models with custom instructions, and its Python-first design supports dynamic instructions and function tools for rapid development and integration with external systems.

This guide covers routing telemetry from OpenAI Agents SDK applications to InteractiveAI for monitoring, debugging, and evaluating agent behavior.

### Prerequisites

* InteractiveAI account with API credentials
* OpenAI API key

***

### Installation

```bash
pip install openai-agents interactiveai nest_asyncio openinference-instrumentation-openai-agents
```

***

### Configuration

Set your API credentials as environment variables:

```python
import os

# InteractiveAI credentials
os.environ["INTERACTIVEAI_PUBLIC_KEY"] = "pk-..."
os.environ["INTERACTIVEAI_SECRET_KEY"] = "sk-..."

# OpenAI credentials
os.environ["OPENAI_API_KEY"] = "sk-proj-..."
```
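Alternatively, export the same variables in your shell before launching the process; the key values shown here are placeholders:

```shell
export INTERACTIVEAI_PUBLIC_KEY="pk-..."
export INTERACTIVEAI_SECRET_KEY="sk-..."
export OPENAI_API_KEY="sk-proj-..."
```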

***

### Enabling Trace Capture

Tracing relies on the OpenInference instrumentation for the OpenAI Agents SDK. Initialize it before running any agents:

```python
import nest_asyncio
nest_asyncio.apply()

from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

OpenAIAgentsInstrumentor().instrument()
```

Initialize the client and confirm connectivity:

```python
from interactiveai import Interactive

interactiveai = Interactive(
    public_key=os.environ["INTERACTIVEAI_PUBLIC_KEY"],
    secret_key=os.environ["INTERACTIVEAI_SECRET_KEY"],
)

if interactiveai.auth_check():
    print("Connection established")
else:
    print("Authentication failed - verify credentials")
```

***

### Running a Basic Agent

The simplest use case is a single agent that answers questions. Running it produces one trace showing the agent's LLM call.

```python
import asyncio
from agents import Agent, Runner

async def main():
    agent = Agent(
        name="Assistant",
        instructions="You provide concise, factual answers.",
    )

    result = await Runner.run(agent, "Explain the concept of recursion in programming.")
    print(result.final_output)

# nest_asyncio.apply() makes this safe in notebooks as well as scripts
asyncio.run(main())
```

***

### Multi-Agent Handoffs

When a single agent isn't enough, you can create specialized agents and route requests between them. The router agent decides which specialist should handle each question.

```python
from agents import Agent, Runner
import asyncio

technical_agent = Agent(
    name="Technical agent",
    instructions="You handle technical and programming questions.",
)

general_agent = Agent(
    name="General agent",
    instructions="You handle general knowledge questions.",
)

router_agent = Agent(
    name="Router agent",
    instructions="Route to the appropriate agent based on the question type.",
    handoffs=[technical_agent, general_agent],
)

async def main():
    result = await Runner.run(router_agent, input="How do I implement a binary search algorithm?")
    print(result.final_output)

asyncio.run(main())
```

The automatic instrumentation captures LLM calls but not agent-level input/output. To see the query and response at the root span level, use `start_as_current_span` as shown in the [Enriching Traces](#enriching-traces-with-context) section.

***

### Function Tools

Agents can call Python functions to access external data or perform actions. The trace captures each function call with its arguments and return value.

This is useful for connecting agents to databases, APIs, or any custom logic.

```python
import asyncio
from agents import Agent, Runner, function_tool

@function_tool
def get_stock_price(ticker: str) -> str:
    prices = {"AAPL": 187, "MSFT": 378, "GOOGL": 141}
    price = prices.get(ticker.upper(), 100)
    return f"{ticker.upper()} is trading at ${price}."

agent = Agent(
    name="Financial assistant",
    instructions="You help with stock market information.",
    tools=[get_stock_price],
)

async def main():
    result = await Runner.run(agent, input="What's the current price of Apple stock?")
    print(result.final_output)

asyncio.run(main())
```
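Behind the scenes, `@function_tool` derives a JSON schema from the function's signature and docstring so the model knows how to call it. The sketch below is an illustrative, stdlib-only approximation of that mapping, not the SDK's actual implementation:

```python
import inspect
from typing import get_type_hints

# Simplified mapping from Python annotations to JSON schema types
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Derive a JSON-schema-like description from a function signature."""
    hints = get_type_hints(fn)
    params = inspect.signature(fn).parameters
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": {
                name: {"type": TYPE_MAP.get(hints.get(name), "string")}
                for name in params
            },
            "required": list(params),
        },
    }

def get_stock_price(ticker: str) -> str:
    """Return the latest price for a ticker symbol."""
    return f"{ticker.upper()} is trading at $100."

schema = tool_schema(get_stock_price)
print(schema["parameters"]["properties"])  # {'ticker': {'type': 'string'}}
```

The trace viewer shows these same fields: the tool name, the argument values the model supplied, and the return value.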

***

### Grouping Agent Runs

When your workflow involves multiple agent calls, use `trace()` to group them under a single parent trace. This makes it easier to see the complete sequence of operations.

```python
from agents import Agent, Runner, trace
import asyncio

async def main():
    agent = Agent(name="Content creator", instructions="Create engaging content.")

    with trace("Content workflow"):
        draft = await Runner.run(agent, "Write a product description for wireless headphones")
        review = await Runner.run(agent, f"Improve this description: {draft.final_output}")
        print(f"Draft: {draft.final_output}")
        print(f"Revised: {review.final_output}")

asyncio.run(main())
```

Each call appears as a sub-span under the parent trace, showing the complete sequence of operations.

***

### Enriching Traces with Context

For production applications, you'll want to track which user triggered each trace and add metadata for filtering. Combine OpenInference instrumentation with InteractiveAI's span management to attach identifiers and metadata.

```python
import os
import asyncio

from interactiveai import Interactive
from agents import Agent, Runner

interactiveai = Interactive(
    public_key=os.environ["INTERACTIVEAI_PUBLIC_KEY"],
    secret_key=os.environ["INTERACTIVEAI_SECRET_KEY"],
)

async def run_with_context():
    with interactiveai.start_as_current_span(name="openai-agents-task") as span:
        # Attach identifiers and metadata for filtering in the dashboard
        interactiveai.update_current_trace(
            user_id="user_openai_agents",
            session_id="session_openai_agents",
            tags=["openai-agents", "assistant"],
            metadata={"department": "support", "environment": "production"},
        )

        agent = Agent(
            name="Support assistant",
            instructions="You help customers with product questions.",
        )

        query = "How do I reset my password?"
        result = await Runner.run(agent, query)

        # Record the query and response on the root span
        interactiveai.update_current_trace(
            input=query,
            output=result.final_output,
        )

    # Ensure buffered spans are sent before the process exits
    interactiveai.flush()

asyncio.run(run_with_context())
```

***

### Trace Visibility

The InteractiveAI dashboard displays:

* Agent execution flow and handoffs
* LLM calls with prompts and completions
* Function tool invocations with arguments and results
* Multi-agent routing decisions
* Token consumption and latency metrics
