# LangChain & LangGraph

LangChain is an open-source framework for building applications powered by large language models (LLMs); it provides tools to connect models with external data, APIs, and logic. LangGraph is a framework built on top of LangChain for building complex, stateful, multi-agent applications. It includes built-in persistence for saving and resuming state, which enables error recovery and human-in-the-loop workflows.
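To make the persistence idea concrete, here is a dependency-free sketch of the pattern: state is checkpointed per thread so a run can be interrupted and resumed later. This is illustrative only (the `InMemoryCheckpointer` class and `run_step` function are invented for this sketch, not the LangGraph API):

```python
# Illustrative only: a toy checkpointer mimicking how LangGraph-style
# persistence keys saved state by thread, so a run can pause and resume.
class InMemoryCheckpointer:
    def __init__(self):
        self._checkpoints = {}  # thread_id -> latest state snapshot

    def save(self, thread_id, state):
        self._checkpoints[thread_id] = dict(state)

    def load(self, thread_id):
        return dict(self._checkpoints.get(thread_id, {}))


def run_step(state):
    # A stand-in for one graph node: append a message to the state.
    state["messages"] = state.get("messages", []) + ["step done"]
    return state


checkpointer = InMemoryCheckpointer()

# First invocation: run one step, then persist before an "interruption".
state = run_step({"messages": []})
checkpointer.save("thread-1", state)

# Later (e.g. after a crash or a human review), resume from the checkpoint.
resumed = checkpointer.load("thread-1")
resumed = run_step(resumed)
print(resumed["messages"])  # ['step done', 'step done']
```

In LangGraph itself, the checkpointer is passed at compile time and the thread is selected via the run config; the sketch above only shows the save/load shape that makes resumption possible.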

InteractiveAI integrates with LangChain through its built-in `CallbackHandler`. The SDK automatically captures detailed traces of your LangChain and LangGraph executions, including LLM calls, tool invocations, and agent steps.
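Under the hood, LangChain's callback system invokes lifecycle hooks (such as `on_llm_start`/`on_llm_end`) as a run executes, and a tracing handler records each start/end pair as a span. A simplified, dependency-free sketch of that pattern (the `MiniTracingHandler` class is illustrative, not the InteractiveAI implementation):

```python
import time

# Illustrative sketch of how a tracing callback handler records spans:
# each on_*_start hook opens a span, the matching on_*_end closes it.
class MiniTracingHandler:
    def __init__(self):
        self.spans = []   # completed spans, ready to export
        self._open = {}   # run_id -> span still in progress

    def on_llm_start(self, run_id, prompt):
        self._open[run_id] = {
            "type": "llm",
            "input": prompt,
            "started_at": time.time(),
        }

    def on_llm_end(self, run_id, output):
        span = self._open.pop(run_id)
        span["output"] = output
        span["ended_at"] = time.time()
        self.spans.append(span)


handler = MiniTracingHandler()
handler.on_llm_start("run-1", "What is the capital of France?")
handler.on_llm_end("run-1", "Paris")
print(handler.spans[0]["output"])  # Paris
```

The real handler implements many more hooks (chains, tools, retrievers) and ships spans to the InteractiveAI backend, but the start/end pairing shown here is the core mechanism.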

### Prerequisites

* InteractiveAI account with API credentials
* OpenAI API key (or other supported LLM provider)
* Python 3.11 or higher

***

### Installation

```bash
pip install interactiveai langchain langchain-openai langgraph
```

***

### Configuration

Set your API credentials as environment variables:

```python
import os

# InteractiveAI credentials
os.environ["INTERACTIVEAI_PUBLIC_KEY"] = "pk-..."
os.environ["INTERACTIVEAI_SECRET_KEY"] = "sk-..."

# Model provider credentials
os.environ["OPENAI_API_KEY"] = "sk-proj-..."
```

Initialize the client and confirm connectivity:

```python
import os
from dotenv import load_dotenv
from interactiveai import Interactive
from interactiveai.langchain import CallbackHandler

load_dotenv()

interactiveai = Interactive(
    public_key=os.getenv("INTERACTIVEAI_PUBLIC_KEY"),
    secret_key=os.getenv("INTERACTIVEAI_SECRET_KEY"),
)

handler = CallbackHandler()

if interactiveai.auth_check():
    print("Connection established")
else:
    print("Authentication failed - verify credentials")
```

***

### LangChain Example

Pass the `handler` to any LangChain invocation via the `callbacks` config:

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o-mini")

response = llm.invoke(
    [HumanMessage(content="What is the capital of France?")],
    config={"callbacks": [handler]}
)

print(response.content)
interactiveai.flush()
```

[Trace](https://dev.interactive.ai/project/cmk5bupx70066yf07x6khlood/traces?peek=62a6b3a45d1546d269613f69dcae6289\&timestamp=2026-01-13T14%3A17%3A57.754Z)

### LangGraph Example

Build a simple chatbot using LangGraph's `StateGraph`:

```python
from typing import Annotated
from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


llm = ChatOpenAI(model="gpt-4o-mini")


def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.set_entry_point("chatbot")
graph_builder.set_finish_point("chatbot")

graph = graph_builder.compile()

for chunk in graph.stream(
    {"messages": [HumanMessage(content="Explain observability in one sentence.")]},
    config={"callbacks": [handler]}
):
    print(chunk)

interactiveai.flush()
```

[Trace](https://dev.interactive.ai/project/cmk5bupx70066yf07x6khlood/traces?filter=\&peek=98cca5667d26e70f13de2dfb5c6ff51f\&timestamp=2026-01-13T14%3A16%3A59.033Z)

***

### Enriching Traces with Context

Combine the `CallbackHandler` with InteractiveAI's span management for additional metadata:

```python
import os
from dotenv import load_dotenv
from typing import Annotated
from typing_extensions import TypedDict
from interactiveai import Interactive
from interactiveai.langchain import CallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

load_dotenv()

interactiveai = Interactive(
    public_key=os.getenv("INTERACTIVEAI_PUBLIC_KEY"),
    secret_key=os.getenv("INTERACTIVEAI_SECRET_KEY"),
)

handler = CallbackHandler()


class State(TypedDict):
    messages: Annotated[list, add_messages]


llm = ChatOpenAI(model="gpt-4o-mini")


def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.set_entry_point("chatbot")
graph_builder.set_finish_point("chatbot")

graph = graph_builder.compile()

with interactiveai.start_as_current_span(name="langgraph-chatbot") as span:

    # Attach user, session, tags, and metadata to the enclosing trace
    interactiveai.update_current_trace(
        user_id="user_123",
        session_id="session_abc",
        tags=["langgraph", "chatbot"],
        metadata={"environment": "production"}
    )

    query = "What are the benefits of multi-agent systems?"

    result = None
    for chunk in graph.stream(
        {"messages": [HumanMessage(content=query)]},
        config={"callbacks": [handler]}
    ):
        result = chunk
        print(chunk)

    span.update(
        input={"query": query},
        output={"response": str(result)}
    )

interactiveai.flush()
```

[Trace](https://dev.interactive.ai/project/cmk5bupx70066yf07x6khlood/traces?filter=\&peek=77ca89504adee689d241627e027f387a\&timestamp=2026-01-13T14%3A12%3A11.450Z)

***

### Trace Visibility

After execution, the InteractiveAI dashboard displays:

* Complete agent execution flows
* Individual LLM calls with token counts
* Tool invocations and results
* Input/output at each step
* Cost and latency metrics
