We provide a callback handler that can be used to track your LangChain calls, chains, and agents.


First, install the relevant lunary package:

pip install lunary

Then, set the LUNARY_PUBLIC_KEY environment variable to your app's tracking ID.
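If you're configuring this from Python rather than your shell, you can set the variable before the handler is created (a minimal sketch; the key value is a placeholder, not a real key):

```python
import os

# Placeholder value; use the real public key from your Lunary dashboard.
os.environ["LUNARY_PUBLIC_KEY"] = "your-public-key"
```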


If you'd prefer not to set an environment variable, you can pass the key directly when initializing the callback handler:

from lunary import LunaryCallbackHandler
handler = LunaryCallbackHandler(app_id="PUBLIC KEY")

Usage with LLM calls

You can use the callback handler with any LLM or Chat class from LangChain.

from langchain_openai import ChatOpenAI
from lunary import LunaryCallbackHandler
handler = LunaryCallbackHandler()
chat = ChatOpenAI(callbacks=[handler])
chat.invoke("Say test")

Usage with chains (LCEL)

You can also use the callback handler with LCEL (LangChain Expression Language).

from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnablePassthrough, RunnableConfig
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
import lunary
handler = lunary.LunaryCallbackHandler()
config = RunnableConfig({"callbacks": [handler]})
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
output_parser = StrOutputParser()
model = ChatOpenAI(model="gpt-4")
chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | model
    | output_parser
)
chain.invoke("ice cream", config=config)  # You need to pass the config each time you call `.invoke()`
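Since the config has to accompany every call, it can be convenient to build it once and reuse it. `RunnableConfig` is a plain dict under the hood, so a dict literal works as well. A sketch (the handler list is left empty here — in practice it would be `[handler]` — and the tag name is hypothetical):

```python
# Hedged sketch: the plain-dict form accepted by `.invoke(..., config=...)`.
run_config = {
    "callbacks": [],                 # in practice: [handler]
    "tags": ["joke-chain"],          # hypothetical tag for filtering runs
    "metadata": {"user_id": "123"},  # optional extra context for the trace
}
```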

Usage with agents

The callback handler works seamlessly with LangChain agents and chains.

For agents, it is recommended to pass a name in the metadata so you can identify them in the dashboard.


from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from lunary import LunaryCallbackHandler
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    MessagesPlaceholder("chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),  # required by create_openai_tools_agent
])
tools = [TavilySearchResults(max_results=1)]
handler = LunaryCallbackHandler()
config = RunnableConfig({"callbacks": [handler]})
llm = ChatOpenAI(model="gpt-4")
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": "what is LangChain?"}, config=config)
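Following the recommendation above, the agent's name can go in the config's metadata. A sketch in plain-dict form (`agent_name` is an assumed metadata key — check Lunary's dashboard documentation for the exact one — and the name itself is hypothetical):

```python
# Hedged sketch: a run config carrying an agent name for the dashboard.
config = {
    "callbacks": [],  # in practice: [handler]
    "metadata": {"agent_name": "WebSearchAgent"},  # assumed key, hypothetical name
}
```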

Usage with custom agents

If you're partially using LangChain, you can use the callback handler combined with the lunary module to track custom agents:

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import lunary

handler = lunary.LunaryCallbackHandler()
chat = ChatOpenAI(callbacks=[handler])

def TranslatorAgent(query):
    messages = [
        SystemMessage(content="You're a helpful assistant"),
        HumanMessage(content=query),
    ]
    return chat.invoke(messages)

res = TranslatorAgent("Good morning")

Usage with LangServe

You can use the callback handler to track all calls to your LangServe server.


from fastapi import FastAPI
from langchain_openai import ChatOpenAI
from langserve import add_routes
from lunary import LunaryCallbackHandler

handler = LunaryCallbackHandler()

app = FastAPI(
    title="LangChain Server",
    description="Spin up a simple api server using Langchain's Runnable interfaces",
)

model = ChatOpenAI(callbacks=[handler])

add_routes(app, model, path="/openai", config_keys=["metadata"])

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="localhost", port=8000)


Then, on the client side:

from langchain_core.messages import SystemMessage, HumanMessage
from langserve import RemoteRunnable

openai = RemoteRunnable("http://localhost:8000/openai/")

prompt = [
    SystemMessage(content="Act like either a cat or a parrot."),
    HumanMessage(content="Hello"),
]

res = openai.invoke(prompt, config={"metadata": {
    "user_id": "123", "tags": ["user1"]}})

Questions? We're here to help.