LangChain
We provide a callback handler that can be used to track LangChain LLM calls, chains, and agents.
Setup
First, install the relevant `lunary` and `langchain` packages:

```shell
pip install lunary
pip install langchain
```
Then, set the `LUNARY_PUBLIC_KEY` environment variable to your app's tracking ID:

```shell
LUNARY_PUBLIC_KEY="PUBLIC KEY"
```
If you'd prefer not to set an environment variable, you can pass the key directly when initializing the callback handler:
```python
from lunary import LunaryCallbackHandler

handler = LunaryCallbackHandler(app_id="PUBLIC KEY")
```
Usage with LLM calls
You can use the callback handler with any LLM or Chat class from LangChain.
```python
from langchain_openai import ChatOpenAI
from lunary import LunaryCallbackHandler

handler = LunaryCallbackHandler()

chat = ChatOpenAI(callbacks=[handler])
chat.invoke("Say test")
```
Usage with chains (LCEL)
You can also use the callback handler with LCEL (LangChain Expression Language).

```python
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnablePassthrough, RunnableConfig
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
import lunary

handler = lunary.LunaryCallbackHandler()
config = RunnableConfig({"callbacks": [handler]})

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
output_parser = StrOutputParser()
model = ChatOpenAI(model="gpt-4")

chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | model
    | output_parser
)

# You need to pass the config each time you call `.invoke()`
chain.invoke("ice cream", config=config)
```
Usage with agents
The callback handler works seamlessly with LangChain agents and chains.
For agents, it is recommended to pass a name in the metadata so they can be tracked in the dashboard.
Example:
```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from lunary import LunaryCallbackHandler

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    MessagesPlaceholder("chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

tools = [TavilySearchResults(max_results=1)]

handler = LunaryCallbackHandler()
config = RunnableConfig({"callbacks": [handler]})

llm = ChatOpenAI(model="gpt-4")
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

agent_executor.invoke({"input": "what is LangChain?"}, config)
```
Usage with custom agents
If you're only partially using LangChain, you can combine the callback handler with the `lunary` module to track custom agents:

```python
from langchain.schema.messages import HumanMessage, SystemMessage
from langchain.chat_models import ChatOpenAI
import lunary

chat = ChatOpenAI()

@lunary.agent()
def TranslatorAgent(query):
    messages = [
        SystemMessage(content="You're a helpful assistant"),
        HumanMessage(content=query),  # pass the caller's query through
    ]
    return chat.invoke(messages)

res = TranslatorAgent("Good morning")
```
Usage with LangServe
You can use the callback handler to track all calls to your LangServe server.
Server
```python
from fastapi import FastAPI
from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import ConfigurableField
from langserve import add_routes
from lunary import LunaryCallbackHandler

handler = LunaryCallbackHandler()

app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="Spin up a simple api server using Langchain's Runnable interfaces",
)

model = ChatOpenAI(callbacks=[handler]).configurable_fields(
    metadata=ConfigurableField(
        id="metadata",
        name="Metadata",
        description="Custom metadata",
    ),
)

add_routes(app, model, path="/openai", config_keys=["metadata"])

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
```
Client
```python
from langchain.schema import SystemMessage, HumanMessage
from langserve import RemoteRunnable

openai = RemoteRunnable("http://localhost:8000/openai/")

prompt = [
    SystemMessage(content="Act like either a cat or a parrot."),
    HumanMessage(content="Hello!"),
]

res = openai.invoke(
    prompt,
    config={"metadata": {"user_id": "123", "tags": ["user1"]}},
)
print(res)
```