MLflow Callback Handler in LangChain - Track Your LLM Experiments
Posted: Nov 8, 2024.
The MLflow callback handler in LangChain allows you to track and monitor your LLM experiments by logging metrics, artifacts and traces to MLflow's tracking server. In this guide, we'll explore how to use this powerful callback to get insights into your LangChain applications.
What is MLflowCallbackHandler?
MLflowCallbackHandler is a callback handler in LangChain that integrates with MLflow to track experiments and log information about LLM runs. It captures important metrics, artifacts and execution traces when running chains, agents, chat models and other LangChain components.
The handler can log:
- Metrics like token usage and number of LLM calls
- Chain/Agent execution traces as JSON
- Input/output data as Pandas DataFrames
- Chain/Agent definitions as MLflow models
- Text complexity metrics (when spaCy is installed)
Reference
Here are the key methods and attributes of MLflowCallbackHandler:
| Method/Attribute | Description |
|---|---|
| __init__() | Initialize the handler with experiment name, tags, tracking URI, etc. |
| on_llm_start() | Called when an LLM starts running |
| on_llm_end() | Called when an LLM finishes running |
| on_chain_start() | Called when a chain starts running |
| on_chain_end() | Called when a chain ends running |
| on_tool_start() | Called when a tool starts running |
| on_tool_end() | Called when a tool ends running |
| flush_tracker() | Flush tracked data to the MLflow server |
How to Use MLflowCallbackHandler
Basic Setup
First, import and initialize the callback handler:
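Here's a minimal sketch, assuming a recent LangChain version where the class is exported from langchain_community.callbacks as MlflowCallbackHandler and an OpenAI LLM is used; the experiment name and tags are placeholders, and the constructor arguments may vary slightly across versions:

```python
# Assumed installs: mlflow, langchain, langchain-community, langchain-openai
# (textstat/spaCy add the text-complexity metrics mentioned above).
from langchain_community.callbacks import MlflowCallbackHandler
from langchain_openai import OpenAI

# Create the handler. Runs are logged to the local ./mlruns directory by
# default; a tracking_uri argument can point it at a remote MLflow server.
mlflow_callback = MlflowCallbackHandler(
    experiment="langchain-experiments",  # placeholder experiment name
    tags={"project": "demo"},            # placeholder tags
)

# Attach the callback to an LLM so its calls are captured.
llm = OpenAI(temperature=0, callbacks=[mlflow_callback])
```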
Using with Chains
You can use the callback handler when invoking chains to track their execution:
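Building on the setup above, here's a hedged sketch using a classic LLMChain; the prompt and inputs are placeholders, and you can also pass the callback per invocation instead of at construction time:

```python
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Write a one-sentence synopsis for a film titled {title}."
)

# Attach the callback when the chain is constructed.
synopsis_chain = LLMChain(llm=llm, prompt=prompt, callbacks=[mlflow_callback])

synopsis_chain.invoke({"title": "The Last Compiler"})

# Write the metrics, traces, and artifacts collected so far to MLflow.
mlflow_callback.flush_tracker(synopsis_chain)
```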
Using with Agents
The handler can also track agent executions:
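For example, here's a sketch using the classic initialize_agent API and the built-in llm-math tool (both stand-ins for whatever agent and tools you actually use):

```python
from langchain.agents import AgentType, initialize_agent, load_tools

# llm-math lets the agent answer arithmetic questions via the LLM.
tools = load_tools(["llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callbacks=[mlflow_callback],
)

agent.invoke({"input": "What is 7 raised to the 0.43 power?"})

# finish=True closes the underlying MLflow run once the agent is done.
mlflow_callback.flush_tracker(agent, finish=True)
```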
Configuring What to Track
You can configure which components to track using the ignore flags:
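LangChain's BaseCallbackHandler exposes ignore_llm, ignore_chain, ignore_agent, and related properties that every handler inherits. One way to set them, sketched below under that assumption, is to subclass the handler and override the properties you want flipped; the ChainOnlyMlflowCallback name is hypothetical:

```python
from langchain_community.callbacks import MlflowCallbackHandler

class ChainOnlyMlflowCallback(MlflowCallbackHandler):
    """Hypothetical subclass that skips LLM- and agent-level events."""

    @property
    def ignore_llm(self) -> bool:
        # on_llm_start / on_llm_end events are no longer forwarded here.
        return True

    @property
    def ignore_agent(self) -> bool:
        # Agent action/finish events are skipped; chain events are still logged.
        return True

mlflow_callback = ChainOnlyMlflowCallback(experiment="langchain-experiments")
```

Keep in mind that skipping LLM events also means the handler won't see token usage for those calls.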
Accessing the MLflow UI
After running your code with the callback handler, you can view the tracked data in the MLflow UI:
- Start the MLflow UI server with mlflow ui (it serves on port 5000 by default).
- Open http://localhost:5000 in your browser to see:
  - Metrics like token usage
  - Execution traces
  - Input/output artifacts
  - Model information
Remember to always call flush_tracker() at the end of your runs to ensure all data is written to MLflow.
This callback handler provides valuable insights into your LangChain applications and helps with experiment tracking, debugging and optimization.