LangChain's ToolMessage class: Complete Guide
Posted: Nov 17, 2024.
While working with LangChain you will most likely come across the ToolMessage class, which provides a structured way to relay tool outputs back to the model.
This article explores what ToolMessage is, how it works, and how to use it effectively, with practical examples and advanced use cases like handling CSV data and debugging.
An Introduction to ToolMessage Class
The ToolMessage class in LangChain represents the result of a tool invocation, capturing the tool's output so it can be communicated back to the model.
It was introduced in version 0.2.17 and provides a structured way to manage function outputs, metadata, and additional data that may not be sent directly to the model.
The functionality of the ToolMessage class includes:
- Encoding tool results inside the content field.
- Associating tool responses with their requests using unique identifiers (tool_call_id).
- Supporting additional payload data and artifacts for complex scenarios.
Here’s a quick usage example:
The ToolMessage class has several attributes. Below is a table summarizing them:
Attribute | Type | Description | Required |
---|---|---|---|
content | Union[str, List[Union[str, Dict]]] | The main content of the message. | Yes |
tool_call_id | str | A unique identifier linking the tool call request with its response. | Yes |
artifact | Any | An optional field for artifacts (e.g., images or full outputs) not sent directly to the model. | No |
additional_kwargs | dict | Reserved for additional metadata or payload data. | No |
response_metadata | dict | Metadata like response headers or token counts. | No |
status | Literal['success', 'error'] | Indicates the success or failure of the tool invocation. Defaults to 'success' . | No |
id | Optional[str] | A unique identifier for the message. | No |
Why use the ToolMessage class?
ToolMessage is used to report a tool execution's results back to the model.
Additionally, it enables:
- Clarity in Communication: It captures all necessary information about a tool’s execution in one structured object.
- Error Handling: With attributes like status, you can handle errors and improve robustness.
- Parallel Execution: The tool_call_id enables tracking of multiple tool invocations in parallel workflows.
- Flexibility: Support for artifacts and additional data makes it ideal for outputs like graphs or images.
How to handle artifacts with ToolMessage?
Suppose a tool generates an image or a detailed output that cannot be sent to the model directly. In that case, you can use the artifact field.
Example Usage of ToolMessage with CSV Data
For data-heavy applications, CSV files are a common source.
ToolMessage simplifies the process of extracting, summarizing, and passing information to the model.
You can generate insights derived from datasets while retaining raw details for deeper analysis.
Using Lunary for monitoring tool calls
Lunary is an open-source platform that can help you see how tool calls are invoked, what led to their invocations and how they impact your agent's flows. Lunary's observability features include tracing, analytics and feedback tracking to monitor and debug AI models.
Why use Lunary:
- LangChain Tracing: Understand how your complex LangChain agents work and why certain tools are invoked
- Better LangSmith: Lunary is a more modern, open-source LangSmith alternative.
- Chatbot Analytics: View how people interact with your chatbot, capture their feedback and replay conversations.
- Manage Prompts: Manage your LangChain prompts in a clean interface and collaborate with your team on it.
See how to integrate Lunary with Langchain.