LangChain's ToolMessage class: Complete Guide

Posted: Nov 17, 2024.

While working with LangChain, you will most likely come across the ToolMessage class, which provides a structured way to relay tool outputs back to the model.

This article explores what ToolMessage is, how it works, and how to use it effectively, with practical examples and advanced use cases such as handling CSV data and debugging.

An Introduction to ToolMessage Class

The ToolMessage class in LangChain represents the result of a tool invocation, capturing its output so it can be communicated back to the model.

It was introduced in version 0.2.17 and provides a structured way to manage function outputs, metadata, and additional data that may not be sent directly to the model.

The functionality of the ToolMessage class includes:

  • Encoding tool results inside the content field.
  • Associating tool responses with their requests using unique identifiers (tool_call_id).
  • Supporting additional payload data and artifacts for complex scenarios.

Here’s a quick usage example. First, install LangChain:

pip install langchain

Then create a simple tool result message:

from langchain_core.messages import ToolMessage

# Simple tool result message, linked to its originating call by tool_call_id
message = ToolMessage(content="42", tool_call_id="call_example_123")
print(message)

The ToolMessage class has several attributes, summarized below:

  • content (Union[str, List[Union[str, Dict]]], required): The main content of the message.
  • tool_call_id (str, required): A unique identifier linking the tool call request with its response.
  • artifact (Any, optional): A field for artifacts (e.g., images or full outputs) not sent directly to the model.
  • additional_kwargs (dict, optional): Reserved for additional metadata or payload data.
  • response_metadata (dict, optional): Metadata such as response headers or token counts.
  • status (Literal['success', 'error'], optional): Indicates the success or failure of the tool invocation. Defaults to 'success'.
  • id (Optional[str], optional): A unique identifier for the message.

Why use the ToolMessage class?

ToolMessage is used to report a tool's execution results back to the model.

Additionally, it enables:

  1. Clarity in Communication: It captures all necessary information about a tool’s execution in one structured object.
  2. Error Handling: With attributes like status, you can handle errors and improve robustness.
  3. Parallel Execution: The tool_call_id enables tracking of multiple tool invocations in parallel workflows.
  4. Flexibility: Support for artifacts and additional data makes it ideal for outputs like graphs or images.

How to handle artifacts with ToolMessage?

Suppose a tool generates an image or a detailed output that cannot be sent to the model directly. In that case, you can use the artifact field.

from langchain_core.messages import ToolMessage

# Tool output: a short text summary plus a large artifact (an image)
output = {
    "stdout": "Graph shows strong correlation between x and y.",
    "artifacts": {"type": "image", "base64_data": "data:image/png;base64,..."}
}

# Only the text summary is sent to the model; the image stays in the artifact field
message = ToolMessage(
    content=output["stdout"],
    artifact=output["artifacts"],
    tool_call_id="call_graph_6789"
)

Example Usage of ToolMessage with CSV Data

For data-heavy applications, CSV files are a common source. ToolMessage simplifies the process of extracting, summarizing and passing information to the model.

import pandas as pd
from langchain_core.messages import ToolMessage, SystemMessage, HumanMessage
from langchain_openai import ChatOpenAI  # requires the langchain-openai package

# Load the dataset
df = pd.read_csv("sample_data.csv")

# Summarize the data
summary = f"The dataset has {df.shape[0]} rows and {df.shape[1]} columns."

# Include additional metadata in the artifact
metadata = {
    "columns": df.columns.tolist(),
    "sample": df.head().to_dict()
}

# Create a ToolMessage
tool_message = ToolMessage(
    content=summary,
    artifact=metadata,
    tool_call_id="csv_process_001"
)

# Define the system message
system_message = SystemMessage(
    content=(
        "You are an intelligent data assistant. "
        "Use the provided summary and metadata to help analyze and answer questions about the dataset."
    )
)

# Define the human message
human_message = HumanMessage(
    content=(
        "Based on the dataset summary and metadata, can you provide insights on the distribution of values in each column?"
    )
)

chat_model = ChatOpenAI(temperature=0.7)
response = chat_model.invoke([system_message, human_message, tool_message])
print(response.content)

This pattern lets you surface insights derived from datasets while retaining the raw details for deeper analysis. Note that the OpenAI API expects each tool message to follow an assistant message containing the matching tool call, so in a full agent loop an AIMessage with tool_calls would precede the ToolMessage.

Using Lunary for monitoring tool calls

Lunary is an open-source platform that can help you see how tool calls are invoked, what led to their invocations and how they impact your agent's flows. Lunary's observability features include tracing, analytics and feedback tracking to monitor and debug AI models.

Why use Lunary:

  1. LangChain Tracing: Understand how your complex LangChain agents work and why certain tools are invoked.
  2. LangSmith Alternative: Lunary is a more modern, open-source alternative to LangSmith.
  3. Chatbot Analytics: View how people interact with your chatbot, capture their feedback, and replay conversations.
  4. Prompt Management: Manage your LangChain prompts in a clean interface and collaborate with your team on them.

See how to integrate Lunary with LangChain.
