AIMessage class in LangChain

Posted: Nov 15, 2024.

When building AI applications with LangChain or similar frameworks, the behavior of your large language model (LLM) is the core determinant of your app's success. Continuously observing and tuning AI responses is therefore essential.

The AIMessage class is typically created automatically by LangChain during interactions, but it also gives developers the flexibility to manage AI responses and attach additional metadata.

What is the AIMessage Class in LangChain?

The AIMessage class represents messages generated by an AI model. It belongs to the langchain_core.messages.ai module and extends the BaseMessage class.

AIMessage provides a structured format for AI responses, including the content and optional metadata, which makes workflows easier to manage.

How Does AIMessage Fit into LangChain Workflows?

LangChain workflows revolve around user prompts and AI responses. The BaseMessage class provides a common structure and serialization format for the different message types: HumanMessage represents user input, while AIMessage represents the AI's reply.

  1. Content Handling: Manages the content produced by the model, ranging from simple text to complex data.

  2. Metadata Integration: Can include additional metadata, such as token counts or tool calls, for evaluation and debugging.

  3. Serialization: Supports serialization for easy storage or transfer of AI responses.
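The serialization point is easy to see with a minimal, framework-free sketch. Note this is an illustrative stand-in, not LangChain's actual implementation: a message with content and metadata round-trips through JSON for storage or transfer.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SketchMessage:
    """Illustrative stand-in for AIMessage; field names mirror LangChain's."""
    content: str
    additional_kwargs: dict = field(default_factory=dict)
    type: str = "ai"

msg = SketchMessage(content="Hi there!", additional_kwargs={"token_count": 2})
payload = json.dumps(asdict(msg))              # serialize for storage or transfer
restored = SketchMessage(**json.loads(payload))  # rebuild the message later
print(restored.content)  # Hi there!
```

LangChain's real classes handle this for you; the sketch just shows why a structured message format makes persistence and transport straightforward.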

Here’s a simple example to help you get started with AIMessage. Begin by installing the LangChain package:

pip install langchain

Once installed, you can create a basic AIMessage like this:

from langchain_core.messages import AIMessage

# Creating a simple AIMessage
message = AIMessage(content="This is an AI response.")
print(message.content)

The content parameter holds the response text.

You can add extra information to a response, such as tools used or confidence levels, using additional_kwargs.

content = "This is an AI response with metadata."

message = AIMessage(
    content=content,
    additional_kwargs={
        "model": "gpt-3.5-turbo",
        "token_count": len(content.split()),  # rough word count, not true tokens
        "confidence_score": 0.95,
    },
)

print(f"AI Response: {message.content}")
print(f"Metadata: {message.additional_kwargs}")

Chat models in LangChain return their responses as AIMessage objects. In the example below, the ChatOpenAI class returns an AIMessage.

from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

system_message = SystemMessage(content="You are an expert on machine learning.")
human_message = HumanMessage(content="What is the difference between supervised and unsupervised learning?")

chat_model = ChatOpenAI(temperature=0.7)
response = chat_model.invoke([system_message, human_message])  # response is an AIMessage
print(response.content)

Here are some more attributes of AIMessage.

| Parameter | Description |
| --- | --- |
| content | The message content (string or list). |
| additional_kwargs | Extra metadata, such as payload information. |
| id | Optional unique identifier for the message. |
| tool_calls | List of tool calls associated with the message. |
| response_metadata | Metadata like token counts or response headers. |
| type | Always set to 'ai' to indicate an AI-generated message. |

Simplifying Observability with Lunary

Adding observability to your AI applications is important for performance, reliability, and user satisfaction.

Lunary offers an observability suite for AI developers, providing analytics, logging, and tracing capabilities that let you monitor and analyze your LLM interactions.

Integrating Lunary into your LangChain application is straightforward. First, install the packages:

pip install lunary langchain

After creating an account on Lunary and setting up an app, you can obtain your app's tracking ID and expose it as an environment variable:

export LUNARY_PUBLIC_KEY="your_public_key"

In your Python code, import the necessary modules and set up the LunaryCallbackHandler:

from langchain_openai import ChatOpenAI
from lunary import LunaryCallbackHandler

# Initialize the Lunary callback handler
handler = LunaryCallbackHandler()

# Set up your chat model with the Lunary callback
chat = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0.7,
    callbacks=[handler],
)

# response will be serialized as AIMessage
response = chat.invoke("Hello, how can I assist you today?")
print(response.content)

This setup ensures that all interactions with the ChatOpenAI model are monitored and logged by Lunary.
