When building AI applications with LangChain, your chat history usually lives as a list of LangChain message objects like SystemMessage, HumanMessage, AIMessage, and ToolMessage.
That format is convenient inside LangChain, but you may eventually need to save the conversation history, inspect it, or send it to different providers such as OpenAI-compatible APIs or Amazon Bedrock Converse.
Our LangChain conversation history will be:
- Converted to OpenAI chat-completions format.
- Converted to Amazon Bedrock Converse format.
- Saved to `chat_history_v1.json`.
- Loaded back into LangChain message objects.
- Converted again after loading to prove the round trip works.
- Used with an optional Bedrock `toolConfig` and `client.converse()` call.
Install Packages
pip install -U langchain-core langchain-aws boto3
Import Dependencies
import json
from pathlib import Path
from langchain_core.messages import (
AIMessage,
HumanMessage,
SystemMessage,
ToolCall,
ToolMessage,
convert_to_openai_messages,
messages_from_dict,
messages_to_dict,
)
from langchain_aws.chat_models.bedrock_converse import _messages_to_bedrock
Create a LangChain Chat History
This example includes regular messages and a simple tool-calling exchange.
LangChain uses:
- `ToolCall` for the assistant/model asking to call a tool
- `ToolMessage` for the application returning the tool result
weather_tool_call: ToolCall = {
"name": "get_weather",
"args": {
"city": "Chicago",
"unit": "fahrenheit",
},
"id": "tool_call_001",
}
updated_messages = [
SystemMessage(content="You are a helpful assistant."),
HumanMessage(content="What is RAG?"),
AIMessage(content="RAG means retrieval-augmented generation."),
HumanMessage(content="Explain it in simple terms."),
# The user asks a question that needs a tool.
HumanMessage(content="What is the weather in Chicago?"),
# The assistant decides to call a tool.
AIMessage(
content="",
tool_calls=[weather_tool_call],
),
# Your application runs the tool and adds the tool result back to the chat history.
ToolMessage(
content=json.dumps(
{
"city": "Chicago",
"unit": "fahrenheit",
"temperature": 72,
"condition": "Partly cloudy",
}
),
tool_call_id="tool_call_001",
),
# The assistant can now answer using the tool result.
AIMessage(content="The current weather in Chicago is 72°F and partly cloudy."),
]
updated_messages
Save the LangChain Message History
Use `messages_to_dict()` when saving LangChain messages. It preserves the information LangChain needs to recreate the correct message classes later via `messages_from_dict()`.
file_path = Path("chat_history_v1.json")
serialized_langchain_messages = messages_to_dict(updated_messages)
with file_path.open("w", encoding="utf-8") as file:
json.dump(serialized_langchain_messages, file, indent=4, ensure_ascii=False)
print(f"Saved LangChain message history to: {file_path}")
Saved LangChain Messages
[
{
"type": "system",
"data": {
"content": "You are a helpful assistant.",
"additional_kwargs": {},
"response_metadata": {},
"type": "system",
"name": null,
"id": null
}
},
{
"type": "human",
"data": {
"content": "What is RAG?",
"additional_kwargs": {},
"response_metadata": {},
"type": "human",
"name": null,
"id": null
}
},
{
"type": "ai",
"data": {
"content": "RAG means retrieval-augmented generation.",
"additional_kwargs": {},
"response_metadata": {},
"type": "ai",
"name": null,
"id": null,
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": null
}
},
{
"type": "human",
"data": {
"content": "Explain it in simple terms.",
"additional_kwargs": {},
"response_metadata": {},
"type": "human",
"name": null,
"id": null
}
},
{
"type": "human",
"data": {
"content": "What is the weather in Chicago?",
"additional_kwargs": {},
"response_metadata": {},
"type": "human",
"name": null,
"id": null
}
},
{
"type": "ai",
"data": {
"content": "",
"additional_kwargs": {},
"response_metadata": {},
"type": "ai",
"name": null,
"id": null,
"tool_calls": [
{
"name": "get_weather",
"args": {
"city": "Chicago",
"unit": "fahrenheit"
},
"id": "tool_call_001",
"type": "tool_call"
}
],
"invalid_tool_calls": [],
"usage_metadata": null
}
},
{
"type": "tool",
"data": {
"content": "{\"city\": \"Chicago\", \"unit\": \"fahrenheit\", \"temperature\": 72, \"condition\": \"Partly cloudy\"}",
"additional_kwargs": {},
"response_metadata": {},
"type": "tool",
"name": null,
"id": null,
"tool_call_id": "tool_call_001",
"artifact": null,
"status": "success"
}
},
{
"type": "ai",
"data": {
"content": "The current weather in Chicago is 72°F and partly cloudy.",
"additional_kwargs": {},
"response_metadata": {},
"type": "ai",
"name": null,
"id": null,
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": null
}
}
]
Load the Chat History Back
with file_path.open("r", encoding="utf-8") as file:
raw_messages = json.load(file)
restored_messages = messages_from_dict(raw_messages)
restored_messages
At this point, restored_messages contains real LangChain message objects again.
[SystemMessage(content='You are a helpful assistant.', additional_kwargs={}, response_metadata={}),
HumanMessage(content='What is RAG?', additional_kwargs={}, response_metadata={}),
AIMessage(content='RAG means retrieval-augmented generation.', additional_kwargs={}, response_metadata={}, tool_calls=[], invalid_tool_calls=[]),
HumanMessage(content='Explain it in simple terms.', additional_kwargs={}, response_metadata={}),
HumanMessage(content='What is the weather in Chicago?', additional_kwargs={}, response_metadata={}),
AIMessage(content='', additional_kwargs={}, response_metadata={}, tool_calls=[{'name': 'get_weather', 'args': {'city': 'Chicago', 'unit': 'fahrenheit'}, 'id': 'tool_call_001', 'type': 'tool_call'}], invalid_tool_calls=[]),
ToolMessage(content='{"city": "Chicago", "unit": "fahrenheit", "temperature": 72, "condition": "Partly cloudy"}', tool_call_id='tool_call_001'),
AIMessage(content='The current weather in Chicago is 72°F and partly cloudy.', additional_kwargs={}, response_metadata={}, tool_calls=[], invalid_tool_calls=[])]
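As a quick sanity check, the restored objects should compare equal to the originals, since messages_to_dict() preserves every field needed to rebuild them:

# Round-trip check: restored messages should match the originals field for field.
assert restored_messages == updated_messages
print("Round trip successful: restored messages match the originals.")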
Convert to OpenAI Message Format
OpenAI-compatible APIs expect a list of dictionaries with `role` and `content` fields.
openai_messages = convert_to_openai_messages(updated_messages)
print(json.dumps(openai_messages, indent=4, ensure_ascii=False))
The OpenAI-style output keeps the system message inside the main message list.
Note: This format is useful for OpenAI-compatible APIs, but Bedrock Converse does not keep the system prompt as a normal message in the messages array.
{
"role": "system",
"content": "You are a helpful assistant."
}
OpenAI Message Format
[
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "What is RAG?"
},
{
"role": "assistant",
"content": "RAG means retrieval-augmented generation."
},
{
"role": "user",
"content": "Explain it in simple terms."
},
{
"role": "user",
"content": "What is the weather in Chicago?"
},
{
"role": "assistant",
"tool_calls": [
{
"type": "function",
"id": "tool_call_001",
"function": {
"name": "get_weather",
"arguments": "{\"city\": \"Chicago\", \"unit\": \"fahrenheit\"}"
}
}
],
"content": ""
},
{
"role": "tool",
"tool_call_id": "tool_call_001",
"content": "{\"city\": \"Chicago\", \"unit\": \"fahrenheit\", \"temperature\": 72, \"condition\": \"Partly cloudy\"}"
},
{
"role": "assistant",
"content": "The current weather in Chicago is 72°F and partly cloudy."
}
]
Convert to Bedrock Converse Format
Amazon Bedrock Converse separates the system prompt into a top-level system field. Regular user and assistant messages go into messages.
bedrock_messages, system = _messages_to_bedrock(updated_messages)
bedrock_payload = {
"messages": bedrock_messages,
}
if system:
bedrock_payload["system"] = system
print(json.dumps(bedrock_payload, indent=4, ensure_ascii=False))
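To prove the round trip works end to end, convert the messages loaded from disk the same way. This also builds the restored_bedrock_payload used in the optional Converse call below:

# Convert the restored messages, mirroring the conversion above.
restored_bedrock_messages, restored_system = _messages_to_bedrock(restored_messages)

restored_bedrock_payload = {
    "messages": restored_bedrock_messages,
}
if restored_system:
    restored_bedrock_payload["system"] = restored_system

# The payload built from disk should be identical to the one built in memory.
assert restored_bedrock_payload == bedrock_payload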
Instead, the Bedrock Converse payload has this overall shape:
{
"messages": [
{
"role": "user",
"content": [
{
"text": "What is RAG?"
}
]
}
],
"system": [
{
"text": "You are a helpful assistant."
}
]
}
Bedrock Converse Message Format
With tools, Bedrock Converse represents assistant tool requests as toolUse blocks and tool responses as toolResult blocks. The internal `_messages_to_bedrock()` helper handles that conversion from LangChain `tool_calls` and `ToolMessage` objects. Note in the output below that the two consecutive human messages are merged into a single user turn, since Bedrock Converse expects user and assistant roles to alternate.
{
"messages": [
{
"role": "user",
"content": [
{
"text": "What is RAG?"
}
]
},
{
"role": "assistant",
"content": [
{
"text": "RAG means retrieval-augmented generation."
}
]
},
{
"role": "user",
"content": [
{
"text": "Explain it in simple terms.\nWhat is the weather in Chicago?"
}
]
},
{
"role": "assistant",
"content": [
{
"toolUse": {
"toolUseId": "tool_call_001",
"input": {
"city": "Chicago",
"unit": "fahrenheit"
},
"name": "get_weather"
}
}
]
},
{
"role": "user",
"content": [
{
"toolResult": {
"content": [
{
"text": "{\"city\": \"Chicago\", \"unit\": \"fahrenheit\", \"temperature\": 72, \"condition\": \"Partly cloudy\"}"
}
],
"toolUseId": "tool_call_001",
"status": "success"
}
}
]
},
{
"role": "assistant",
"content": [
{
"text": "The current weather in Chicago is 72°F and partly cloudy."
}
]
}
],
"system": [
{
"text": "You are a helpful assistant."
}
]
}
Define the Bedrock Tool Schema
To enable tool use for the model's next response, you provide a `toolConfig` in `client.converse()`, which tells Amazon Bedrock which tools are available.
tool_config = {
"tools": [
{
"toolSpec": {
"name": "get_weather",
"description": "Get the current weather for a city in the requested temperature unit.",
"inputSchema": {
"json": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The city to get the weather for, such as Chicago.",
},
"unit": {
"type": "string",
"enum": ["fahrenheit", "celsius"],
"description": "The temperature unit to return.",
},
},
"required": ["city", "unit"],
}
},
}
}
]
}
tool_config
Optional Bedrock Converse Call
Note: AWS credentials and Bedrock model access are required.
import boto3
client = boto3.client("bedrock-runtime", region_name="us-east-1")
# Send the round-tripped history back to the model.
response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    **restored_bedrock_payload,  # unpacks the "messages" and "system" keys
    toolConfig=tool_config,
inferenceConfig={
"maxTokens": 1000,
"temperature": 0.2,
},
)
print(json.dumps(response, indent=4, ensure_ascii=False))
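If the call succeeds, the model's reply arrives under output.message.content as a list of content blocks. A minimal sketch for reading it (a text block is a plain answer; a toolUse block means the model wants another tool call):

# Each content block is either {"text": ...} or {"toolUse": ...}.
for block in response["output"]["message"]["content"]:
    if "text" in block:
        print(block["text"])
    elif "toolUse" in block:
        # The model requested another tool call; run the tool and append a
        # toolResult message to the history before calling converse() again.
        tool_use = block["toolUse"]
        print(f"Tool requested: {tool_use['name']} with input {tool_use['input']}")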
Important Warning About _messages_to_bedrock
The function _messages_to_bedrock starts with an underscore:
from langchain_aws.chat_models.bedrock_converse import _messages_to_bedrock
Because this is an internal helper, langchain-aws may change it in a future release. For a tutorial, using it inline keeps the flow easy to follow. In a production app, you may want to extract this logic into your own function, as sketched below.
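For example, a thin wrapper like this (a sketch, not part of langchain-aws) isolates the internal import so a future API change only touches one place:

def to_bedrock_payload(messages):
    """Build a Bedrock Converse payload from LangChain messages.

    Wraps the internal _messages_to_bedrock helper so that if a future
    langchain-aws release changes it, only this function needs updating.
    """
    bedrock_messages, system = _messages_to_bedrock(messages)
    payload = {"messages": bedrock_messages}
    if system:
        payload["system"] = system
    return payload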
Final Takeaway
- Use `messages_to_dict(updated_messages)` when saving LangChain messages.
- Use `messages_from_dict(raw_messages)` when restoring LangChain messages.
- Use `convert_to_openai_messages(updated_messages)` when you need the OpenAI chat-completions format.
- Use `_messages_to_bedrock(updated_messages)` when you need the Amazon Bedrock Converse format.
- Use `toolConfig=tool_config` when you want Bedrock Converse to know which tools are available for the model's next response.