We covered what tool calling is and its specifics in the first part of this article. I recommend reading that part before starting this one.
In the second part, we will see how to implement and use tool calling with LangChain.js.
Note that tool calling is only available in @langchain/core version 0.2.7 and above. You can find the official documentation here.
Let's get to work!
Use Case Description
The use case of our example is to create an AI assistant that can help the user with a request similar to the following:
`How will the weather be in Valencia this weekend?
I would like to go for a weekend-long hike and book one room for Saturday.`
If we run this prompt as it is, we will get this response from the LLM:
I'm sorry, but I am unable to provide real-time weather updates.
I recommend checking a reliable weather website or app for the
most up-to-date information on the weather in Valencia this weekend.
Additionally, I suggest booking a room in advance to ensure availability
for your weekend hike. Enjoy your trip!
It's clear that the model does not have all the data, so we need to provide it with tools to retrieve this information.
Defining Schemas and Tools in LangChain.js
To answer the above request, our LLM will need 2 tools:
- One tool to retrieve weather data for a city.
- Another tool to check if rooms are available in a city for a given day.
The first thing we need to do is define schemas for the tools. For now, each schema only contains the parameters used by the tool function, together with their descriptions.
The Zod schema library is perfect for this task.
```javascript
import { z } from "zod"

const weatherApiSchema = z.object({
  city: z.string().describe("The name of the city")
})

const hotelsAvailabilitySchema = z.object({
  city: z.string().describe("The name of the city"),
  day: z.string().describe("Day of the week to book the hotel"),
})
```
Remember that later on the LLM will use these descriptions to determine what values should be assigned to the parameters.
Based on these schemas, we can define the tools and pass them to the LLM:
```javascript
import { tool } from "@langchain/core/tools"
import { ChatOpenAI } from "@langchain/openai"

const llm = new ChatOpenAI({ temperature: 0 })

const weatherApiTool = tool(
  async ({ city }) => {
    return `The weather in ${city} is sunny, 20°C`
  },
  {
    name: "weatherApi",
    description: "Check the weather in a specified city.",
    schema: weatherApiSchema,
  }
)

const hotelsAvailabilityTool = tool(
  async ({ city, day }) => {
    return `Hotel rooms in ${city} are available for ${day}.`
  },
  {
    name: "hotelsAvailability",
    description: "Check if hotels are available in a given city.",
    schema: hotelsAvailabilitySchema,
  }
)

const llmWithTools = llm.bindTools([
  weatherApiTool,
  hotelsAvailabilityTool
])
```
Some things to notice here:
- Descriptions are crucial, as they are passed to the model along with the tool name.
- Functions must return strings.
- The tools' implementations are just mockups. In production, static strings such as `The weather in ${city} is sunny, 20°C` will be replaced with actual API calls.
- Not all models support tool calling. Be sure to use a model that supports this feature.
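To give an idea of what replacing the mockup looks like, here is a minimal sketch of a weather tool body that calls a JSON endpoint. The URL and the response shape (`condition`, `tempC`) are invented for illustration; adapt them to your weather provider. The formatting logic is kept in a pure helper so it can be tested without network access:

```javascript
// Pure helper: turns a (hypothetical) API response into the string the LLM will read.
function formatWeather(city, data) {
  return `The weather in ${city} is ${data.condition}, ${data.tempC}°C`
}

// Hypothetical endpoint -- swap in your real weather provider here.
async function fetchWeather(city) {
  const res = await fetch(
    `https://api.example-weather.com/v1/current?city=${encodeURIComponent(city)}`
  )
  if (!res.ok) {
    // Returning an error string (rather than throwing) lets the LLM
    // explain the failure to the user in its final answer.
    return `Weather data for ${city} is currently unavailable.`
  }
  const data = await res.json()
  return formatWeather(city, data)
}
```

The `fetchWeather` function would then replace the mock body passed to `tool()`.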
Invoking the Tools, the Tool Calls & Finish Reason Fields, and the ToolMessage Class
Having these tools defined, if we invoke the model, we will see that the content of the response is empty:
```javascript
let llmOutput = await llmWithTools.invoke(`
How will the weather be in Valencia this weekend?
I would like to go for a weekend-long hike and book one room for Saturday.`)

console.log(llmOutput)
// ⚠️ llmOutput.content is empty
// "content": ""
```
The LLM does not make the actual tool calls. It only suggests the function names and the arguments those functions should be invoked with. It's up to us to run the functions with the suggested arguments.
We need to check if the response_metadata.finish_reason field is set to the tool_calls value. If so, it's up to us to go through the tool_calls array and invoke the tools.
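That check can be isolated into a small helper. The function name `wantsToolCalls` is invented for this example; the field names match the shape of the message returned by `ChatOpenAI` in LangChain.js:

```javascript
// Returns true when the model stopped to request tool executions
// rather than producing a final answer.
function wantsToolCalls(llmOutput) {
  return (
    llmOutput?.response_metadata?.finish_reason === "tool_calls" &&
    Array.isArray(llmOutput.tool_calls) &&
    llmOutput.tool_calls.length > 0
  )
}
```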
In this particular case, the tool_calls[] array will contain the following values:
"tool_calls": [
{
"name": "weatherApi",
"args": {
"city": "Valencia"
},
"type": "tool_call",
"id": "call_wLNjQ1MdekA2YSzFyANSQdBt"
},
{
"name": "hotelsAvailability",
"args": {
"city": "Valencia",
"day": "Saturday"
},
"type": "tool_call",
"id": "call_yY5ZAFKwcMGqRFM4RGgSGkj2"
}
]
These tools need to be executed, and the values for args need to be passed in as parameters:
```javascript
import { ToolMessage } from "@langchain/core/messages"

// `messages` is the conversation so far: the original HumanMessage
// plus the AIMessage (llmOutput) that requested the tool calls.
const toolMapping = {
  weatherApi: weatherApiTool,
  hotelsAvailability: hotelsAvailabilityTool
}

for (const toolCall of llmOutput.tool_calls) {
  const selectedTool = toolMapping[toolCall.name]
  const toolOutput = await selectedTool.invoke(toolCall.args)
  messages.push(new ToolMessage({
    tool_call_id: toolCall.id,
    content: toolOutput
  }))
}

llmOutput = await llmWithTools.invoke(messages)
console.log(llmOutput)
```
An easy way to feed the info retrieved by a tool back to the model is to use the ToolMessage class from @langchain/core/messages.
If we take a look at the code behind the tool() factory from @langchain/core/tools, we will see that the tools it creates implement the Runnable interface; therefore, they support the standard LCEL invocation via the invoke() function.
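Conceptually, invoke() checks the arguments against the tool's schema and then calls the wrapped function. The following is a simplified, dependency-free sketch of that behavior, not LangChain's actual implementation (makeTool and demoWeatherTool are invented names, and the real library validates with Zod rather than checking parameter names):

```javascript
// Simplified stand-in for what tool() + invoke() do: check the arguments
// against the declared parameter names, then call the wrapped function.
function makeTool(func, { name, description, params }) {
  return {
    name,
    description,
    async invoke(args) {
      for (const p of params) {
        if (!(p in args)) throw new Error(`${name}: missing argument "${p}"`)
      }
      return func(args)
    },
  }
}

const demoWeatherTool = makeTool(
  async ({ city }) => `The weather in ${city} is sunny, 20°C`,
  { name: "weatherApi", description: "Check the weather.", params: ["city"] }
)

// Tools can be invoked directly, outside any LLM flow -- handy for unit tests:
demoWeatherTool.invoke({ city: "Valencia" }).then(console.log)
// prints: The weather in Valencia is sunny, 20°C
```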
Putting It All Together
This is what a full example may look like:
```javascript
import { tool } from "@langchain/core/tools"
import { z } from "zod"
import { ChatOpenAI } from "@langchain/openai"
import { HumanMessage, ToolMessage } from "@langchain/core/messages"
import * as dotenv from "dotenv"

dotenv.config()

const llm = new ChatOpenAI({ temperature: 0 })

const weatherApiSchema = z.object({
  city: z.string().describe("The name of the city")
})

const weatherApiTool = tool(
  async ({ city }) => {
    return `The weather in ${city} is sunny, 20°C`
  },
  {
    name: "weatherApi",
    description: "Check the weather in a specified city.",
    schema: weatherApiSchema,
  }
)

const hotelsAvailabilitySchema = z.object({
  city: z.string().describe("The name of the city"),
  day: z.string().describe("Day of the week to book the hotel"),
})

const hotelsAvailabilityTool = tool(
  async ({ city, day }) => {
    return `Hotel rooms in ${city} are available for ${day}.`
  },
  {
    name: "hotelsAvailability",
    description: "Check if hotels are available in a given city.",
    schema: hotelsAvailabilitySchema,
  }
)

const llmWithTools = llm.bindTools([
  weatherApiTool,
  hotelsAvailabilityTool
])

const messages = [
  new HumanMessage(`How will the weather be in Valencia this weekend?
I would like to go for a weekend-long hike and book one room for Saturday.`)
]

let llmOutput = await llmWithTools.invoke(messages)
messages.push(llmOutput)

const toolMapping = {
  weatherApi: weatherApiTool,
  hotelsAvailability: hotelsAvailabilityTool
}

for (const toolCall of llmOutput.tool_calls) {
  const selectedTool = toolMapping[toolCall.name]
  const toolOutput = await selectedTool.invoke(toolCall.args)
  messages.push(new ToolMessage({
    tool_call_id: toolCall.id,
    content: toolOutput
  }))
}

llmOutput = await llmWithTools.invoke(messages)
console.log(llmOutput)
```
At this point, the response_metadata.finish_reason field will be set to stop, and the content property of the response will look like this:
The weather in Valencia this weekend will be sunny with a temperature of 20°C. Hotel rooms in Valencia are available for Saturday, so you can book a room for your weekend hike.
As a test, we can update the response from the weatherApiTool to be `The weather in ${city} is heavy snow, -15°C`.
Now, if we run the example again, we will get:
The weather in Valencia this weekend
is heavy snow with a temperature of -15°C.
It may not be the best time to go for a hike.
Hotel rooms in Valencia are available for Saturday.
You can get the full code of this example from my GitHub.
Build a full trivia game app with LangChain
Learn by doing with this FREE ebook! This 35-page guide walks you through every step of building your first fully functional AI-powered app using JavaScript and LangChain.js.