This is a two-part article about using tool calling in LangChain.js.
In this first part, we will cover the following topics:
- What are tools and functions
- How tool calling is different from RAG
- Tool calling vs. invoking the function
- Model support for tool calling in LangChain.js
Tomorrow, I will publish the second part, detailing how to define a tool and its function schemas in LangChain.js, how to use the ToolMessage class to integrate the results into the conversation, and more.
Tool Calling and Functions in LangChain.js
What is tool calling? Tool calling is a way to provide an LLM with a set of function schemas that the model takes as input. The model decides if it needs to use one or more functions to answer a user's question.
Tool calling is often used interchangeably with function calling. The difference is that the tool calling API allows the model to request multiple function calls in a single response, and one tool can be backed by multiple functions.
So when and why do we want to use functions? Functions are helpful when you want to give the user information that was not part of the model's training data.
Let's take an example. A user wants to go for a hike during the weekend in a specific city and book a room in a hotel for an overnight stay.
This is how the initial prompt for the LLM will look:
`How will the weather be in Valencia this weekend?
I would like to go for a weekend-long hike and book one
room for Saturday night.`
To answer the above request, the model needs a tool that uses two functions for API calls:
- weatherApi(city) - checks if the weather is suitable for a hike in a given city;
- hotelsAvailability(city) - checks if any hotel rooms are available in a city.
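As a minimal sketch, the function schemas handed to the model for these two tools could look like the following. The names and descriptions mirror the example above; the exact wire format depends on the provider, and this follows the common JSON Schema style used by most chat model APIs:

```javascript
// Hypothetical function schemas passed to the model as part of tool calling.
// The model never runs these functions; it only sees their names, descriptions,
// and parameter shapes, and uses them to decide what to request.
const toolSchemas = [
  {
    name: "weatherApi",
    description: "Checks if the weather is suitable for a hike in a given city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "Name of the city" },
      },
      required: ["city"],
    },
  },
  {
    name: "hotelsAvailability",
    description: "Checks if any hotel rooms are available in a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "Name of the city" },
      },
      required: ["city"],
    },
  },
];
```

A good description matters here: it is the main signal the model uses to decide which function fits the user's request.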
How Tool Calling is Different from RAG
We need to keep in mind that tool calling is not the same as RAG - Retrieval Augmented Generation.
While both tool calling and RAG give a model the ability to use information outside its training data set, in tool calling, that information is usually dynamically retrieved from an API and is not saved in a vector store.
In the above example, the information about the weather and the available hotel rooms can't be stored in a vector store because it changes very rapidly.
Tool Calling Does Not Invoke the Actual Function
The name "tool calling" implies that the LLM performs some action on its own, but that's not the case. This is a common source of confusion.
The LLM only suggests which functions to call and the arguments each function should receive. It's up to us to run those functions with the suggested arguments.
In the above example, the result of tool calling means that the LLM will decide that it needs to call the following:
```
[
  {
    functionName: 'weatherApi',
    parameters: [{
      name: 'city',
      value: 'Valencia'
    }]
  },
  {
    functionName: 'hotelsAvailability',
    parameters: [{
      name: 'city',
      value: 'Valencia'
    }]
  }
]
```
However, the model will not actually call these functions. It's up to us to invoke the functions requested by the LLM, passing in the parameters and the values it determined.
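The dispatch step above can be sketched in plain JavaScript. The two function implementations below are hypothetical stand-ins (a real app would call external weather and hotel APIs); the point is that our code, not the model, looks up each requested function and runs it:

```javascript
// Hypothetical local implementations of the two functions.
// In a real app these would call external APIs; here they return canned values.
const functions = {
  weatherApi: (city) => `Sunny in ${city} this weekend`,
  hotelsAvailability: (city) => `3 rooms available in ${city}`,
};

// The structure the LLM returned: function names plus suggested arguments.
const toolCalls = [
  { functionName: "weatherApi", parameters: [{ name: "city", value: "Valencia" }] },
  { functionName: "hotelsAvailability", parameters: [{ name: "city", value: "Valencia" }] },
];

// It is our code that invokes each function with the model's arguments.
const results = toolCalls.map((call) => {
  const fn = functions[call.functionName];
  const args = call.parameters.map((p) => p.value);
  return { functionName: call.functionName, result: fn(...args) };
});

console.log(results);
```

The results collected this way are what you would later feed back to the model as tool messages, so it can compose the final answer.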
Tool Calling Support in LangChain.js
Note that not all the models available in LangChain.js support tool calling.
You can check out this table to see if a chat model can handle tool calls.

So, before trying to use tool calling, make sure your selected model supports this feature.
Next, be sure to check out the second part of this article, where we go through a full JavaScript example of using tool calling with LangChain.js.
Build a full trivia game app with LangChain
Learn by doing with this FREE ebook! This 35-page guide walks you through every step of building your first fully functional AI-powered app using JavaScript and LangChain.js.