Let's take a look at how to use LLM classifiers in LangChain.js to dynamically select prompts based on the query type. This will allow us to give more context-aware responses to the user's questions.
Based on the input we receive from users, we may want to call the LLM with different prompts.
For example, we may want to use an "AI specialist", with improved topic context, based on the query type.
Check out the prompts below:
const carrotsTemplate = `You are an expert in carrots.
Always answer questions starting with "As Bugs Bunny says:".
Respond to the following question:
{question}`
const lasagnaTemplate = `You are an expert in lasagna.
Always answer questions starting with "As Garfield says:".
Respond to the following question:
{question}`
const generalTemplate = `You are a helpful assistant.
Respond to the following question:
{question}`
// if {question} is about carrots, use carrotsTemplate
// if {question} is about lasagna, use lasagnaTemplate
// otherwise, use generalTemplate
Each of these prompts needs to be selected based on the main topic of the incoming question.
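Before bringing in an LLM, it helps to see the selection logic we are aiming for. The sketch below fakes it with naive keyword matching (the `pickTemplate` helper is hypothetical, just for illustration — it is not part of the final code):

```javascript
// Naive keyword-based routing: a hypothetical stand-in for the
// LLM classifier we will build next.
const carrotsTemplate = `You are an expert in carrots. ...`
const lasagnaTemplate = `You are an expert in lasagna. ...`
const generalTemplate = `You are a helpful assistant. ...`

const pickTemplate = (question) => {
  const q = question.toLowerCase()
  if (q.includes("carrot")) return carrotsTemplate
  if (q.includes("lasagna")) return lasagnaTemplate
  return generalTemplate
}

console.log(pickTemplate("What makes a good carrot salad?"))
```

Keyword matching like this breaks on synonyms, typos, and rephrasings ("orange root vegetable"), which is exactly why we'll ask an LLM to do the classification instead.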
Making the LLM Classifier Function
The first thing we need to create is a classifier function.
This JavaScript function will later serve as the routing point for the chain:
const TOPIC_CARROTS = `Carrots`
const TOPIC_LASAGNA = `Lasagna`
const classificationChain = PromptTemplate.fromTemplate(
`You are good at classifying a question.
Given the user question below, classify it as either being about:
- ${TOPIC_CARROTS}
- ${TOPIC_LASAGNA}
- or "Other".
Do not respond with more than one word.
<question>
{question}
</question>
Classification:`
).pipe(model).pipe(stringParser)
const promptRouter = async (input) => {
  const type = (await classificationChain.invoke(input)).trim()
  if (type === TOPIC_CARROTS)
    return PromptTemplate.fromTemplate(carrotsTemplate)
  if (type === TOPIC_LASAGNA)
    return PromptTemplate.fromTemplate(lasagnaTemplate)
  return PromptTemplate.fromTemplate(generalTemplate)
}
There are multiple ways we can do the actual classification, but one of the easiest approaches is just to ask an LLM to classify the input. This is what we are doing in the above code via the classificationChain.
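One practical caveat with this approach: the raw string the model returns is not guaranteed to match "Carrots" exactly — it may carry extra whitespace, different casing, or a trailing period. A small normalization step makes the comparison more forgiving. The `normalizeTopic` helper below is a hypothetical sketch in plain JavaScript, not part of the LangChain API:

```javascript
const TOPIC_CARROTS = `Carrots`
const TOPIC_LASAGNA = `Lasagna`

// Map a raw classifier answer onto one of the known topics,
// tolerating whitespace, casing, and stray punctuation.
const normalizeTopic = (raw) => {
  const cleaned = raw.trim().replace(/[."']/g, "").toLowerCase()
  if (cleaned === TOPIC_CARROTS.toLowerCase()) return TOPIC_CARROTS
  if (cleaned === TOPIC_LASAGNA.toLowerCase()) return TOPIC_LASAGNA
  return "Other"
}
```

The promptRouter could then compare `normalizeTopic(type)` against the topic constants instead of comparing the raw string directly.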
Using Routing in LangChain to Call Different Prompts Based on the Query Type
Now that we have the promptRouter ready, we can easily wrap this JavaScript function in a RunnableLambda and add it to a chain.
Below is the full code of the example:
import { ChatOpenAI } from "@langchain/openai"
import { StringOutputParser } from "@langchain/core/output_parsers"
import { PromptTemplate } from "@langchain/core/prompts"
import {
  RunnableLambda,
  RunnablePassthrough,
} from "@langchain/core/runnables"
import * as dotenv from "dotenv"
dotenv.config()
const model = new ChatOpenAI({})
const stringParser = new StringOutputParser()
const carrotsTemplate = `You are an expert in carrots.
Always answer questions starting with "As Bugs Bunny says:".
Respond to the following question:
{question}`
const lasagnaTemplate = `You are an expert in lasagna.
Always answer questions starting with "As Garfield says:".
Respond to the following question:
{question}`
const generalTemplate = `You are a helpful assistant.
Respond to the following question:
{question}`
const TOPIC_CARROTS = `Carrots`
const TOPIC_LASAGNA = `Lasagna`
const classificationChain = PromptTemplate.fromTemplate(
`You are good at classifying a question.
Given the user question below, classify it as either being about:
- ${TOPIC_CARROTS}
- ${TOPIC_LASAGNA}
- or "Other".
Do not respond with more than one word.
<question>
{question}
</question>
Classification:`
).pipe(model).pipe(stringParser)
const promptRouter = async (input) => {
  const type = (await classificationChain.invoke(input)).trim()
  if (type === TOPIC_CARROTS)
    return PromptTemplate.fromTemplate(carrotsTemplate)
  if (type === TOPIC_LASAGNA)
    return PromptTemplate.fromTemplate(lasagnaTemplate)
  return PromptTemplate.fromTemplate(generalTemplate)
}
const chain = new RunnablePassthrough()
.pipe(RunnableLambda.from(promptRouter))
.pipe(model)
.pipe(stringParser)
const result = await chain.invoke(
  { question: `What makes a good lasagna?` }
)
console.log(result)
The above code will route the question to the lasagnaTemplate, so the model's answer will start with "As Garfield says:".
On the other hand, if we call the chain with this question:
chain.invoke({question: `What makes a good carrot salad?`})
The promptRouter will select the carrotsTemplate, and the output will start with "As Bugs Bunny says:".
By the way, you can also use RunnableBranch to do routing in LangChain.js.
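As a rough, untested sketch (assuming the same model, stringParser, classificationChain, topic constants, and templates defined in the example above), the promptRouter function could be swapped for a RunnableBranch, which takes condition/runnable pairs plus a default runnable:

```javascript
import { RunnableBranch, RunnableLambda } from "@langchain/core/runnables"

// Pick a prompt based on a pre-computed "topic" field on the input.
// Assumes model, stringParser, classificationChain, TOPIC_CARROTS,
// TOPIC_LASAGNA, and the three templates are defined as shown earlier.
const branch = RunnableBranch.from([
  [
    (x) => x.topic.includes(TOPIC_CARROTS),
    PromptTemplate.fromTemplate(carrotsTemplate),
  ],
  [
    (x) => x.topic.includes(TOPIC_LASAGNA),
    PromptTemplate.fromTemplate(lasagnaTemplate),
  ],
  PromptTemplate.fromTemplate(generalTemplate), // default branch
])

// First classify, then route, then call the model.
const branchChain = RunnableLambda.from(async (input) => ({
  topic: await classificationChain.invoke(input),
  question: input.question,
}))
  .pipe(branch)
  .pipe(model)
  .pipe(stringParser)
```

The trade-off is mostly stylistic: the RunnableBranch version keeps the conditions declarative, while the promptRouter function gives you a plain JavaScript function that is easier to debug and unit-test.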
And there you have it! This is how we can add simple routing to LangChain using an LLM-powered classifier in JavaScript. The full code of the example is available on my GitHub.
Build a full trivia game app with LangChain
Learn by doing with this FREE ebook! This 35-page guide walks you through every step of building your first fully functional AI-powered app using JavaScript and LangChain.js.