
LangChain Js – an intro to prompt templates, partial templates and composition

LangChain is designed to simplify the creation of applications built on top of LLMs. Its scope covers pretty much any task you might want to accomplish with an LLM.

Given that LLMs have text as their main input and output, it's natural that LangChain has a core module dedicated to formatting and interacting with prompts. In essence, LLMs are giant functions that map text to text.

LangChain supports both JavaScript and Python. In this article, we will use JavaScript as the language for our examples.
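To follow along, you'll need the langchain package installed in your project (for example, npm install langchain). Note that import paths have shifted between LangChain releases; the examples below use the langchain/prompts entry point.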

The PromptTemplate class in LangChain Js

The core class for handling input prompts in LangChain is the PromptTemplate class.

The PromptTemplate allows you to create templates that can be dynamically filled in with data.

You can think of LangChain's PromptTemplate a bit like JavaScript's template literals or Lodash's template function, but better adapted to interacting with an LLM.

Here's a very simple JS example:

import { PromptTemplate } from 'langchain/prompts'

const template = 'Tell me a story about {characters}.'

// `fromTemplate` infers the `inputVariables` from the template string
const promptFromTemplate = PromptTemplate.fromTemplate(template)

// the constructor is used to define the `inputVariables`
const promptViaConstructor = new PromptTemplate({
    inputVariables: ['characters'],
    template
})

const formattedTemplate = await promptFromTemplate.format({
    characters: 'cats'
})

console.log(formattedTemplate) // Tell me a story about cats.
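
To actually run the prompt, you would pass the formatted string to a model. Here's a minimal sketch, assuming the @langchain/openai package is installed and an OPENAI_API_KEY environment variable is set (neither is part of the example above, and the import path can vary between LangChain versions):

import { ChatOpenAI } from '@langchain/openai'

// assumes OPENAI_API_KEY is available in the environment
const model = new ChatOpenAI({ temperature: 0 })

// chat models accept a plain string as input
const response = await model.invoke(formattedTemplate)

console.log(response.content)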

Partial Templates in LangChain Js

Sometimes you'll have some of the values for a prompt early on and only receive the rest later. Keeping those early values around until the prompt is ready to be formatted can get messy, but LangChain makes it simple.

This is where the partial() method of PromptTemplate comes into play. It also accepts functions, which is handy for values that should be computed at format time, like today's date.

Let's see a JavaScript example of its usage:

import { PromptTemplate } from 'langchain/prompts'

const getDateNow = () => new Date().toLocaleDateString()

const template =
    'I initially visited {country1}, but as of {now}, I visited {country2}, as well.'

const prompt = new PromptTemplate({
    template,
    inputVariables: ['now', 'country1', 'country2']
})

// `now` is bound to a function, which is called when the prompt is formatted
const partial = await prompt.partial({
    now: getDateNow,
    country1: '🇪🇸 Spain'
})

const formatted = await partial.format({ country2: '🇺🇸 USA' })

console.log(formatted)
// I initially visited 🇪🇸 Spain, but as of 5/8/2024,
// I visited 🇺🇸 USA, as well.
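
If you already know some of the values when you create the template, you can also supply them up front through the partialVariables option of the PromptTemplate constructor instead of calling partial() later. A minimal sketch, reusing the template above (note that the partial variables are not repeated in inputVariables; the option is available in recent LangChain versions):

const promptWithPartials = new PromptTemplate({
    template,
    inputVariables: ['now', 'country2'],
    // `country1` is fixed at construction time
    partialVariables: { country1: '🇪🇸 Spain' }
})

const result = await promptWithPartials.format({
    now: getDateNow(),
    country2: '🇺🇸 USA'
})

console.log(result)
// I initially visited 🇪🇸 Spain, but as of 5/8/2024, I visited 🇺🇸 USA, as well.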

The PipelinePromptTemplate class and template composition

Most advanced prompts have multiple sections that need to be brought together. With LangChain, we can compose more than one PromptTemplate into a single prompt object.

This is very useful for few-shot prompting, when we are giving examples to the LLM.

To implement this behavior we will need the PipelinePromptTemplate class:

import { PipelinePromptTemplate, PromptTemplate } from 'langchain/prompts'

const fullTemplate = `{intro}

    {example}

    {actual}`

const fullPrompt = PromptTemplate.fromTemplate(fullTemplate)

const introTemplate = `You are impersonating {who}`

const exampleTemplate = `Here's an example Q&A:
    Q: {exampleQuestion}
    A: {exampleAnswer}
`

const actualTemplate = `Here's the next Q&A:
    Q: {actualQuestion}
    A:
`

const introPrompt = PromptTemplate.fromTemplate(introTemplate)
const examplePrompt = PromptTemplate.fromTemplate(exampleTemplate)
const actualPrompt = PromptTemplate.fromTemplate(actualTemplate)

const composedPrompt = new PipelinePromptTemplate({
    pipelinePrompts: [
        { name: 'intro', prompt: introPrompt },
        { name: 'example', prompt: examplePrompt },
        { name: 'actual', prompt: actualPrompt },
    ],
    finalPrompt: fullPrompt,
})

const finalPrompt = await composedPrompt.format({
    who: 'Luke Skywalker',
    exampleQuestion: 'What is your main weapon?',
    exampleAnswer: "It's the lightsaber.",
    actualQuestion: 'Who is your father?'
})

console.log(finalPrompt)

And the output of the above code will be:

/*
    You are impersonating Luke Skywalker

    Here's an example Q&A:
      Q: What is your main weapon?
      A: It's the lightsaber.

    Here's the next Q&A:
      Q: Who is your father?
      A: 
*/
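
From here, the composed prompt can be used like any other prompt template. As a minimal sketch, here is how it could be wired into a small chain, assuming the @langchain/openai package and an OPENAI_API_KEY (and a LangChain version where prompt templates expose pipe()):

import { ChatOpenAI } from '@langchain/openai'

const model = new ChatOpenAI({ temperature: 0 })

// the composed prompt pipes into the model like any single PromptTemplate
const chain = composedPrompt.pipe(model)

const answer = await chain.invoke({
    who: 'Luke Skywalker',
    exampleQuestion: 'What is your main weapon?',
    exampleAnswer: "It's the lightsaber.",
    actualQuestion: 'Who is your father?'
})

console.log(answer.content)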

Moving forward, you can take a look at how to use the StructuredOutputParser with a Zod schema in LangChain, and check out this full example of a first app built with LangChain and JavaScript.

📖 50 Javascript, React and NextJs Projects

Learn by doing with this FREE ebook! Not sure what to build? Dive in with 50 projects with project briefs and wireframes! Choose from 8 project categories and get started right away.


