# Prompt + LLM
One of the most fundamental Expression Language compositions is:

`PromptTemplate` / `ChatPromptTemplate` -> `LLM` / `ChatModel` -> `OutputParser`

Almost every other chain you build will use this building block.
## PromptTemplate + LLM

A `PromptTemplate` piped into an `LLM` is a core chain that most larger chains and systems are built on.
- npm: `npm install @langchain/openai`
- Yarn: `yarn add @langchain/openai`
- pnpm: `pnpm add @langchain/openai`
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

const chain = promptTemplate.pipe(model);

const result = await chain.invoke({ topic: "bears" });

console.log(result);

/*
  AIMessage {
    content: "Why don't bears wear shoes?\n\nBecause they have bear feet!",
  }
*/
```
API Reference:

- `ChatOpenAI` from `@langchain/openai`
- `PromptTemplate` from `@langchain/core/prompts`
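Under the hood, `PromptTemplate.fromTemplate` substitutes input values into the `{topic}` placeholder before the text reaches the model. A minimal sketch of that substitution step in plain TypeScript (`formatTemplate` is a hypothetical stand-in, not LangChain's implementation):

```typescript
// Hypothetical stand-in for PromptTemplate's variable substitution:
// replaces each {name} placeholder with the matching input value.
function formatTemplate(
  template: string,
  values: Record<string, string>
): string {
  return template.replace(/\{(\w+)\}/g, (_, name) => values[name] ?? "");
}

const formatted = formatTemplate("Tell me a joke about {topic}", {
  topic: "bears",
});
console.log(formatted); // "Tell me a joke about bears"
```

The real `PromptTemplate` additionally validates that all declared input variables are supplied, but the core idea is this simple string formatting.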
Often we want to attach call arguments to the model that's passed in. To do this, runnables provide a `.bind` method. Here's how you can use it:
## Attaching stop sequences
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const prompt = PromptTemplate.fromTemplate(`Tell me a joke about {subject}`);
const model = new ChatOpenAI({});

// Stop generation at the first newline, so only the first line is returned.
const chain = prompt.pipe(model.bind({ stop: ["\n"] }));

const result = await chain.invoke({ subject: "bears" });

console.log(result);

/*
  AIMessage {
    content: "Why don't bears use cell phones?"
  }
*/
```
API Reference:

- `ChatOpenAI` from `@langchain/openai`
- `PromptTemplate` from `@langchain/core/prompts`
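Conceptually, `.bind` returns a new runnable with the given arguments pre-attached, much like partial application. A rough plain-TypeScript sketch of the pattern (the names here are illustrative, not LangChain's internals):

```typescript
type Options = { stop?: string[] };

// A toy "model" call that just echoes its input and options.
const invokeModel = (input: string, options: Options = {}) =>
  `${input} (stop=${JSON.stringify(options.stop ?? [])})`;

// bind() pre-attaches options, returning a one-argument function
// that can be composed into a chain.
const bind = (options: Options) => (input: string) =>
  invokeModel(input, options);

const boundModel = bind({ stop: ["\n"] });
console.log(boundModel("Tell me a joke about bears"));
```

The bound runnable has the same interface as the original, which is why it can be dropped into `.pipe` anywhere an unbound model could.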
## Attaching function call information
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const prompt = PromptTemplate.fromTemplate(`Tell me a joke about {subject}`);
const model = new ChatOpenAI({});

// JSON Schema describing the structured output we want from the model.
const functionSchema = [
  {
    name: "joke",
    description: "A joke",
    parameters: {
      type: "object",
      properties: {
        setup: {
          type: "string",
          description: "The setup for the joke",
        },
        punchline: {
          type: "string",
          description: "The punchline for the joke",
        },
      },
      required: ["setup", "punchline"],
    },
  },
];

// Bind the function definition and force the model to call it.
const chain = prompt.pipe(
  model.bind({
    functions: functionSchema,
    function_call: { name: "joke" },
  })
);

const result = await chain.invoke({ subject: "bears" });

console.log(result);

/*
  AIMessage {
    content: "",
    additional_kwargs: {
      function_call: {
        name: "joke",
        arguments: '{\n  "setup": "Why don\'t bears wear shoes?",\n  "punchline": "Because they have bear feet!"\n}'
      }
    }
  }
*/
```
API Reference:

- `ChatOpenAI` from `@langchain/openai`
- `PromptTemplate` from `@langchain/core/prompts`
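Note that the structured output arrives as a raw JSON string in `additional_kwargs.function_call.arguments`, so you typically parse it with `JSON.parse` before using it. A sketch using the argument string from the output above:

```typescript
// The arguments field from the example output above, as a raw JSON string.
const rawArguments =
  '{\n  "setup": "Why don\'t bears wear shoes?",\n  "punchline": "Because they have bear feet!"\n}';

const joke: { setup: string; punchline: string } = JSON.parse(rawArguments);

console.log(joke.setup); // "Why don't bears wear shoes?"
console.log(joke.punchline); // "Because they have bear feet!"
```

In a real chain you would apply this parsing step to `result.additional_kwargs.function_call.arguments` (or attach a JSON-extracting output parser as the final runnable).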
## PromptTemplate + LLM + OutputParser
We can also add an output parser to conveniently transform the raw LLM/ChatModel output into a consistent string format:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);
const outputParser = new StringOutputParser();

const chain = RunnableSequence.from([promptTemplate, model, outputParser]);

const result = await chain.invoke({ topic: "bears" });

console.log(result);

/*
  "Why don't bears wear shoes?\n\nBecause they have bear feet!"
*/
```
API Reference:

- `ChatOpenAI` from `@langchain/openai`
- `PromptTemplate` from `@langchain/core/prompts`
- `RunnableSequence` from `@langchain/core/runnables`
- `StringOutputParser` from `@langchain/core/output_parsers`
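`RunnableSequence.from` feeds each step's output into the next, like left-to-right function composition. A plain-TypeScript sketch of the pattern with toy stand-ins for the three steps (this is an illustration, not LangChain's actual implementation):

```typescript
// Left-to-right composition: each step's output becomes the next step's input.
const sequenceFrom =
  (steps: Array<(input: any) => any>) =>
  (input: any) =>
    steps.reduce((value, step) => step(value), input);

// Toy stand-ins for prompt -> model -> output parser.
const prompt = (values: { topic: string }) =>
  `Tell me a joke about ${values.topic}`;
const model = (text: string) => ({ content: text.toUpperCase() });
const outputParser = (message: { content: string }) => message.content;

const chain = sequenceFrom([prompt, model, outputParser]);
console.log(chain({ topic: "bears" })); // "TELL ME A JOKE ABOUT BEARS"
```

This is also why `promptTemplate.pipe(model).pipe(outputParser)` expresses the same chain: each `.pipe` appends one more step to the sequence.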