# Handle parsing errors
Occasionally the LLM cannot determine what step to take because its output is not in the correct format to be handled by the output parser. By default, the agent errors in this case. You can control this behavior by passing `handleParsingErrors` when initializing the agent executor. This field can be a boolean, a string, or a function:
- Passing `true` will pass a generic error back to the LLM along with the parsing error text for a retry.
- Passing a string will return that value along with the parsing error text. This is helpful to steer the LLM in the right direction.
- Passing a function that takes an `OutputParserException` as its single argument allows you to run code in response to the error and return whatever string you'd like.
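Conceptually, the executor resolves all three forms into a single retry message sent back to the LLM. A minimal standalone sketch of that dispatch (the type and function names here are illustrative, not LangChain's internal API):

```typescript
// Illustrative only: LangChain's internals differ, but the dispatch idea is the same.
type ParsingErrorHandler = boolean | string | ((e: Error) => string);

function resolveRetryMessage(handler: ParsingErrorHandler, e: Error): string {
  if (typeof handler === "function") {
    // Function form: your code decides exactly what to send back to the LLM.
    return handler(e);
  }
  if (typeof handler === "string") {
    // String form: your custom guidance plus the raw parsing error text.
    return `${handler}\n${e.message}`;
  }
  // Boolean `true` form: a generic error message plus the parsing error text.
  return `Could not parse LLM output: ${e.message}`;
}
```

The function form is the most flexible, since it can log the error or vary the message based on its contents.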
Here's an example where the model initially tries to set `"Reminder"` as the task type instead of an allowed value:
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { z } from "zod";
import type { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { DynamicStructuredTool } from "@langchain/core/tools";

const model = new ChatOpenAI({ temperature: 0.1 });

const tools = [
  new DynamicStructuredTool({
    name: "task-scheduler",
    description: "Schedules tasks",
    schema: z
      .object({
        tasks: z
          .array(
            z.object({
              title: z
                .string()
                .describe("The title of the tasks, reminders and alerts"),
              due_date: z
                .string()
                .describe("Due date. Must be a valid JavaScript date string"),
              task_type: z
                .enum([
                  "Call",
                  "Message",
                  "Todo",
                  "In-Person Meeting",
                  "Email",
                  "Mail",
                  "Text",
                  "Open House",
                ])
                .describe("The type of task"),
            })
          )
          .describe("The JSON for task, reminder or alert to create"),
      })
      .describe("JSON definition for creating tasks, reminders and alerts"),
    func: async (input: { tasks: object }) => JSON.stringify(input),
  }),
];

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/openai-functions-agent
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);

const agent = await createOpenAIFunctionsAgent({
  llm: model,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: true,
  handleParsingErrors:
    "Please try again, paying close attention to the allowed enum values",
});

console.log("Loaded agent.");

const input = `Set a reminder to renew our online property ads next week.`;

console.log(`Executing with input "${input}"...`);

const result = await agentExecutor.invoke({ input });

console.log({ result });

/*
  {
    result: {
      input: 'Set a reminder to renew our online property ads next week.',
      output: 'I have set a reminder for you to renew your online property ads on October 10th, 2022.'
    }
  }
*/
```
API Reference:
- `ChatPromptTemplate` from `@langchain/core/prompts`
- `ChatOpenAI` from `@langchain/openai`
- `AgentExecutor` from `langchain/agents`
- `createOpenAIFunctionsAgent` from `langchain/agents`
- `pull` from `langchain/hub`
- `DynamicStructuredTool` from `@langchain/core/tools`
This is what the resulting trace looks like (note that the LLM retries before correctly choosing a matching enum value):
https://smith.langchain.com/public/b00cede1-4aca-49de-896f-921d34a0b756/r
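The example above uses the string form of `handleParsingErrors`. The function form lets you inspect the error before deciding what to feed back to the model. A minimal sketch of such a handler (the message text and handler name are illustrative, and the error is typed loosely as `Error` here rather than LangChain's `OutputParserException`):

```typescript
// Illustrative handler for the function form of handleParsingErrors.
// It receives the parsing error and returns the string fed back to the LLM.
const onParsingError = (e: Error): string =>
  `Could not parse your last output (${e.message}). ` +
  `Please respond again using only the allowed task_type enum values.`;

// It would then be passed when constructing the executor, e.g.:
// new AgentExecutor({ agent, tools, handleParsingErrors: onParsingError });
```

Because the handler sees the actual error, it can also log it or tailor the retry message based on what went wrong.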