Handle parsing errors
Occasionally the LLM cannot determine what step to take because its output is not in a format that the output parser can handle.
In this case, by default the agent errors. You can control this behavior by passing `handleParsingErrors` when initializing the agent executor. This field can be a boolean, a string, or a function:
- Passing `true` will pass a generic error back to the LLM along with the parsing error text for a retry.
- Passing a string will return that value along with the parsing error text. This is helpful to steer the LLM in the right direction.
- Passing a function that takes an `OutputParserException` as a single argument allows you to run code in response to an error and return whatever string you'd like (see the sketch after this list).
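Here's a minimal sketch of the function form. It assumes `OutputParserException` is exported from `langchain/schema/output_parser` (the exact path may vary between versions) and reuses the `tools` and `model` defined in the full example below:

```typescript
import { OutputParserException } from "langchain/schema/output_parser";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

// Sketch only: `tools` and `model` are assumed to be defined as in the
// full example below.
const executorWithCustomHandler = await initializeAgentExecutorWithOptions(
  tools,
  model,
  {
    agentType: "openai-functions",
    // The function receives the OutputParserException and returns the
    // string that is passed back to the LLM for its retry.
    handleParsingErrors: (e: OutputParserException) =>
      `Could not parse your response: ${e.message}. Respond using only the allowed enum values.`,
  }
);
```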
Here's an example where the model initially tries to set "Reminder"
as the task type instead of an allowed value:
```typescript
import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { DynamicStructuredTool } from "langchain/tools";

const model = new ChatOpenAI({ temperature: 0.1 });

const tools = [
  new DynamicStructuredTool({
    name: "task-scheduler",
    description: "Schedules tasks",
    schema: z
      .object({
        tasks: z
          .array(
            z.object({
              title: z
                .string()
                .describe("The title of the tasks, reminders and alerts"),
              due_date: z
                .string()
                .describe("Due date. Must be a valid JavaScript date string"),
              task_type: z
                .enum([
                  "Call",
                  "Message",
                  "Todo",
                  "In-Person Meeting",
                  "Email",
                  "Mail",
                  "Text",
                  "Open House",
                ])
                .describe("The type of task"),
            })
          )
          .describe("The JSON for task, reminder or alert to create"),
      })
      .describe("JSON definition for creating tasks, reminders and alerts"),
    func: async (input: { tasks: object }) => JSON.stringify(input),
  }),
];

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "openai-functions",
  verbose: true,
  handleParsingErrors:
    "Please try again, paying close attention to the allowed enum values",
});
console.log("Loaded agent.");

const input = `Set a reminder to renew our online property ads next week.`;

console.log(`Executing with input "${input}"...`);

const result = await executor.invoke({ input });

console.log({ result });

/*
  {
    result: {
      output: 'I have set a reminder for you to renew your online property ads on October 10th, 2022.'
    }
  }
*/
```
API Reference:
- ChatOpenAI from `langchain/chat_models/openai`
- initializeAgentExecutorWithOptions from `langchain/agents`
- DynamicStructuredTool from `langchain/tools`
This is what the resulting trace looks like. Note that the LLM retries before correctly choosing a matching enum value:

https://smith.langchain.com/public/b00cede1-4aca-49de-896f-921d34a0b756/r