
Python interpreter tool

danger

This tool executes arbitrary code and can potentially perform destructive actions. Make sure you trust any code passed to it!

LangChain offers an experimental tool for executing arbitrary Python code. This can be useful in combination with an LLM that can generate code to perform more powerful computations.
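Under the hood, the tool runs Python with Pyodide, a CPython build compiled to WebAssembly. You will therefore need the pyodide package available in your project, and you point the tool at its assets via the indexURL option, as shown in the example below.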

Usage

import { ChatPromptTemplate } from "langchain/prompts";
import { OpenAI } from "langchain/llms/openai";
import { PythonInterpreterTool } from "langchain/experimental/tools/pyinterpreter";
import { StringOutputParser } from "langchain/schema/output_parser";

// Prompt the model to emit only Python source for the requested task.
const prompt = ChatPromptTemplate.fromTemplate(
  `Generate python code that does {input}. Do not generate anything else.`
);

const model = new OpenAI({});

// Point the interpreter at the bundled Pyodide assets.
const interpreter = await PythonInterpreterTool.initialize({
  indexURL: "../node_modules/pyodide",
});

// Generated code flows from the model, through the string parser, into the interpreter.
const chain = prompt
  .pipe(model)
  .pipe(new StringOutputParser())
  .pipe(interpreter);

const result = await chain.invoke({
  input: `prints "Hello LangChain"`,
});

// The tool returns a JSON string; the captured stdout holds the printed text.
console.log(JSON.parse(result).stdout);
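
Note that the interpreter returns its result as a JSON string, which is why the example parses it before reading the captured stdout. You can also run Python directly, without an LLM generating the code first. Below is a minimal sketch that reuses the interpreter instance from above, assuming the tool can be invoked as a runnable with a Python source string and that the parsed output also exposes a stderr field:

// Run a fixed Python snippet directly, without an LLM in front of the tool.
const output = await interpreter.invoke(`print("2 + 2 =", 2 + 2)`);

const parsed = JSON.parse(output);
console.log(parsed.stdout); // expected to log "2 + 2 = 4"
console.log(parsed.stderr); // empty unless the snippet wrote to stderr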
