ChatGoogleVertexAI
LangChain.js supports Google Vertex AI chat models as an integration. It supports two different methods of authentication based on whether you're running in a Node environment or a web environment.
Setup
Node
To call Vertex AI models in Node, you'll need to install Google's official auth client as a peer dependency.
You should make sure the Vertex AI API is enabled for the relevant project and that you've authenticated to Google Cloud using one of these methods:
- You are logged into an account permitted to that project (using gcloud auth application-default login).
- You are running on a machine using a service account that is permitted to the project.
- You have downloaded the credentials for a service account that is permitted to the project and set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of this file.
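For example, on a Unix-like shell (the path below is a placeholder for your own key file):

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json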
npm install google-auth-library
yarn add google-auth-library
pnpm add google-auth-library
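If you'd like to confirm that your credentials resolve before wiring them into LangChain, a minimal sketch using google-auth-library's GoogleAuth class (with the standard Cloud Platform scope) is:

import { GoogleAuth } from "google-auth-library";

const auth = new GoogleAuth({
  scopes: "https://www.googleapis.com/auth/cloud-platform",
});

// Resolves credentials from whichever of the methods above is configured
// and reports the project they belong to.
const projectId = await auth.getProjectId();
console.log(projectId);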
Web
To call Vertex AI models in web environments (like Edge functions), you'll need to install the web-auth-library package as a peer dependency:
npm install web-auth-library
yarn add web-auth-library
pnpm add web-auth-library
Then, you'll need to add your service account credentials directly as a GOOGLE_VERTEX_AI_WEB_CREDENTIALS environment variable:
GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}
You can also pass your credentials directly in code like this:
import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai/web";

const model = new ChatGoogleVertexAI({
  authOptions: {
    credentials: {"type":"service_account","project_id":"YOUR_PROJECT-12345",...},
  },
});
Usage
Several models are available and can be specified by the model attribute in the constructor. These include:
- chat-bison (default)
- chat-bison-32k
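For example, to use the 32k-context variant instead of the default, set the model attribute (a sketch; other constructor options omitted):

import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai";

const model = new ChatGoogleVertexAI({
  model: "chat-bison-32k",
});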
The ChatGoogleVertexAI class works just like other chat-based LLMs, with a few exceptions:
- The first SystemMessage passed in is mapped to the "context" parameter that the PaLM model expects. No other SystemMessages are allowed.
- After the first SystemMessage, there must be an odd number of messages, representing a conversation between a human and the model.
- Human messages must alternate with AI messages (see the sketch after this list).
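For instance, a message sequence satisfying all three rules might look like this (the content is purely illustrative):

import { AIMessage, HumanMessage, SystemMessage } from "langchain/schema";

// One SystemMessage first (mapped to "context"), then human and AI messages
// alternating, ending on a human message so the count after the system
// message is odd.
const validMessages = [
  new SystemMessage("You are a helpful pirate."),
  new HumanMessage("Ahoy! Who be ye?"),
  new AIMessage("I be yer helpful pirate assistant!"),
  new HumanMessage("What be the capital of France?"),
];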
import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai";
// Or, if using the web entrypoint:
// import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai/web";
const model = new ChatGoogleVertexAI({
  temperature: 0.7,
});
API Reference:
- ChatGoogleVertexAI from langchain/chat_models/googlevertexai
Streaming
ChatGoogleVertexAI also supports streaming in multiple chunks for faster responses:
import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai";
// Or, if using the web entrypoint:
// import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai/web";
const model = new ChatGoogleVertexAI({
  temperature: 0.7,
});

const stream = await model.stream([
  ["system", "You are a funny assistant that answers in pirate language."],
  ["human", "What is your favorite food?"],
]);

for await (const chunk of stream) {
  console.log(chunk);
}

/*
AIMessageChunk {
  content: ' Ahoy there, matey! My favorite food be fish, cooked any way ye ',
  additional_kwargs: {}
}
AIMessageChunk {
  content: 'like!',
  additional_kwargs: {}
}
AIMessageChunk {
  content: '',
  name: undefined,
  additional_kwargs: {}
}
*/
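Note that a stream can only be consumed once. If you want the assembled text rather than the individual chunks, a sketch replacing the loop above is:

let full = "";
for await (const chunk of stream) {
  // chunk.content is the string fragment carried by this chunk.
  full += chunk.content;
}
console.log(full);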
API Reference:
- ChatGoogleVertexAI from langchain/chat_models/googlevertexai
Examples
There is also an optional examples constructor parameter that can help the model understand what an appropriate response looks like.
import { AIMessage, HumanMessage, SystemMessage } from "langchain/schema";
import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai";
// Or, if using the web entrypoint:
// import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai/web";
const examples = [
  {
    input: new HumanMessage("What is your favorite sock color?"),
    output: new AIMessage("My favorite sock color be arrrr-ange!"),
  },
];

const model = new ChatGoogleVertexAI({
  temperature: 0.7,
  examples,
});

const questions = [
  new SystemMessage(
    "You are a funny assistant that answers in pirate language."
  ),
  new HumanMessage("What is your favorite food?"),
];
// You can also use the model as part of a chain
const res = await model.invoke(questions);
console.log({ res });
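As the comment above suggests, the model can also be composed into a chain. A minimal sketch using a prompt template (the ChatPromptTemplate import path is an assumption for this version of LangChain) is:

import { ChatPromptTemplate } from "langchain/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a funny assistant that answers in pirate language."],
  ["human", "{question}"],
]);

// Piping the prompt into the model yields a runnable chain.
const chain = prompt.pipe(model);
const chainRes = await chain.invoke({ question: "What is your favorite food?" });
console.log(chainRes);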
API Reference:
- AIMessage from langchain/schema
- HumanMessage from langchain/schema
- SystemMessage from langchain/schema
- ChatGoogleVertexAI from langchain/chat_models/googlevertexai