Setup:
Install @langchain/google-vertexai and set your stringified
Vertex AI credentials as an environment variable named GOOGLE_APPLICATION_CREDENTIALS.
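For example, using npm (the credential values below are placeholders for your own service-account JSON):

```shell
npm install @langchain/google-vertexai

# Stringified service-account credentials (placeholder values shown)
export GOOGLE_APPLICATION_CREDENTIALS='{"type":"service_account","project_id":"...","private_key":"..."}'
```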
Runtime args can be passed as the second argument to any of the base runnable methods: .invoke, .stream, .batch, etc.
They can also be passed via .withConfig, or as the second argument to .bindTools, as shown in the examples below:
// When calling `.withConfig`, call options should be passed via the first argument:
const llmWithArgsBound = llm.withConfig({
  stop: ["\n"],
  tools: [...],
});

// When calling `.bindTools`, call options should be passed via the second argument:
const llmWithTools = llm.bindTools(
  [...],
  { tool_choice: "auto" },
);
Instantiate

import { ChatVertexAI } from "@langchain/google-vertexai";

const llm = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
  // other params...
});
Invoking
const input = `Translate "I love programming" into French.`;

// Models also accept a list of chat messages or a formatted prompt
const result = await llm.invoke(input);
console.log(result);
AIMessageChunk {
"content": "\"J'adore programmer\" \n\nHere's why this is the best translation:\n\n* **J'adore** means \"I love\" and conveys a strong passion.\n* **Programmer** is the French verb for \"to program.\"\n\nThis translation is natural and idiomatic in French. \n",
"additional_kwargs": {},
"response_metadata": {},
"tool_calls": [],
"tool_call_chunks": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 9,
"output_tokens": 63,
"total_tokens": 72
}
}
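Streaming works the same way via `.stream(input)`, which yields AIMessageChunk values that can be merged with `.concat()`. As a rough illustration of how aggregation behaves, here is a simplified local model (not the library's implementation; real chunks also merge tool_call_chunks, additional_kwargs, and response_metadata):

```typescript
// Simplified model of streamed-chunk aggregation (illustrative only).
interface ChunkLike {
  content: string;
  usage_metadata?: {
    input_tokens: number;
    output_tokens: number;
    total_tokens: number;
  };
}

function concatChunks(a: ChunkLike, b: ChunkLike): ChunkLike {
  // Content strings append; usage counts add when both sides carry them.
  let usage = a.usage_metadata ?? b.usage_metadata;
  if (a.usage_metadata && b.usage_metadata) {
    usage = {
      input_tokens: a.usage_metadata.input_tokens + b.usage_metadata.input_tokens,
      output_tokens: a.usage_metadata.output_tokens + b.usage_metadata.output_tokens,
      total_tokens: a.usage_metadata.total_tokens + b.usage_metadata.total_tokens,
    };
  }
  return { content: a.content + b.content, usage_metadata: usage };
}

// e.g. two chunks of the French translation above:
const chunks: ChunkLike[] = [
  { content: `"J'adore` },
  { content: ` programmer"` },
];
const full = chunks.reduce(concatChunks);
console.log(full.content); // → "J'adore programmer"
```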
Bind tools

import { z } from "zod";

const GetWeather = {
  name: "GetWeather",
  description: "Get the current weather in a given location",
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  }),
};

const GetPopulation = {
  name: "GetPopulation",
  description: "Get the current population in a given location",
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  }),
};

const llmWithTools = llm.bindTools([GetWeather, GetPopulation]);
const aiMsg = await llmWithTools.invoke(
  "Which city is hotter today and which is bigger: LA or NY?"
);
console.log(aiMsg.tool_calls);
[
{
name: 'GetPopulation',
args: { location: 'New York City, NY' },
id: '33c1c1f47e2f492799c77d2800a43912',
type: 'tool_call'
}
]
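Entries in `tool_calls` can be routed to local handlers by name. A minimal dispatch sketch, where the handler bodies are hypothetical stand-ins for real weather/population lookups:

```typescript
// Hypothetical local handlers keyed by the tool names bound above.
type ToolCall = { name: string; args: { location: string } };

const handlers: Record<string, (args: { location: string }) => string> = {
  GetWeather: ({ location }) => `weather lookup for ${location}`,       // placeholder
  GetPopulation: ({ location }) => `population lookup for ${location}`, // placeholder
};

function dispatch(call: ToolCall): string {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`Unknown tool: ${call.name}`);
  return handler(call.args);
}

// Same shape as the tool_call shown in the output above:
const result = dispatch({
  name: "GetPopulation",
  args: { location: "New York City, NY" },
});
console.log(result); // → "population lookup for New York City, NY"
```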
Structured Output
import { z } from "zod";

const Joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline to the joke"),
  rating: z.number().optional().describe("How funny the joke is, from 1 to 10"),
}).describe("Joke to tell user.");

const structuredLlm = llm.withStructuredOutput(Joke, { name: "Joke" });
const jokeResult = await structuredLlm.invoke("Tell me a joke about cats");
console.log(jokeResult);
{
setup: 'What do you call a cat that loves to bowl?',
punchline: 'An alley cat!'
}
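The result is a plain object matching the Joke schema. If you want a dependency-free runtime check of that shape elsewhere in your code, a type guard can mirror it; the `isJoke` helper below is illustrative, not part of LangChain (withStructuredOutput already enforces the schema):

```typescript
// Illustrative runtime check mirroring the Joke schema above.
interface Joke {
  setup: string;
  punchline: string;
  rating?: number;
}

function isJoke(x: unknown): x is Joke {
  if (typeof x !== "object" || x === null) return false;
  const j = x as Record<string, unknown>;
  return (
    typeof j.setup === "string" &&
    typeof j.punchline === "string" &&
    (j.rating === undefined || typeof j.rating === "number")
  );
}

console.log(isJoke({
  setup: "What do you call a cat that loves to bowl?",
  punchline: "An alley cat!",
})); // → true
```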
Integration with Google Vertex AI chat models.