# JavaScript / TypeScript API Reference

API reference documentation for LangChain JavaScript / TypeScript packages.

## Packages

- [LangChain](/javascript/langchain)
  # 🦜️🔗 LangChain.js

![npm](https://img.shields.io/npm/dm/langchain) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain)](https://x.com/langchain)

LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.

**Documentation**: To learn more about LangChain, check out [the docs](https://docs.langchain.com/oss/javascript/langchain/overview).

If you're looking for more advanced customization or agent orchestration, check out [LangGraph.js](https://langchain-ai.github.io/langgraphjs/), our framework for building agents and controllable workflows.

> [!NOTE]
> Looking for the Python version? Check out [LangChain](https://github.com/langchain-ai/langchain).

To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com), our unified developer platform for building, testing, and monitoring LLM applications.

## ⚡️ Quick Install

You can use npm, pnpm, or yarn to install LangChain.js:

```sh
npm install -S langchain
# or
pnpm install langchain
# or
yarn add langchain
```

## 🚀 Why use LangChain?

LangChain helps developers build applications powered by LLMs through a standard interface for agents, models, embeddings, vector stores, and more.

Use LangChain for:

- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain’s vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application’s needs. As the industry frontier evolves, adapt quickly — LangChain’s abstractions keep you moving without losing momentum.

## 📦 LangChain's ecosystem

- [LangSmith](https://www.langchain.com/langsmith) - Unified developer platform for building, testing, and monitoring LLM applications. With LangSmith, you can debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility into production, and deploy agents with confidence.
- [LangGraph](https://docs.langchain.com/oss/javascript/langgraph/overview) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows — and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [Deep Agents](https://docs.langchain.com/oss/javascript/deepagents/overview) - Build sophisticated "deep" agents that go beyond simple tool-calling loops. Deep Agents combines planning tools, sub-agent spawning, file system access, and detailed prompts to handle complex, multi-step tasks — inspired by applications like Claude Code and Deep Research.

## 🌐 Supported Environments

LangChain.js is written in TypeScript and can be used in:

- Node.js (ESM and CommonJS) - 20.x, 22.x, 24.x
- Cloudflare Workers
- Vercel / Next.js (Browser, Serverless and Edge functions)
- Supabase Edge Functions
- Browser
- Deno
- Bun

## 📖 Additional Resources

- [Getting started](https://docs.langchain.com/oss/javascript/langchain/overview): Installation, setting up the environment, simple examples
- [Learn](https://docs.langchain.com/oss/javascript/langchain/learn): Learn about the core concepts of LangChain.
- [LangChain Forum](https://forum.langchain.com): Connect with the community and share all of your technical questions, ideas, and feedback.
- [Chat LangChain](https://chat.langchain.com): Ask questions & chat with our documentation.

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see [here](https://github.com/langchain-ai/langchainjs/blob/main/CONTRIBUTING.md).

Please report any security issues or concerns following our [security guidelines](https://github.com/langchain-ai/.github/blob/main/SECURITY.md).
- [LangChain Core](/javascript/langchain-core)
  # 🦜🍎️ @langchain/core

![npm](https://img.shields.io/npm/dm/@langchain/core) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain)](https://x.com/langchain)

`@langchain/core` contains the core abstractions and schemas of LangChain.js, including base classes for language models,
chat models, vectorstores, retrievers, and runnables.

## 💾 Quick Install

```bash
pnpm install @langchain/core
```

## 🤔 What is this?

`@langchain/core` contains the base abstractions that power the rest of the LangChain ecosystem.
These abstractions are designed to be as modular and simple as possible.
Examples of these abstractions include those for language models, document loaders, embedding models, vectorstores, retrievers, and more.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.

For example, you can install other provider-specific packages like this:

```bash
pnpm install @langchain/openai
```

And use them as follows:

```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate(
  `Answer the following question to the best of your ability:\n{question}`
);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0.8,
});

const outputParser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

const stream = await chain.stream({
  question: "Why is the sky blue?",
});

for await (const chunk of stream) {
  console.log(chunk);
}

/*
The
 sky
 appears
 blue
 because
 of
 a
 phenomenon
 known
 as
 Ray
leigh
 scattering
*/
```

Note that for compatibility, all used LangChain packages (including the base LangChain package, which itself depends on core!) must share the same version of `@langchain/core`.
This means that you may need to install/resolve a specific version of `@langchain/core` that matches the dependencies of your used packages.
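If your dependency tree ends up with multiple copies of `@langchain/core`, you can inspect it with `npm ls @langchain/core` and force a single resolution. For example, npm supports an `overrides` field in `package.json` (yarn uses `resolutions`, pnpm uses `pnpm.overrides`); the version below is illustrative, so match it to the version your other LangChain packages expect:

```json
{
  "overrides": {
    "@langchain/core": "0.3.0"
  }
}
```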

## 📦 Creating your own package

Other LangChain packages should add this package as a dependency and extend the classes within.
For an example, see the [@langchain/anthropic](https://github.com/langchain-ai/langchainjs/tree/main/libs/providers/langchain-anthropic) package in this repo.

Because all used packages must share the same version of core, packages should never directly depend on `@langchain/core`. Instead they should have core as a peer dependency and a dev dependency. We suggest using a tilde dependency to allow for different (backwards-compatible) patch versions:

```json
{
  "name": "@langchain/anthropic",
  "version": "0.0.3",
  "description": "Anthropic integrations for LangChain.js",
  "type": "module",
  "author": "LangChain",
  "license": "MIT",
  "dependencies": {
    "@anthropic-ai/sdk": "^0.10.0"
  },
  "peerDependencies": {
    "@langchain/core": "~0.3.0"
  },
  "devDependencies": {
    "@langchain/core": "~0.3.0"
  }
}
```

We suggest making all packages cross-compatible with ESM and CJS using a build step like the one in
[@langchain/anthropic](https://github.com/langchain-ai/langchainjs/tree/main/libs/providers/langchain-anthropic), then running `pnpm build` before running `npm publish`.

## 💁 Contributing

Because `@langchain/core` is a low-level package whose abstractions will change infrequently, most contributions should be made in the higher-level LangChain package.

Bugfixes or suggestions should be made using the same guidelines as the main package.
See [here](https://github.com/langchain-ai/langchainjs/tree/main/CONTRIBUTING.md) for detailed information.

Please report any security issues or concerns following our [security guidelines](https://github.com/langchain-ai/.github/blob/main/SECURITY.md).
- [Text Splitters](/javascript/langchain-textsplitters)
  # 🦜✂️ @langchain/textsplitters

This package contains various implementations of LangChain.js text splitters, most commonly used as part of retrieval-augmented generation (RAG) pipelines.

## Installation

```bash npm2yarn
npm install @langchain/textsplitters @langchain/core
```
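As a rough illustration of what `chunkSize` and `chunkOverlap` mean, here is a simplified sketch. This is not the library's implementation: the package's `RecursiveCharacterTextSplitter` splits on a hierarchy of separators (paragraphs, then lines, then words) before falling back to character windows like this.

```typescript
// Simplified sketch of fixed-size chunking with overlap.
// Each chunk starts (chunkSize - chunkOverlap) characters after the
// previous one, so adjacent chunks share chunkOverlap characters.
function naiveSplit(text: string, chunkSize: number, chunkOverlap: number): string[] {
  if (chunkOverlap >= chunkSize) {
    throw new Error("chunkOverlap must be smaller than chunkSize");
  }
  const chunks: string[] = [];
  const step = chunkSize - chunkOverlap;
  for (let i = 0; i < text.length; i += step) {
    chunks.push(text.slice(i, i + chunkSize));
    if (i + chunkSize >= text.length) break;
  }
  return chunks;
}

console.log(naiveSplit("abcdefghij", 4, 2)); // ["abcd", "cdef", "efgh", "ghij"]
```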

## Development

To develop the `@langchain/textsplitters` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/textsplitters
```

### Run tests

Test files should live in a `tests/` directory inside the `src/` folder. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
$ pnpm test
$ pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [MCP Adapters](/javascript/langchain-mcp-adapters)
  # LangChain.js MCP Adapters

[![npm version](https://img.shields.io/npm/v/@langchain/mcp-adapters.svg)](https://www.npmjs.com/package/@langchain/mcp-adapters)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

This library provides a lightweight wrapper that makes [Anthropic Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) tools compatible with [LangChain.js](https://github.com/langchain-ai/langchainjs) and [LangGraph.js](https://github.com/langchain-ai/langgraphjs).

## Features

- 🔌 **Transport Options**
  - Connect to MCP servers via stdio (local) or Streamable HTTP (remote)
    - Streamable HTTP automatically falls back to SSE for compatibility with legacy MCP server implementations
  - Support for custom headers in SSE connections for authentication
  - Configurable reconnection strategies for both transport types

- 🔄 **Multi-Server Management**
  - Connect to multiple MCP servers simultaneously
  - Auto-organize tools by server or access them as a flattened collection

- 🧩 **Agent Integration**
  - Compatible with LangChain.js and LangGraph.js
  - Optimized for OpenAI, Anthropic, and Google models
  - Supports rich content responses including text, images, and embedded resources

- 🛠️ **Development Features**
  - Uses `debug` package for debug logging
  - Flexible configuration options
  - Robust error handling

## Installation

```bash
npm install @langchain/mcp-adapters
```

# Example: Connect to one or more servers via `MultiServerMCPClient`

The library allows you to connect to one or more MCP servers and load tools from them, without needing to manage your own MCP client instances.

```ts
import { createAgent } from "langchain";
import { ChatOpenAI } from "@langchain/openai";
import { MultiServerMCPClient } from "@langchain/mcp-adapters";

// Create client and connect to server
const client = new MultiServerMCPClient({
  // Global tool configuration options
  // Whether to throw on errors if a tool fails to load (optional, default: true)
  throwOnLoadError: true,
  // Whether to prefix tool names with the server name (optional, default: false)
  prefixToolNameWithServerName: false,
  // Optional additional prefix for tool names (optional, default: "")
  additionalToolNamePrefix: "",

  // Use standardized content block format in tool outputs
  useStandardContentBlocks: true,

  // Behavior when a server fails to connect: "throw" (default) or "ignore"
  onConnectionError: "ignore",

  // Server configuration
  mcpServers: {
    // adds a STDIO connection to a server named "math"
    math: {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-math"],
      // Restart configuration for stdio transport
      restart: {
        enabled: true,
        maxAttempts: 3,
        delayMs: 1000,
      },
    },

    // here's a filesystem server
    filesystem: {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem"],
    },

    // Streamable HTTP transport example, with auth headers and automatic SSE fallback disabled (defaults to enabled)
    weather: {
      url: "https://example.com/weather/mcp",
      headers: {
        Authorization: "Bearer token123",
      },
      automaticSSEFallback: false,
    },

    // OAuth 2.0 authentication (recommended for secure servers)
    "oauth-protected-server": {
      url: "https://protected.example.com/mcp",
      authProvider: new MyOAuthProvider({
        // Your OAuth provider implementation
        redirectUrl: "https://myapp.com/oauth/callback",
        clientMetadata: {
          redirect_uris: ["https://myapp.com/oauth/callback"],
          client_name: "My MCP Client",
          scope: "mcp:read mcp:write"
        }
      }),
      // Can still include custom headers for non-auth purposes
      headers: {
        "User-Agent": "My-MCP-Client/1.0"
      }
    },

    // how to force SSE, for old servers that are known to only support SSE (streamable HTTP falls back automatically if unsure)
    github: {
      transport: "sse", // also works with "type" field instead of "transport"
      url: "https://example.com/mcp",
      reconnect: {
        enabled: true,
        maxAttempts: 5,
        delayMs: 2000,
      },
    },
  },
});

const tools = await client.getTools();

// Create an OpenAI model
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Create the React agent
const agent = createAgent({
  llm: model,
  tools,
});

// Run the agent
try {
  const mathResponse = await agent.invoke({
    messages: [{ role: "user", content: "what's (3 + 5) x 12?" }],
  });
  console.log(mathResponse);
} catch (error) {
  console.error("Error during agent execution:", error);
  // Tools throw ToolException for tool-specific errors
  if (error.name === "ToolException") {
    console.error("Tool execution failed:", error.message);
  }
}

await client.close();
```

# Example: Manage the MCP Client yourself

This example shows how you can manage your own MCP client and use it to get LangChain tools. These tools can be used anywhere LangChain tools are used, including with LangGraph prebuilt agents, as shown below.

The example below requires some prerequisites:

```bash
npm install @langchain/mcp-adapters @langchain/langgraph @langchain/core @langchain/openai

export OPENAI_API_KEY=<your_api_key>
```

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

import { createAgent } from "langchain";
import { ChatOpenAI } from "@langchain/openai";
import { loadMcpTools } from "@langchain/mcp-adapters";

// Initialize the ChatOpenAI model
const model = new ChatOpenAI({ model: "gpt-4" });

// Automatically starts and connects to an MCP reference server
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-math"],
});

// Initialize the client
const client = new Client({
  name: "math-client",
  version: "1.0.0",
});

try {
  // Connect to the transport
  await client.connect(transport);

  // Get tools with custom configuration
  const tools = await loadMcpTools("math", client, {
    // Whether to throw errors if a tool fails to load (optional, default: true)
    throwOnLoadError: true,
    // Whether to prefix tool names with the server name (optional, default: false)
    prefixToolNameWithServerName: false,
    // Optional additional prefix for tool names (optional, default: "")
    additionalToolNamePrefix: "",
    // Use standardized content block format in tool outputs (default: false)
    useStandardContentBlocks: false,
  });

  // Create and run the agent
  const agent = createAgent({ llm: model, tools });
  const agentResponse = await agent.invoke({
    messages: [{ role: "user", content: "what's (3 + 5) x 12?" }],
  });
  console.log(agentResponse);
} catch (e) {
  console.error(e);
} finally {
  // Clean up connection
  await client.close();
}
```

For more detailed examples, see the [examples](https://github.com/langchain-ai/langchainjs/tree/2418c6f18771460d5a4da4e6c1e44e4adb5e1705/libs/langchain-mcp-adapters/examples) directory.

## Notifications and Progress

You can subscribe to server notifications and tool progress events directly on the `MultiServerMCPClient` via top‑level callbacks.

```ts
import { MultiServerMCPClient } from "@langchain/mcp-adapters";

const client = new MultiServerMCPClient({
  mcpServers: {
    everything: {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-everything"],
    },
  },

  // Receive log/notification messages from the server
  onMessage: (log, source) => {
    console.log(`[${source.server}] ${log.data}`);
  },

  // Receive progress updates (e.g. from long‑running tool calls)
  onProgress: (progress, source) => {
    const pct =
      progress.percentage ??
      (progress.progress != null && progress.total
        ? Math.round((progress.progress / progress.total) * 100)
        : undefined);
    if (pct != null) {
      const origin =
        source.type === "tool" ? `${source.server}/${source.name}` : "unknown";
      console.log(`[progress:${origin}] ${pct}%`);
    }
  },

  // Optional: react to server-side list changes
  onToolsListChanged: (evt, source) => {
    console.log(`[${source.server}] tools changed (${evt.tools?.length ?? 0})`);
  },
});

const tools = await client.getTools();
// ... invoke tools as usual ...
await client.close();
```

Available notification callbacks you can register:

- **onMessage**: server log/diagnostic messages
- **onProgress**: progress events (includes `percentage` or `progress`/`total`) with `source` describing origin (e.g., tool name/server)
- **onInitialized**, **onCancelled**
- **onPromptsListChanged**, **onResourcesListChanged**, **onResourcesUpdated**, **onRootsListChanged**, **onToolsListChanged**

## Tool Hooks (modify args/results)

Use hooks to customize tool calls:

```ts
import { MultiServerMCPClient } from "@langchain/mcp-adapters";

const client = new MultiServerMCPClient({
  mcpServers: {
    math: {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-math"],
    },
  },

  // Change args/headers before the tool call
  beforeToolCall: ({ serverName, name, args }) => {
    // Add/override an argument
    const nextArgs = { ...(args as Record<string, unknown>), injected: true };
    // For HTTP/SSE transports, you may also add per-call headers
    return {
      args: nextArgs,
      headers: { "X-Request-ID": crypto.randomUUID() },
    };
  },

  // Change the tool result after execution
  afterToolCall: (res) => {
    // Option A: return a 2‑tuple [content, artifact]
    if (res.name === "someTool") return { result: ["modified-output", []] };

    // Option B: return a LangChain ToolMessage
    // return { result: new ToolMessage({ content: "overridden", tool_call_id: "id" }) };

    // Option C: return a LangGraph Command instance
    // return { result: new Command(...) }

    // Or pass-through (no change)
    return { result: res.result };
  },
});

const tools = await client.getTools();
const t = tools.find((tool) => tool.name.includes("add"));
const out = await t?.invoke({ a: 1, b: 2 });
```

Notes:

- **beforeToolCall** can return `{ args?, headers? }`. Headers are supported for HTTP/SSE. Stdio connections do not support custom headers.
- **afterToolCall** may return either a 2‑tuple `[content, artifact]`, a `ToolMessage`, a `Command` instance, or nothing (to keep the original result).

## Tool Configuration Options

> [!TIP]
> The `useStandardContentBlocks` option defaults to `false` for backward compatibility; however, we recommend setting it to `true` for new applications, as this will likely become the default in a future release.

When loading MCP tools either directly through `loadMcpTools` or via `MultiServerMCPClient`, you can configure the following options:

| Option                         | Type                                   | Default                                               | Description                                                                                          |
| ------------------------------ | -------------------------------------- | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| `throwOnLoadError`             | `boolean`                              | `true`                                                | Whether to throw an error if a tool fails to load                                                    |
| `prefixToolNameWithServerName` | `boolean`                              | `false`                                               | If true, prefixes all tool names with the server name (e.g., `serverName__toolName`)                 |
| `additionalToolNamePrefix`     | `string`                               | `""`                                                  | Additional prefix to add to tool names (e.g., `prefix__serverName__toolName`)                        |
| `useStandardContentBlocks`     | `boolean`                              | `false`                                               | See [Tool Output Mapping](#tool-output-mapping); set true for new applications                       |
| `outputHandling`               | `"content"`, `"artifact"`, or `object` | `resource` -> `"artifact"`, all others -> `"content"` | See [Tool Output Mapping](#tool-output-mapping)                                                      |
| `defaultToolTimeout`           | `number`                               | `0`                                                   | Default timeout for all tools (overridable on a per-tool basis)                                      |
| `onConnectionError`            | `"throw"` \| `"ignore"` \| `Function`  | `"throw"`                                             | Behavior when a server fails to connect. See [Connection Error Handling](#connection-error-handling) |

## Tool Output Mapping

> [!TIP]
> This section is important if you are working with multimodal tools, tools that produce embedded resources, or tools that produce large outputs that you may not want to be included in LLM input context. If you are writing a new application that only works with tools that produce simple text or JSON output, we recommend setting `useStandardContentBlocks` to `true` and leaving `outputHandling` undefined (will use defaults).

MCP tools return arrays of content blocks. A content block can contain text, an image, audio, or an embedded resource. The right way to map these outputs into LangChain `ToolMessage` objects can differ based on the needs of your application, which is why we introduced the `useStandardContentBlocks` and `outputHandling` configuration options.

The `useStandardContentBlocks` field determines how individual MCP content blocks are transformed into a structure recognized by LangChain ChatModel providers (e.g. `ChatOpenAI`, `ChatAnthropic`, etc). The `outputHandling` field allows you to specify whether a given type of content should be sent to the LLM, or set aside for some other part of your application to use in some future processing step (e.g. to use a dataframe from a database query in a code execution environment).

### Standardizing the Format of Tool Outputs

In `@langchain/core` version 0.3.48 we created a new set of content block types that offer a standardized structure for multimodal inputs. As you might guess from the name, the `useStandardContentBlocks` setting determines whether `@langchain/mcp-adapters` converts tool outputs to this format. For backward compatibility with older versions of `@langchain/mcp-adapters`, it also determines whether tool message artifacts are converted. See the conversion rules below for more info.

> [!IMPORTANT]
> `ToolMessage.content` and `ToolMessage.artifact` will always be arrays of content block objects as described by the rules below, except in one special case. When the `outputHandling` option routes `text` output to the `ToolMessage.content` field and the only content block produced by a tool call is a `text` block, `ToolMessage.content` will be a `string` containing the text content produced by the tool.

**When `useStandardContentBlocks` is `true` (recommended for new applications):**

- **Text**: Returned as [`StandardTextBlock`](https://v03.api.js.langchain.com/types/_langchain_core.messages.StandardTextBlock.html) objects.
- **Images**: Returned as base64 [`StandardImageBlock`](https://v03.api.js.langchain.com/types/_langchain_core.messages.StandardImageBlock.html) objects.
- **Audio**: Returned as base64 [`StandardAudioBlock`](https://v03.api.js.langchain.com/types/_langchain_core.messages.StandardAudioBlock.html) objects.
- **Embedded Resources**: Returned as [`StandardFileBlock`](https://v03.api.js.langchain.com/types/_langchain_core.messages.StandardFileBlock.html), with a `source_type` of `text` or `base64` depending on whether the resource was binary or text. URI resources are fetched eagerly from the server and the results of the fetch are returned following these same rules. We treat all embedded resource URIs as resolvable by the server, and we do not attempt to fetch external URIs.

**When `useStandardContentBlocks` is `false` (default for backward compatibility):**

- Tool outputs routed to `ToolMessage.artifact` (controlled by the `outputHandling` option):
  - **Embedded Resources**: Embedded resources containing only a URI are fetched eagerly from the server and the results of the fetch operation are stored in the artifact array without transformation. Otherwise embedded resources are stored in the `artifact` array in their original MCP content block structure without modification.
  - **All other content types**: Stored in the `artifact` array in their original MCP content block structure without modification.
- Tool outputs routed to the `ToolMessage.content` array (controlled by the `outputHandling` option):
  - **Text**: Returned as [`MessageContentText`](https://v03.api.js.langchain.com/types/_langchain_core.messages.MessageContentText.html) objects, unless it is the only content block in the output, in which case it's assigned directly to `ToolMessage.content` as a `string`.
  - **Images**: Returned as [`MessageContentImageUrl`](https://v03.api.js.langchain.com/types/_langchain_core.messages.MessageContentImageUrl.html) objects with base64 data URLs (`data:image/png;base64,<data>`)
  - **Audio**: Returned as [`StandardAudioBlock`](https://v03.api.js.langchain.com/types/_langchain_core.messages.StandardAudioBlock.html) objects.
  - **Embedded Resources**: Returned as [`StandardFileBlock`](https://v03.api.js.langchain.com/types/_langchain_core.messages.StandardFileBlock.html), with a `source_type` of `text` or `base64` depending on whether the resource was binary or text. URI resources are fetched eagerly from the server and the results of the fetch are returned following these same rules. We treat all embedded resource URIs as resolvable by the server, and we do not attempt to fetch external URIs.
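The single-text-block special case can be sketched as follows. This is a hypothetical helper mirroring the rule stated above, not the package's actual implementation:

```typescript
type McpTextBlock = { type: "text"; text: string };
type McpBlock =
  | McpTextBlock
  | { type: "image" | "audio" | "resource"; [key: string]: unknown };

// Hypothetical sketch: when the only block routed to content is a
// single text block, ToolMessage.content becomes a plain string;
// otherwise it stays an array of content blocks.
function toMessageContent(blocks: McpBlock[]): string | McpBlock[] {
  if (blocks.length === 1 && blocks[0].type === "text") {
    return blocks[0].text;
  }
  return blocks;
}

console.log(toMessageContent([{ type: "text", text: "42" }])); // "42"
console.log(
  Array.isArray(toMessageContent([{ type: "text", text: "a" }, { type: "image" }]))
); // true
```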

### Determining Which Tool Outputs will be Visible to the LLM

The `outputHandling` option allows you to determine which tool output types are assigned to `ToolMessage.content`, and which are assigned to `ToolMessage.artifact`. Data in [`ToolMessage.content`](https://v03.api.js.langchain.com/classes/_langchain_core.messages_tool.ToolMessage.html#content) is used as input context when the LLM is invoked, while [`ToolMessage.artifact`](https://v03.api.js.langchain.com/classes/_langchain_core.messages_tool.ToolMessage.html#artifact) is not.

**By default** `@langchain/mcp-adapters` maps MCP `resource` content blocks to `ToolMessage.artifact`, and maps all other MCP content block types to `ToolMessage.content`. The value of [`useStandardContentBlocks`](#standardizing-the-format-of-tool-outputs) determines how the structure of each content block is transformed during this process.

> [!TIP]
> Examples where `ToolMessage.artifact` can be useful include cases when you need to send multimodal tool outputs via `HumanMessage` or `SystemMessage` because the LLM provider API doesn't accept multimodal tool outputs, or cases where one tool might produce a large output to be indirectly manipulated by some other tool (e.g. a query tool that loads dataframes into a Python code execution environment).

The `outputHandling` option can be assigned to `"content"`, `"artifact"`, or an object that maps MCP content block types to either `content` or `artifact`.

When working with `MultiServerMCPClient`, the `outputHandling` field can be assigned to the top-level config object and/or to individual server entries in `mcpServers`. Entries in `mcpServers` override those in the top-level config, and entries in the top-level config override the defaults.

For example, consider the following configuration:

```typescript
const clientConfig = {
  useStandardContentBlocks: true,
  outputHandling: {
    image: "artifact",
    audio: "artifact",
  },
  mcpServers: {
    camera: {
      url: "...",
      outputHandling: {
        image: "content",
      },
    },
    microphone: {
      url: "...",
      outputHandling: {
        audio: "content",
      },
    },
  },
}
```

When calling tools from the `camera` MCP server, the following `outputHandling` config will be used:

```typescript
{
  text: "content", // default
  image: "content", // default and top-level config overridden by "camera" server config
  audio: "artifact", // default overridden by top-level config
  resource: "artifact", // default
}
```

Similarly, when calling tools on the `microphone` MCP server, the following `outputHandling` config will be used:

```typescript
{
  text: "content", // default
  image: "artifact", // default overridden by top-level config
  audio: "content", // default and top-level config overridden by "microphone" server config
  resource: "artifact", // default
}
```
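The layering rules above amount to a simple object merge, sketched here with a hypothetical helper (the real client performs this resolution internally):

```typescript
type Destination = "content" | "artifact";
type OutputHandling = Partial<
  Record<"text" | "image" | "audio" | "resource", Destination>
>;

// Documented defaults: resources go to artifact, everything else to content.
const DEFAULTS: Required<OutputHandling> = {
  text: "content",
  image: "content",
  audio: "content",
  resource: "artifact",
};

// Later sources win: defaults < top-level config < per-server config.
function resolveOutputHandling(
  topLevel: OutputHandling = {},
  perServer: OutputHandling = {}
): Required<OutputHandling> {
  return { ...DEFAULTS, ...topLevel, ...perServer };
}

// Reproducing the "camera" example above:
console.log(
  resolveOutputHandling({ image: "artifact", audio: "artifact" }, { image: "content" })
);
// { text: "content", image: "content", audio: "artifact", resource: "artifact" }
```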

## Tool Timeout Configuration

### Using `defaultToolTimeout`

You can configure tool timeouts with the `defaultToolTimeout` field: set it in an individual server config to apply to all tools from that server, or in the top-level client config to apply globally.

This value is used as the default for every tool call unless overridden by a tool-specific timeout.

```typescript
const client = new MultiServerMCPClient({
  mcpServers: {
    "data-processor": {
      command: "python",
      args: ["data_server.py"],
      defaultToolTimeout: 30000, // timeout will be 30 seconds
    },
    "image-processor": {
      transport: "stdio",
      command: "node",
      args: ["image_server.js"],
      // timeout will be 10 seconds (set in the top-level config)
    },
  },
  defaultToolTimeout: 10000, // 10 seconds
});

const tools = await client.getTools();
const slowTool = tools.find((t) => t.name.includes("process_large_dataset"));

// Times out after 30 seconds (the "data-processor" server-level defaultToolTimeout)
const result = await slowTool.invoke({ dataset: "huge_file.csv" });
```

### Using `withConfig`

MCP tools support timeout configuration through LangChain's standard `RunnableConfig` interface. This allows you to set custom timeouts on a per-tool-call basis:

```typescript
const client = new MultiServerMCPClient({
  mcpServers: {
    "data-processor": {
      command: "python",
      args: ["data_server.py"],
    },
  },
  useStandardContentBlocks: true,
});

const tools = await client.getTools();
const slowTool = tools.find((t) => t.name.includes("process_large_dataset"));

// You can use withConfig to set tool-specific timeouts before handing
// the tool off to a LangGraph ToolNode or some other part of your
// application
const slowToolWithTimeout = slowTool.withConfig({ timeout: 300000 }); // 5 min timeout

// This invocation will respect the 5 minute timeout
const result = await slowToolWithTimeout.invoke({ dataset: "huge_file.csv" });

// or you can invoke directly without withConfig
const directResult = await slowTool.invoke(
  { dataset: "huge_file.csv" },
  { timeout: 300000 }
);

// Quick timeout for fast operations (`fastTool` stands in for any other
// tool obtained from client.getTools())
const quickResult = await fastTool.invoke(
  { query: "simple_lookup" },
  { timeout: 5000 } // 5 seconds
);

// With no timeout config, the MCP SDK default (60 seconds) applies
const normalResult = await tool.invoke({ input: "normal_processing" });
```

Timeouts can be configured using the following `RunnableConfig` fields:

| Parameter | Type        | Default   | Description                                                   |
| --------- | ----------- | --------- | ------------------------------------------------------------- |
| `timeout` | number      | 60000     | Timeout in milliseconds for the tool call                     |
| `signal`  | AbortSignal | undefined | An AbortSignal that, when aborted, will cancel the tool call  |
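To cancel an in-flight call, pass an `AbortSignal` in the config. The sketch below shows the cancellation flow using a stand-in async `task` (hypothetical, so the example is self-contained); with real MCP tools you would pass the same signal as `tool.invoke(input, { signal })`:

```typescript
// `task` simulates a slow tool call that resolves after 10 seconds
// unless its AbortSignal fires first.
const task = (signal: AbortSignal): Promise<string> =>
  new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve("done"), 10_000);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("aborted"));
    });
  });

const controller = new AbortController();
setTimeout(() => controller.abort(), 100); // cancel after 100 ms

const outcome = await task(controller.signal).catch((e: Error) => e.message);
console.log(outcome); // "aborted"
```

The same `controller.abort()` call can cancel several tool invocations at once if they all share the controller's signal.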

## OAuth 2.0 Authentication

For secure MCP servers that require OAuth 2.0 authentication, you can use the `authProvider` option instead of manually managing headers. This provides automatic token refresh, error handling, and standards-compliant OAuth flows.

New in v0.4.6.

### Basic OAuth Setup

```ts
import type { OAuthClientProvider } from "@modelcontextprotocol/sdk/client/auth.js";
import type {
  OAuthClientMetadata,
  OAuthTokens,
} from "@modelcontextprotocol/sdk/shared/auth.js";

class MyOAuthProvider implements OAuthClientProvider {
  constructor(
    private config: {
      redirectUrl: string;
      clientMetadata: OAuthClientMetadata;
    }
  ) {}

  get redirectUrl() {
    return this.config.redirectUrl;
  }
  get clientMetadata() {
    return this.config.clientMetadata;
  }

  // Implement token storage (localStorage, database, etc.)
  tokens(): OAuthTokens | undefined {
    const stored = localStorage.getItem("mcp_tokens");
    return stored ? JSON.parse(stored) : undefined;
  }

  async saveTokens(tokens: OAuthTokens): Promise<void> {
    localStorage.setItem("mcp_tokens", JSON.stringify(tokens));
  }

  // Implement other required methods...
  // See MCP SDK documentation for complete examples
}

const client = new MultiServerMCPClient({
  mcpServers: {
    "secure-server": {
      url: "https://secure-mcp-server.example.com/mcp",
      authProvider: new MyOAuthProvider({
        redirectUrl: "https://myapp.com/oauth/callback",
        clientMetadata: {
          redirect_uris: ["https://myapp.com/oauth/callback"],
          client_name: "My MCP Client",
          scope: "mcp:read mcp:write",
        },
      }),
    },
  },
  useStandardContentBlocks: true,
});
```

### OAuth Features

The `authProvider` automatically handles:

- ✅ **Token Refresh**: Automatically refreshes expired access tokens using refresh tokens
- ✅ **401 Error Recovery**: Automatically retries requests after successful authentication
- ✅ **PKCE Security**: Uses Proof Key for Code Exchange for enhanced security
- ✅ **Standards Compliance**: Follows OAuth 2.0 and RFC 6750 specifications
- ✅ **Transport Compatibility**: Works with both StreamableHTTP and SSE transports

### OAuth vs Manual Headers

| Aspect            | OAuth Provider          | Manual Headers                    |
| ----------------- | ----------------------- | --------------------------------- |
| **Token Refresh** | ✅ Automatic            | ❌ Manual implementation required |
| **401 Handling**  | ✅ Automatic retry      | ❌ Manual error handling required |
| **Security**      | ✅ PKCE, secure flows   | ⚠️ Depends on implementation      |
| **Standards**     | ✅ RFC 6750 compliant   | ⚠️ Requires manual compliance     |
| **Complexity**    | ✅ Simple configuration | ❌ Complex implementation         |

**Recommendation**: Use `authProvider` for production OAuth servers, and `headers` only for simple token-based auth or debugging.
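For the simple token case, the manual-headers configuration is just a static object. The fragment below is a sketch (the server name, URL, and token are placeholders) that you would pass to `new MultiServerMCPClient(...)`:

```typescript
// Static bearer token via manual headers -- fine for debugging or simple
// token-based auth; note no automatic refresh or 401 retry is performed.
const config = {
  mcpServers: {
    "simple-server": {
      url: "https://example.com/mcp", // placeholder URL
      headers: { Authorization: "Bearer <token>" }, // placeholder token
    },
  },
  useStandardContentBlocks: true,
};
```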

## Reconnection Strategies

Both transport types support automatic reconnection:

### Stdio Transport Restart

```ts
{
  transport: "stdio",
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-math"],
  restart: {
    enabled: true,      // Enable automatic restart
    maxAttempts: 3,     // Maximum restart attempts
    delayMs: 1000       // Delay between attempts in ms
  }
}
```

### SSE Transport Reconnect

```ts
{
  transport: "sse",
  url: "https://example.com/mcp-server",
  headers: { "Authorization": "Bearer token123" },
  reconnect: {
    enabled: true,      // Enable automatic reconnection
    maxAttempts: 5,     // Maximum reconnection attempts
    delayMs: 2000       // Delay between attempts in ms
  }
}
```

## Error Handling

The library provides different error types to help with debugging:

- **MCPClientError**: For client connection and initialization issues
- **ToolException**: For errors during tool execution
- **ZodError**: For configuration validation errors (invalid connection settings, etc.)

Example error handling:

```ts
try {
  const client = new MultiServerMCPClient({
    mcpServers: {
      math: {
        transport: "stdio",
        command: "npx",
        args: ["-y", "@modelcontextprotocol/server-math"],
      },
    },
    useStandardContentBlocks: true,
  });

  const tools = await client.getTools();
  const result = await tools[0].invoke({ expression: "1 + 2" });
} catch (error) {
  if (error.name === "MCPClientError") {
    // Handle connection issues
    console.error(`Connection error (${error.serverName}):`, error.message);
  } else if (error.name === "ToolException") {
    // Handle tool execution errors
    console.error("Tool execution failed:", error.message);
  } else if (error.name === "ZodError") {
    // Handle configuration validation errors
    console.error("Configuration error:", error.issues);
    // Zod errors contain detailed information about what went wrong
    error.issues.forEach((issue) => {
      console.error(`- Path: ${issue.path.join(".")}, Error: ${issue.message}`);
    });
  } else {
    // Handle other errors
    console.error("Unexpected error:", error);
  }
}
```

### Common Zod Validation Errors

The library uses Zod for validating configuration. Here are some common validation errors:

- **Missing required parameters**: For example, omitting `command` for stdio transport or `url` for SSE transport
- **Invalid parameter types**: For example, providing a number where a string is expected
- **Invalid connection configuration**: For example, using an invalid URL format for SSE transport

Example Zod error for an invalid SSE URL:

```json
{
  "issues": [
    {
      "code": "invalid_string",
      "validation": "url",
      "path": ["mcpServers", "weather", "url"],
      "message": "Invalid url"
    }
  ],
  "name": "ZodError"
}
```

### Connection Error Handling

By default, the `MultiServerMCPClient` will throw an error if any server fails to connect (`onConnectionError: "throw"`). You can change this behavior by setting `onConnectionError: "ignore"` to skip failed servers, or provide a custom error handler function:

- `"throw"` (default): Throw an error immediately if any server fails to connect
- `"ignore"`: Skip failed servers and continue with successfully connected ones
- `Function`: Custom error handler that receives the server name and error. If the handler throws, the error is bubbled through. If it returns normally, the server is treated as ignored.

When set to `"ignore"` or a custom handler that doesn't throw:

- Servers that fail to connect are skipped and logged as warnings
- The client continues to work with only the servers that successfully connected
- Failed servers are removed from the connection list and won't be retried
- If no servers successfully connect, a warning is logged but no error is thrown

```ts
const client = new MultiServerMCPClient({
  mcpServers: {
    "working-server": {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-math"],
    },
    "broken-server": {
      transport: "http",
      url: "http://localhost:9999/mcp", // This server doesn't exist
    },
  },
  onConnectionError: "ignore", // Skip failed connections
  useStandardContentBlocks: true,
});

// This won't throw even though "broken-server" fails to connect
const tools = await client.getTools(); // Only tools from "working-server"

// You can check which servers are actually connected
const workingClient = await client.getClient("working-server"); // Returns client
const brokenClient = await client.getClient("broken-server"); // Returns undefined
```

You can also provide a custom error handler function for more control:

```ts
const client = new MultiServerMCPClient({
  mcpServers: {
    "critical-server": {
      transport: "http",
      url: "http://localhost:8000/mcp",
    },
    "optional-server": {
      transport: "http",
      url: "http://localhost:8001/mcp",
    },
  },
  onConnectionError: ({ serverName, error }) => {
    // Throw for critical servers, ignore for optional ones
    if (serverName === "critical-server") {
      throw new Error(`Critical server ${serverName} failed: ${error}`);
    }
    // For optional servers, just log and continue
    console.warn(`Optional server ${serverName} failed, continuing...`);
  },
  useStandardContentBlocks: true,
});
```

In this example:

- If `critical-server` fails, the error handler throws and the error is bubbled through
- If `optional-server` fails, the error handler logs a warning and returns normally, so the server is ignored

### Debug Logging

This package makes use of the [debug](https://www.npmjs.com/package/debug) package for debug logging.

Logging is disabled by default, and can be enabled by setting the `DEBUG` environment variable as per
the instructions in the debug package.

To output all debug logs from this package:

```bash
DEBUG='@langchain/mcp-adapters:*'
```

To output debug logs only from the `client` module:

```bash
DEBUG='@langchain/mcp-adapters:client'
```

To output debug logs only from the `tools` module:

```bash
DEBUG='@langchain/mcp-adapters:tools'
```

## License

MIT

## Acknowledgements

Big thanks to [@vrknetha](https://github.com/vrknetha) and [@knacklabs](https://www.knacklabs.ai) for the initial implementation!

## Contributing

Contributions are welcome! Please check out our [contributing guidelines](https://github.com/langchain-ai/langchainjs/blob/2418c6f18771460d5a4da4e6c1e44e4adb5e1705/libs/langchain-mcp-adapters/CONTRIBUTING.md) for more information.
- [LangGraph](/javascript/langchain-langgraph)
  # 🦜🕸️LangGraph.js

[![Docs](https://img.shields.io/badge/docs-latest-blue)](https://langchain-ai.github.io/langgraphjs/)
![Version](https://img.shields.io/npm/v/@langchain/langgraph?logo=npm)  
[![Downloads](https://img.shields.io/npm/dm/@langchain/langgraph)](https://www.npmjs.com/package/@langchain/langgraph)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langgraphjs)](https://github.com/langchain-ai/langgraphjs/issues)

> [!NOTE]
> Looking for the Python version? See the [Python repo](https://github.com/langchain-ai/langgraph) and the [Python docs](https://docs.langchain.com/oss/python/langgraph/overview).

LangGraph — used by Replit, Uber, LinkedIn, GitLab and more — is a low-level orchestration framework for building controllable agents. While LangChain provides integrations and composable components to streamline LLM application development, the LangGraph library enables agent orchestration — offering customizable architectures, long-term memory, and human-in-the-loop support to reliably handle complex tasks.

```bash
npm install @langchain/langgraph @langchain/core
```

To learn more about how to use LangGraph, check out [the docs](https://langchain-ai.github.io/langgraphjs/). We show a simple example below of how to create a ReAct agent.

```ts
// npm install @langchain/anthropic
import { createReactAgent, tool } from "langchain";
import { ChatAnthropic } from "@langchain/anthropic";

import { z } from "zod";

const search = tool(
  async ({ query }) => {
    if (
      query.toLowerCase().includes("sf") ||
      query.toLowerCase().includes("san francisco")
    ) {
      return "It's 60 degrees and foggy.";
    }
    return "It's 90 degrees and sunny.";
  },
  {
    name: "search",
    description: "Call to surf the web.",
    schema: z.object({
      query: z.string().describe("The query to use in your search."),
    }),
  }
);

const model = new ChatAnthropic({
  model: "claude-3-7-sonnet-latest",
});

const agent = createReactAgent({
  llm: model,
  tools: [search],
});

const result = await agent.invoke({
  messages: [
    {
      role: "user",
      content: "what is the weather in sf",
    },
  ],
});
```

## Full-stack Quickstart

Get started quickly by building a full-stack LangGraph application using the [`create-agent-chat-app`](https://www.npmjs.com/package/create-agent-chat-app) CLI:

```bash
npx create-agent-chat-app@latest
```

The CLI sets up a chat interface and helps you configure your application, including:

- 🧠 Choice of 4 prebuilt agents (ReAct, Memory, Research, Retrieval)
- 🌐 Frontend framework (Next.js or Vite)
- 📦 Package manager (`npm`, `yarn`, or `pnpm`)

## Why use LangGraph?

LangGraph is built for developers who want to build powerful, adaptable AI agents. Developers choose LangGraph for:

- **Reliability and controllability.** Steer agent actions with moderation checks and human-in-the-loop approvals. LangGraph persists context for long-running workflows, keeping your agents on course.
- **Low-level and extensible.** Build custom agents with fully descriptive, low-level primitives – free from rigid abstractions that limit customization. Design scalable multi-agent systems, with each agent serving a specific role tailored to your use case.
- **First-class streaming support.** With token-by-token streaming and streaming of intermediate steps, LangGraph gives users clear visibility into agent reasoning and actions as they unfold in real time.

LangGraph is trusted in production and powering agents for companies like:

- [Klarna](https://blog.langchain.dev/customers-klarna/): Customer support bot for 85 million active users
- [Elastic](https://www.elastic.co/blog/elastic-security-generative-ai-features): Security AI assistant for threat detection
- [Uber](https://dpe.org/sessions/ty-smith-adam-huda/this-year-in-ubers-ai-driven-developer-productivity-revolution/): Automated unit test generation
- [Replit](https://www.langchain.com/breakoutagents/replit): Code generation
- And many more ([see list here](https://www.langchain.com/built-with-langgraph))

## LangGraph’s ecosystem

While LangGraph can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools for building agents. To improve your LLM application development, pair LangGraph with:

- [LangSmith](http://www.langchain.com/langsmith) — Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangGraph Platform](https://langchain-ai.github.io/langgraphjs/concepts/#langgraph-platform) — Deploy and scale agents effortlessly with a purpose-built deployment platform for long running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangGraph Studio](https://langchain-ai.github.io/langgraphjs/concepts/langgraph_studio/).

## Pairing with LangGraph Platform

While LangGraph is our open-source agent orchestration framework, enterprises that need scalable agent deployment can benefit from [LangGraph Platform](https://langchain-ai.github.io/langgraphjs/concepts/langgraph_platform/).

LangGraph Platform can help engineering teams:

- **Accelerate agent development**: Quickly create agent UXs with configurable templates and [LangGraph Studio](https://langchain-ai.github.io/langgraphjs/concepts/langgraph_studio/) for visualizing and debugging agent interactions.
- **Deploy seamlessly**: We handle the complexity of deploying your agent. LangGraph Platform includes robust APIs for memory, threads, and cron jobs plus auto-scaling task queues & servers.
- **Centralize agent management & reusability**: Discover, reuse, and manage agents across the organization. Business users can also modify agents without coding.

## Additional resources

- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [LangChain Academy](https://academy.langchain.com/courses/intro-to-langgraph): Learn the basics of LangGraph in our free, structured course.
- [Tutorials](https://langchain-ai.github.io/langgraphjs/tutorials/): Simple walkthroughs with guided examples on getting started with LangGraph.
- [Templates](https://langchain-ai.github.io/langgraphjs/concepts/template_applications/): Pre-built reference apps for common agentic workflows (e.g. ReAct agent, memory, retrieval etc.) that can be cloned and adapted.
- [How-to Guides](https://langchain-ai.github.io/langgraphjs/how-tos/): Quick, actionable code snippets for topics such as streaming, adding memory & persistence, and design patterns (e.g. branching, subgraphs, etc.).
- [API Reference](https://langchain-ai.github.io/langgraphjs/reference/): Detailed reference on core classes, methods, how to use the graph and checkpointing APIs, and higher-level prebuilt components.
- [Built with LangGraph](https://www.langchain.com/built-with-langgraph): Hear how industry leaders use LangGraph to ship powerful, production-ready AI applications.

## Acknowledgements

LangGraph is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/). The public interface draws inspiration from [NetworkX](https://networkx.org/documentation/latest/). LangGraph is built by LangChain Inc, the creators of LangChain, but can be used without LangChain.
- [LangGraph SDK](/javascript/langchain-langgraph-sdk)
  # LangGraph JS/TS SDK

This repository contains the JS/TS SDK for interacting with the LangGraph REST API.

## Quick Start

To get started with the JS/TS SDK, [install the package](https://www.npmjs.com/package/@langchain/langgraph-sdk)

```bash
pnpm add @langchain/langgraph-sdk
```

You will need a running LangGraph API server. If you're running a server locally using `langgraph-cli`, the SDK will automatically point at `http://localhost:8123`; otherwise, you will need to specify the server URL when creating a client.

```js
import { Client } from "@langchain/langgraph-sdk";

const client = new Client();

// List all assistants
const assistants = await client.assistants.search({
  metadata: null,
  offset: 0,
  limit: 10,
});

// We auto-create an assistant for each graph you register in config.
const agent = assistants[0];

// Start a new thread
const thread = await client.threads.create();

// Start a streaming run
const messages = [{ role: "human", content: "what's the weather in la" }];

const streamResponse = client.runs.stream(
  thread["thread_id"],
  agent["assistant_id"],
  {
    input: { messages },
  }
);

for await (const chunk of streamResponse) {
  console.log(chunk);
}
```

## Documentation

To generate documentation, run the following commands:

1. Generate docs.

        pnpm typedoc

1. Consolidate doc files into one markdown file.

        npx concat-md --decrease-title-levels --ignore=js_ts_sdk_ref.md --start-title-level-at 2 docs > docs/js_ts_sdk_ref.md

1. Copy `js_ts_sdk_ref.md` to MkDocs directory.

        cp docs/js_ts_sdk_ref.md ../../docs/docs/cloud/reference/sdk/js_ts_sdk_ref.md

## Reference Documentation

The reference documentation is available [here](https://reference.langchain.com/javascript/modules/_langchain_langgraph-sdk.html).

More usage examples can be found [here](https://docs.langchain.com/langsmith/sdk#js).


## Change Log

The change log for new versions can be found [here](https://github.com/langchain-ai/langgraphjs/blob/main/libs/sdk/CHANGELOG.md).
- [LangGraph Checkpoint](/javascript/langchain-langgraph-checkpoint)
  # @langchain/langgraph-checkpoint

This library defines the base interface for [LangGraph.js](https://github.com/langchain-ai/langgraphjs) checkpointers. Checkpointers provide a persistence layer for LangGraph. They allow you to interact with and manage the graph's state. When you use a graph with a checkpointer, the checkpointer saves a _checkpoint_ of the graph state at every superstep, enabling several powerful capabilities like human-in-the-loop, "memory" between interactions, and more.

## Key concepts

### Checkpoint

A checkpoint is a snapshot of the graph state at a given point in time. A checkpoint tuple is an object containing a checkpoint along with its associated config, metadata, and pending writes.

### Thread

Threads enable the checkpointing of multiple different runs, making them essential for multi-tenant chat applications and other scenarios where maintaining separate states is necessary. A thread is a unique ID assigned to a series of checkpoints saved by a checkpointer. When using a checkpointer, you must specify a `thread_id` and optionally `checkpoint_id` when running the graph.

- `thread_id` is the ID of a thread. This is always required.
- `checkpoint_id` can optionally be passed. This identifier refers to a specific checkpoint within a thread, and can be used to kick off a run of a graph from some point partway through a thread.

You must pass these when invoking the graph as part of the configurable part of the config, e.g.

```ts
{ configurable: { thread_id: "1" } }  // valid config
{ configurable: { thread_id: "1", checkpoint_id: "0c62ca34-ac19-445d-bbb0-5b4984975b2a" } }  // also valid config
```

### Serde

`@langchain/langgraph-checkpoint` also defines a protocol for serialization/deserialization (serde) and provides a default implementation that handles a wide range of types.

### Pending writes

When a graph node fails mid-execution at a given superstep, LangGraph stores pending checkpoint writes from any other nodes that completed successfully at that superstep, so that whenever we resume graph execution from that superstep we don't re-run the successful nodes.

## Interface

Each checkpointer should conform to `BaseCheckpointSaver` interface and must implement the following methods:

- `.put` - Store a checkpoint with its configuration and metadata.
- `.putWrites` - Store intermediate writes linked to a checkpoint (i.e. pending writes).
- `.getTuple` - Fetch a checkpoint tuple for a given configuration (`thread_id` and `checkpoint_id`).
- `.list` - List checkpoints that match a given configuration and filter criteria.

## Usage

```ts
import { MemorySaver } from "@langchain/langgraph-checkpoint";

const writeConfig = {
  configurable: {
    thread_id: "1",
    checkpoint_ns: ""
  }
};
const readConfig = {
  configurable: {
    thread_id: "1"
  }
};

const checkpointer = new MemorySaver();
const checkpoint = {
  v: 1,
  ts: "2024-07-31T20:17:33.804150+00:00",
  id: "1ef4f797-8335-6428-8001-8a1503f9b875",
  channel_values: {
    my_key: "meow",
    node: "node"
  },
  channel_versions: {
    __start__: 2,
    my_key: 3,
    "start:node": 3,
    node: 3
  },
  versions_seen: {
    __input__: {},
    __start__: {
      __start__: 1
    },
    node: {
      "start:node": 2
    }
  },
  pending_sends: [],
}

// store checkpoint
await checkpointer.put(writeConfig, checkpoint, {}, {})

// load checkpoint
await checkpointer.get(readConfig)

// list checkpoints
for await (const checkpoint of checkpointer.list(readConfig)) {
  console.log(checkpoint);
}
```
- [LangGraph Checkpoint MongoDB](/javascript/langchain-langgraph-checkpoint-mongodb)
  # @langchain/langgraph-checkpoint-mongodb

Implementation of a [LangGraph.js](https://github.com/langchain-ai/langgraphjs) CheckpointSaver that uses a MongoDB instance.

## Usage

```ts
import { MongoClient } from "mongodb";
import { MongoDBSaver } from "@langchain/langgraph-checkpoint-mongodb";

const writeConfig = {
  configurable: {
    thread_id: "1",
    checkpoint_ns: ""
  }
};
const readConfig = {
  configurable: {
    thread_id: "1"
  }
};


const client = new MongoClient(process.env.MONGODB_URL);

const checkpointer = new MongoDBSaver({ client });
const checkpoint = {
  v: 1,
  ts: "2024-07-31T20:17:33.804150+00:00",
  id: "1ef4f797-8335-6428-8001-8a1503f9b875",
  channel_values: {
    my_key: "meow",
    node: "node"
  },
  channel_versions: {
    __start__: 2,
    my_key: 3,
    "start:node": 3,
    node: 3
  },
  versions_seen: {
    __input__: {},
    __start__: {
      __start__: 1
    },
    node: {
      "start:node": 2
    }
  },
  pending_sends: [],
}

// store checkpoint
await checkpointer.put(writeConfig, checkpoint, {}, {});

// load checkpoint
await checkpointer.get(readConfig);

// list checkpoints
for await (const checkpoint of checkpointer.list(readConfig)) {
  console.log(checkpoint);
}

await client.close();
```
- [LangGraph Checkpoint Postgres](/javascript/langchain-langgraph-checkpoint-postgres)
  # @langchain/langgraph-checkpoint-postgres

Implementation of a [LangGraph.js](https://github.com/langchain-ai/langgraphjs) CheckpointSaver that uses a Postgres DB.

## Usage

```ts
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";

const writeConfig = {
  configurable: {
    thread_id: "1",
    checkpoint_ns: ""
  }
};
const readConfig = {
  configurable: {
    thread_id: "1"
  }
};

// you can optionally pass a configuration object as the second parameter
const checkpointer = PostgresSaver.fromConnString("postgresql://...", {
  schema: "schema_name" // defaults to "public"
});

// You must call .setup() the first time you use the checkpointer:
await checkpointer.setup();

const checkpoint = {
  v: 1,
  ts: "2024-07-31T20:17:33.804150+00:00",
  id: "1ef4f797-8335-6428-8001-8a1503f9b875",
  channel_values: {
    my_key: "meow",
    node: "node"
  },
  channel_versions: {
    __start__: 2,
    my_key: 3,
    "start:node": 3,
    node: 3
  },
  versions_seen: {
    __input__: {},
    __start__: {
      __start__: 1
    },
    node: {
      "start:node": 2
    }
  },
  pending_sends: [],
}

// store checkpoint
await checkpointer.put(writeConfig, checkpoint, {}, {});

// load checkpoint
await checkpointer.get(readConfig);

// list checkpoints
for await (const checkpoint of checkpointer.list(readConfig)) {
  console.log(checkpoint);
}
```

## Usage with existing connection pool

```ts
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";
import pg from "pg";

// You can use any existing postgres connection pool
// we create a new pool here for the sake of the example
const pool = new pg.Pool({
  connectionString: "postgresql://..."
});

const checkpointer = new PostgresSaver(pool, undefined, {
  schema: "schema_name"
});

await checkpointer.setup();

// ...
```

## Testing

Spin up testing PostgreSQL

```bash
docker-compose up -d && docker-compose logs -f
```

Then use the following connection string to initialize your checkpointer:

```ts
const testCheckpointer = PostgresSaver.fromConnString(
  "postgresql://user:password@localhost:5434/testdb"
);
```
- [LangGraph Checkpoint Redis](/javascript/langchain-langgraph-checkpoint-redis)
  # @langchain/langgraph-checkpoint-redis

Redis checkpoint and store implementation for LangGraph.

## Overview

This package provides Redis-based implementations for:

1. **Checkpoint Savers**: Store and manage LangGraph checkpoints using Redis
    - **RedisSaver**: Standard checkpoint saver that maintains full checkpoint history
    - **ShallowRedisSaver**: Memory-optimized saver that only keeps the latest checkpoint per thread
2. **RedisStore**: Redis-backed key-value store with optional vector search capabilities

## Installation

```bash
npm install @langchain/langgraph-checkpoint-redis
```

## Dependencies

### Redis Requirements

This library requires Redis with the following modules:

- **RedisJSON** - For storing and manipulating JSON data
- **RediSearch** - For search and indexing capabilities

#### Redis 8.0+

If you're using Redis 8.0 or higher, both RedisJSON and RediSearch modules are included by default.

#### Redis < 8.0

For Redis versions lower than 8.0, you'll need to:

- Use [Redis Stack](https://redis.io/docs/stack/), which bundles Redis with these modules
- Or install the modules separately in your Redis instance
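For a quick local setup with both modules, one option is the Redis Stack server container image (assuming Docker is available; image name as published on Docker Hub):

```shell
# Start Redis Stack (bundles RedisJSON and RediSearch) on the default port
docker run -d --name redis-stack -p 6379:6379 redis/redis-stack-server:latest
```

The resulting instance is reachable at `redis://localhost:6379`, matching the connection strings used in the examples below.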

## Usage

### Standard Checkpoint Saver

```typescript
import { RedisSaver } from "@langchain/langgraph-checkpoint-redis";

const checkpointer = await RedisSaver.fromUrl(
    "redis://localhost:6379",
    {
        defaultTTL: 60, // TTL in minutes
        refreshOnRead: true
    }
);

// Indices are automatically created by fromUrl()

// Use with your graph
const config = {configurable: {thread_id: "1"}};

// Metadata must include required fields
const metadata = {
    source: "update",  // "update" | "input" | "loop" | "fork"
    step: 0,
    parents: {}
};

await checkpointer.put(config, checkpoint, metadata, {});
const loaded = await checkpointer.get(config);
```

### Shallow Checkpoint Saver

The `ShallowRedisSaver` is a memory-optimized variant that only keeps the latest checkpoint per thread:

```typescript
import { ShallowRedisSaver } from "@langchain/langgraph-checkpoint-redis/shallow";

// Create a shallow saver that only keeps the latest checkpoint
const shallowSaver = await ShallowRedisSaver.fromUrl("redis://localhost:6379");

// Use it the same way as RedisSaver
const config = {
    configurable: {
        thread_id: "my-thread",
        checkpoint_ns: "my-namespace"
    }
};

const metadata = {
    source: "update",
    step: 0,
    parents: {}
};

await shallowSaver.put(config, checkpoint, metadata, versions);

// Only the latest checkpoint is kept - older ones are automatically cleaned up
const latest = await shallowSaver.getTuple(config);
```

Key differences from RedisSaver:

- **Storage**: Only keeps the latest checkpoint per thread (no history)
- **Performance**: Reduced storage usage and faster operations
- **Inline storage**: Channel values are stored inline (no separate blob storage)
- **Automatic cleanup**: Old checkpoints and writes are automatically removed

### RedisStore

The `RedisStore` provides a key-value store with optional vector search capabilities:

```typescript
import { RedisStore } from "@langchain/langgraph-checkpoint-redis/store";

// Basic key-value store
const store = await RedisStore.fromConnString("redis://localhost:6379");

// Store with vector search
const vectorStore = await RedisStore.fromConnString("redis://localhost:6379", {
    index: {
        dims: 1536,  // Embedding dimensions
        embed: embeddings,  // Your embeddings instance
        distanceType: "cosine",  // or "l2", "ip"
        fields: ["text"],  // Fields to embed
    },
    ttl: {
        defaultTTL: 60,  // TTL in minutes
        refreshOnRead: true,
    }
});

// Put and get items
await store.put(["namespace", "nested"], "key1", {text: "Hello world"});
const item = await store.get(["namespace", "nested"], "key1");

// Search with namespace filtering
const results = await store.search(["namespace"], {
    filter: {category: "docs"},
    limit: 10,
});

// Vector search
const semanticResults = await vectorStore.search(["namespace"], {
    query: "semantic search query",
    filter: {type: "article"},
    limit: 5,
});

// Batch operations
const ops = [
    {type: "get", namespace: ["ns"], key: "key1"},
    {type: "put", namespace: ["ns"], key: "key2", value: {data: "value"}},
    {type: "search", namespacePrefix: ["ns"], limit: 10},
    {type: "list_namespaces", matchConditions: [{matchType: "prefix", path: ["ns"]}], limit: 10},
];
const batchResults = await store.batch(ops);
```

## TTL Support

Both checkpoint savers and stores support Time-To-Live (TTL) functionality:

```typescript
const ttlConfig = {
    defaultTTL: 60,  // Default TTL in minutes
    refreshOnRead: true,  // Refresh TTL when items are read
};

const checkpointer = await RedisSaver.fromUrl("redis://localhost:6379", ttlConfig);
```

## Development

### Running Tests

```bash
# Run tests (uses TestContainers)
yarn test

# Run tests in watch mode
yarn test:watch

# Run integration tests
yarn test:int
```

## License

MIT
- [LangGraph Checkpoint SQLite](/javascript/langchain-langgraph-checkpoint-sqlite)
  # @langchain/langgraph-checkpoint-sqlite

Implementation of a [LangGraph.js](https://github.com/langchain-ai/langgraphjs) CheckpointSaver that uses a SQLite DB.

## Usage

```ts
import { SqliteSaver } from "@langchain/langgraph-checkpoint-sqlite";

const writeConfig = {
  configurable: {
    thread_id: "1",
    checkpoint_ns: ""
  }
};
const readConfig = {
  configurable: {
    thread_id: "1"
  }
};

const checkpointer = SqliteSaver.fromConnString(":memory:");
const checkpoint = {
  v: 1,
  ts: "2024-07-31T20:14:19.804150+00:00",
  id: "1ef4f797-8335-6428-8001-8a1503f9b875",
  channel_values: {
    my_key: "meow",
    node: "node"
  },
  channel_versions: {
    __start__: 2,
    my_key: 3,
    "start:node": 3,
    node: 3
  },
  versions_seen: {
    __input__: {},
    __start__: {
      __start__: 1
    },
    node: {
      "start:node": 2
    }
  },
  pending_sends: [],
}

// store checkpoint
await checkpointer.put(writeConfig, checkpoint, {}, {})

// load checkpoint
await checkpointer.get(readConfig)

// list checkpoints
for await (const checkpoint of checkpointer.list(readConfig)) {
  console.log(checkpoint);
}
```
- [LangGraph Checkpoint Validation](/javascript/langchain-langgraph-checkpoint-validation)
  # @langchain/langgraph-checkpoint-validation

The checkpointer validation tool is used to validate that custom checkpointer implementations conform to LangGraph's requirements. LangGraph uses [checkpointers](https://langchain-ai.github.io/langgraphjs/concepts/persistence/#checkpointer-libraries) for persisting workflow state, providing the ability to "rewind" your workflow to some earlier point in time, and continue execution from there.

The overall process for using this tool is as follows:

1. Write your custom checkpointer implementation.
2. Add a file to your project that defines a [`CheckpointerTestInitializer`](https://github.com/langchain-ai/langgraphjs/blob/e493f1053f37e6171ef8ccb2e58f2e79e19ddf36/libs/checkpoint-validation/src/types.ts) as its default export.
3. Run the checkpointer validation tool to test your checkpointer and determine whether it meets LangGraph's requirements.
4. Iterate on your custom checkpointer as required, until tests pass.

The tool can be executed from the terminal as a CLI, or you can use it as a library to integrate it into your test suite.

## Writing a CheckpointerTestInitializer

The `CheckpointerTestInitializer` interface ([example](https://github.com/langchain-ai/langgraphjs/blob/e493f1053f37e6171ef8ccb2e58f2e79e19ddf36/libs/checkpoint-validation/src/tests/postgres_initializer.ts)) is used by the test harness to create instances of your custom checkpointer, and any infrastructure that it requires for testing purposes.

If you intend to execute the tool via the CLI, your `CheckpointerTestInitializer` **must** be the default export of the module in which it is defined.

**Synchronous vs Asynchronous initializer functions**: You may return promises from any functions defined in your `CheckpointerTestInitializer` according to your needs and the test harness will behave accordingly.

**IMPORTANT**: You must take care to write your `CheckpointerTestInitializer` such that instances of your custom checkpointer are isolated from one another with respect to persisted state, or else some tests (particularly the ones that exercise the `list` method) will fail. That is, state written by one instance of your checkpointer MUST NOT be readable by another instance of your checkpointer. That said, there will only ever be one instance of your checkpointer live at any given time, so **you may use shared storage, provided it is cleared when your checkpointer is created or destroyed.** The structure of the `CheckpointerTestInitializer` interface should make this relatively easy to achieve, per the sections below.
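
To make the isolation requirement concrete, here is a minimal, hypothetical initializer shape. It uses a toy Map-backed stand-in rather than a real checkpointer (a real implementation would extend `BaseCheckpointSaver`), purely to show how creating fresh storage per instance keeps state isolated between tests:

```ts
// Toy stand-in for a real checkpointer (a real one extends BaseCheckpointSaver).
class MapBackedCheckpointer {
  private storage = new Map<string, unknown>();

  put(key: string, value: unknown): void {
    this.storage.set(key, value);
  }

  get(key: string): unknown {
    return this.storage.get(key);
  }
}

const initializer = {
  checkpointerName: "map-backed-checkpointer",

  // Fresh storage per instance guarantees state isolation between tests.
  async createCheckpointer(): Promise<MapBackedCheckpointer> {
    return new MapBackedCheckpointer();
  },

  // Nothing shared to clear here; with shared storage you would wipe it now.
  async destroyCheckpointer(_checkpointer: MapBackedCheckpointer): Promise<void> {},
};

// Required if you intend to run the CLI against this module.
export default initializer;
```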


### (Required) `checkpointerName`: Define a name for your checkpointer

`CheckpointerTestInitializer` requires you to define a `checkpointerName` field (of type `string`) for use in the test output.

### `beforeAll`: Set up required infrastructure

If your checkpointer requires some external infrastructure to be provisioned, you may wish to provision this via the **optional** `beforeAll` function. This function executes exactly once, at the very start of the testing lifecycle. If defined, it is the first function that will be called from your `CheckpointerTestInitializer`.

**Timeout duration**: If your `beforeAll` function may take longer than 10 seconds to execute, you can assign a custom timeout duration (as milliseconds) to the optional `beforeAllTimeout` field of your `CheckpointerTestInitializer`.

**State isolation note**: Depending on the cost/performance/requirements of your checkpointer infrastructure, it **may** make more sense for you to provision it during the `createCheckpointer` step, so you can provide each checkpointer instance with its own isolated storage backend. However as mentioned above, you may also provision a single shared storage backend, provided you clear any stored data during the `createCheckpointer` or `destroyCheckpointer` step.

### `afterAll`: Tear down required infrastructure

If you set up infrastructure during the `beforeAll` step, you may need to tear it down once the tests complete their execution. You can define this teardown logic in the optional `afterAll` function. Much like `beforeAll` this function will execute exactly one time, after all tests have finished executing.

**IMPORTANT**: If you kill the test runner early this function may not be called. To avoid manual clean-up, give preference to test infrastructure management tools like [TestContainers](https://testcontainers.com/guides/getting-started-with-testcontainers-for-nodejs/), as these tools are designed to detect when this happens and clean up after themselves once the controlling process dies.

### (Required) `createCheckpointer`: Construct your checkpointer

`CheckpointerTestInitializer` requires you to define a `createCheckpointer()` function that returns an instance of your custom checkpointer.

**State isolation note:** If you're provisioning storage during this step, make sure that it is "fresh" storage for each instance of your checkpointer. Otherwise if you are using a shared storage setup, be sure to clear it either in this function, or in the `destroyCheckpointer` function (described in the section below).

### `destroyCheckpointer`: Destroy your checkpointer

If your custom checkpointer requires an explicit teardown step (for example, to clean up database connections), you can define this in the **optional** `destroyCheckpointer(checkpointer: CheckpointerT)` function.

**State isolation note:** If you are using a shared storage setup, be sure to clear it either in this function, or in the `createCheckpointer` function (described in the section above).

## CLI usage

You may use this tool's CLI via `npx` or `yarn dlx`, or by installing it globally and running the `validate-checkpointer` command.

The only required argument to the tool is the import path for your `CheckpointerTestInitializer`. Relative paths must begin with a leading `./` (or `.\`, for Windows), otherwise the path will be interpreted as a module name rather than a relative path.

You may optionally pass one or more test filters as positional arguments after the import path argument (separated by spaces). Valid values are `getTuple`, `list`, `put`, and `putWrites`. If present, only the test suites specified in the filter list will be executed. This is useful for working through smaller sets of test failures as you're validating your checkpointer.

TypeScript imports **are** supported, so you may pass a path directly to your TypeScript source file.

### NPX & Yarn execution

NPX:

```bash
npx @langchain/langgraph-checkpoint-validation ./src/my_initializer.ts
```

Yarn:

```bash
yarn dlx @langchain/langgraph-checkpoint-validation ./src/my_initializer.ts
```

### Global install

NPM:

```bash
npm install -g @langchain/langgraph-checkpoint-validation
validate-checkpointer ./src/my_initializer.ts
```

## Usage in existing Jest-like test suite

This package exports a test definition function that may be used in any Jest-compatible test framework (including Vitest). If you wish to integrate this tooling into your existing test suite, you can simply import and invoke it from within a test file, as shown below.

```ts
import { validate } from "@langchain/langgraph-validation";

validate(MyCheckpointerInitializer);
```
- [LangGraph API](/javascript/langchain-langgraph-api)
  # LangGraph.js API

In-memory implementation of the LangGraph.js API.

## Tests

1. Build the latest code changes to test: `pnpm build`
1. Start a local server: `pnpm dev`
1. Run the tests: `pnpm test`
- [LangGraph CLI](/javascript/langchain-langgraph-cli)
  # LangGraph.js CLI

The official command-line interface for LangGraph.js, providing tools to create, develop, and deploy LangGraph.js applications.

## Installation

`@langchain/langgraph-cli` is a CLI binary that can be run via `npx` or installed via your package manager of choice:

```bash
npx @langchain/langgraph-cli
```

## Commands

### `langgraphjs dev`

Run LangGraph.js API server in development mode with hot reloading.

```bash
npx @langchain/langgraph-cli dev
```

### `langgraphjs build`

Build a Docker image for your LangGraph.js application.

```bash
npx @langchain/langgraph-cli build
```

### `langgraphjs up`

Run LangGraph.js API server in Docker.

```bash
npx @langchain/langgraph-cli up
```

### `langgraphjs dockerfile`

Generate a Dockerfile for custom deployments.

```bash
npx @langchain/langgraph-cli dockerfile <save path>
```

## Configuration

The CLI uses a `langgraph.json` configuration file with these key settings:

```json5
{
  // Required: Graph definitions
  graphs: {
    graph: "./src/graph.ts:graph",
  },

  // Optional: Node version (20 only at the moment)
  node_version: "20",

  // Optional: Environment variables
  env: ".env",

  // Optional: Additional Dockerfile commands
  dockerfile_lines: [],
}
```

See the [full documentation](https://langchain-ai.github.io/langgraph/cloud/reference/cli) for detailed configuration options.
- [LangGraph CUA](/javascript/langchain-langgraph-cua)
  # 🤖 LangGraph.js Computer Use Agent (CUA)

> [!TIP]
> Looking for the Python version? [Check out the repo here](https://github.com/langchain-ai/langgraph-cua-py).

A TypeScript library for creating computer use agent (CUA) systems using [LangGraph.js](https://github.com/langchain-ai/langgraphjs). A CUA is a type of agent that can interact with a computer to perform tasks.

Short demo video:
<video src="https://github.com/user-attachments/assets/7fd0ab05-fecc-46f5-961b-6624cb254ac2" controls></video>

> [!TIP]
> This demo used the following prompt:
>
> ```
> I want to contribute to the LangGraph.js project. Please find the GitHub repository, and inspect the read me,
> along with some of the issues and open pull requests. Then, report back with a plan of action to contribute.
> ```

This library is built on top of [LangGraph.js](https://github.com/langchain-ai/langgraphjs), a powerful framework for building agent applications, and comes with out-of-the-box support for [streaming](https://langchain-ai.github.io/langgraph/how-tos/#streaming), [short-term and long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/) and [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/).

## Installation

You will need to explicitly install LangGraph, Core, and OpenAI since these are peer dependencies of this package.

```bash
yarn add @langchain/langgraph-cua @langchain/langgraph @langchain/core @langchain/openai
```

## Quickstart

This project by default uses [Scrapybara](https://scrapybara.com/) for accessing a virtual machine to run the agent. To use LangGraph CUA, you'll need both OpenAI and Scrapybara API keys.

```bash
export OPENAI_API_KEY=<your_api_key>
export SCRAPYBARA_API_KEY=<your_api_key>
```

Then, create the graph by importing the `createCua` function from the `@langchain/langgraph-cua` module.

```typescript
import "dotenv/config";
import { createCua } from "@langchain/langgraph-cua";

const cuaGraph = createCua();

// Define the input messages
const messages = [
  {
    role: "system",
    content:
      "You're an advanced AI computer use assistant. The browser you are using " +
      "is already initialized, and visiting google.com.",
  },
  {
    role: "user",
    content:
      "I want to contribute to the LangGraph.js project. Please find the GitHub repository, and inspect the read me, " +
      "along with some of the issues and open pull requests. Then, report back with a plan of action to contribute.",
  },
];

async function main() {
  // Stream the graph execution
  const stream = await cuaGraph.stream(
    { messages },
    {
      streamMode: "updates",
      subgraphs: true,
    }
  );

  // Process the stream updates
  for await (const update of stream) {
    console.log(update);
  }

  console.log("Done");
}

main().catch(console.error);
```

The above example will invoke the graph, passing in a request for it to do some research into LangGraph.js from the standpoint of a new contributor. The code will log the stream URL, which you can open in your browser to view the CUA stream.

You can find more examples inside the [`examples` directory](/libs/langgraph-cua/examples).

## How to customize

The `createCua` function accepts a few configuration parameters. These are the same configuration parameters that the graph accepts, along with `recursionLimit`.

You can either pass these parameters when calling `createCua`, or at runtime when invoking the graph by passing them to the `config` object.

### Configuration Parameters

- `scrapybaraApiKey`: The API key to use for Scrapybara. If not provided, it defaults to reading the `SCRAPYBARA_API_KEY` environment variable.
- `timeoutHours`: The number of hours to keep the virtual machine running before it times out.
- `zdrEnabled`: Whether or not Zero Data Retention is enabled in the user's OpenAI account. If `true`, the agent will not pass the `previous_response_id` to the model, and will always pass it the full message history for each request. If `false`, the agent will pass the `previous_response_id` to the model, and only the latest message in the history will be passed. Default `false`.
- `recursionLimit`: The maximum number of recursive calls the agent can make. Default is 100. This is greater than the standard default of 25 in LangGraph, because computer use agents are expected to take more iterations.
- `authStateId`: The ID of the authentication state. If defined, it will be used to authenticate with Scrapybara. Only applies if 'environment' is set to 'web'.
- `environment`: The environment to use. Default is `web`. Options are `web`, `ubuntu`, and `windows`.
- `prompt`: The prompt to pass to the model. This will be passed as the system message.
- `nodeBeforeAction`: A custom node to run before the computer action. This function will receive the current state and config as parameters.
- `nodeAfterAction`: A custom node to run after the computer action. This function will receive the current state and config as parameters.
- `stateModifier`: Optional state modifier for customizing the agent's state.

### System Prompts

Including a system prompt with your CUA graph is recommended, and can save the agent time in its initial steps by providing context into its environment and objective. Below is the recommended system prompt from Scrapybara:

<details><summary>System Prompt</summary>
    
    You have access to an Ubuntu VM with internet connectivity. You can install Ubuntu applications using the bash tool (prefer curl over wget).  

    ### Handling HTML and Large Text Output  
    - To read an HTML file, open it in Chromium using the address bar.  

    ### Interacting with Web Pages and Forms  
    - Zoom out or scroll to ensure all content is visible.  
    - When interacting with input fields:  
    - Clear the field first using `Ctrl+A` and `Delete`.  
    - Take an extra screenshot after pressing "Enter" to confirm the input was submitted correctly.  
    - Move the mouse to the next field after submission.  

    ### Efficiency and Authentication  
    - Computer function calls take time; optimize by stringing together related actions when possible.  
    - You are allowed to take actions on authenticated sites on behalf of the user.  
    - Assume the user has already authenticated if they request access to a site.  
    - For logging into additional sites, ask the user to use Auth Contexts or the Interactive Desktop.  

    ### Handling Black Screens  
    - If the first screenshot shows a black screen:  
    - Click the center of the screen.  
    - Take another screenshot.  

    ### Best Practices  
    - If given a complex task, break it down into smaller steps and ask for details only when necessary.  
    - Read web pages thoroughly by scrolling down until sufficient information is gathered.  
    - Explain each action you take and why.  
    - Avoid asking for confirmation on routine actions (e.g., pressing "Enter" after typing a URL). Seek clarification only for ambiguous or critical actions (e.g., deleting files or submitting sensitive information).  
    - If a user's request implies the need for external information, assume they want you to search for it and provide the answer directly.  

    ### Date Context  
    Today's date is {todays_date}


If you choose to use this prompt, ensure you're populating the `{todays_date}` placeholder with the current date.

</details>
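
Since the prompt is a plain string, filling the `{todays_date}` placeholder is a simple string replacement (a sketch; `SYSTEM_PROMPT` stands in for the full prompt shown above):

```typescript
// Stand-in for the full Scrapybara system prompt shown above.
const SYSTEM_PROMPT = "...\n### Date Context\nToday's date is {todays_date}";

// Fill the placeholder before passing the prompt to createCua.
const prompt = SYSTEM_PROMPT.replace("{todays_date}", new Date().toDateString());
```

The resulting string can then be passed as the `prompt` configuration parameter.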

### Node Before/After Action

LangGraph CUA allows you to customize the agent's behavior by providing custom nodes that run before and after computer actions. These nodes give you fine-grained control over the agent's workflow.

```typescript
import { createCua, CUAState, CUAUpdate } from "@langchain/langgraph-cua";
import { LangGraphRunnableConfig } from "@langchain/langgraph";

// Custom node that runs before a computer action
async function customNodeBefore(
  state: CUAState,
  config: LangGraphRunnableConfig
): Promise<CUAUpdate> {
  console.log("Running before computer action");
  // You can modify the state here
  return {};
}

// Custom node that runs after a computer action
async function customNodeAfter(
  state: CUAState,
  config: LangGraphRunnableConfig
): Promise<CUAUpdate> {
  console.log("Running after computer action");
  // You can process the results of the computer action here
  return {};
}

const cuaGraph = createCua({
  nodeBeforeAction: customNodeBefore,
  nodeAfterAction: customNodeAfter,
});
```

These custom nodes allow you to:

- Perform validation or preprocessing before a computer action
- Modify or analyze the results after a computer action
- Implement custom logic that integrates with your application (e.g. for Generative UI)

### State Modifier

The `stateModifier` parameter allows you to customize the agent's state by extending the default state annotation. This gives you the ability to add custom fields to the state object.

```typescript
import { createCua, CUAAnnotation } from "@langchain/langgraph-cua";
import { Annotation } from "@langchain/langgraph";

// Create a custom state annotation that extends the default CUA state
// Create a custom state annotation that extends the default CUA state
const CustomStateAnnotation = Annotation.Root({
  ...CUAAnnotation.spec,
  // Add your custom fields here
  customField: Annotation<string>({
    reducer: (_current, update) => update,
    default: () => "default value",
  }),
});

const cuaGraph = createCua({
  stateModifier: CustomStateAnnotation,
});
```

By using state modifiers, you can:

- Store additional context or metadata in the agent's state
- Customize the default behavior of the agent
- Implement domain-specific functionality

### Screenshot Upload

The `uploadScreenshot` parameter allows you to upload screenshots to a storage service (e.g., an image hosting service) and return the URL. This is useful, because storing screenshots in the state object can quickly consume your LangGraph server's disk space.

```typescript
import { createCua } from "@langchain/langgraph-cua";

const cuaGraph = createCua({
  uploadScreenshot: async (base64Screenshot) => {
    // Upload screenshot to storage service
    const publicImageUrl = await uploadToS3(base64Screenshot);
    return publicImageUrl;
  },
});
```


## Auth States

LangGraph CUA integrates with Scrapybara's [auth states API](https://docs.scrapybara.com/auth-states) to persist browser authentication sessions. This allows you to authenticate once (e.g., logging into Amazon) and reuse that session in future runs.

### Using Auth States

Pass an `authStateId` when creating your CUA graph:

```typescript
import { createCua } from "@langchain/langgraph-cua";

const cuaGraph = createCua({ authStateId: "<your_auth_state_id>" });
```

The graph stores this ID in the `authenticatedId` state field. If you change the `authStateId` in future runs, the graph will automatically reauthenticate.

### Managing Auth States with Scrapybara SDK

#### Save an Auth State

```typescript
import { ScrapybaraClient } from "scrapybara";

const client = new ScrapybaraClient({ apiKey: "<api_key>" });
const instance = await client.get("<instance_id>");
const authStateId = (await instance.saveAuth({ name: "example_site" })).authStateId;
```

#### Modify an Auth State

```typescript
import { ScrapybaraClient } from "scrapybara";

const client = new ScrapybaraClient({ apiKey: "<api_key>" });
const instance = await client.get("<instance_id>");
await instance.modifyAuth({ authStateId: "your_existing_auth_state_id", name: "renamed_auth_state" });
```

> [!NOTE]
> To apply changes to an auth state in an existing run, set the `authenticatedId` state field to `undefined` to trigger re-authentication.

## Zero Data Retention (ZDR)

LangGraph CUA supports Zero Data Retention (ZDR) via the `zdrEnabled` configuration parameter. When set to `true`, the graph will _not_ rely on `previous_response_id`, and _all_ AI & tool messages will be passed to OpenAI on each request.

## Development

To get started with development, first clone the repository:

```bash
git clone https://github.com/langchain-ai/langgraphjs.git
```

Install dependencies:

```bash
yarn install
```

Navigate into the `libs/langgraph-cua` directory:

```bash
cd libs/langgraph-cua
```

Set the required environment variables:

```bash
cp .env.example .env
```

Finally, you can then run the integration tests:

```bash
yarn test:single src/tests/cua.int.test.ts
```
- [LangGraph Supervisor](/javascript/langchain-langgraph-supervisor)
  # 🤖 LangGraph Multi-Agent Supervisor

A JavaScript library for creating hierarchical multi-agent systems using [LangGraph](https://github.com/langchain-ai/langgraphjs). Hierarchical systems are a type of [multi-agent](https://langchain-ai.github.io/langgraphjs/concepts/multi_agent) architecture where specialized agents are coordinated by a central **supervisor** agent. The supervisor controls all communication flow and task delegation, making decisions about which agent to invoke based on the current context and task requirements.

## Features

- 🤖 **Create a supervisor agent** to orchestrate multiple specialized agents
- 🛠️ **Tool-based agent handoff mechanism** for communication between agents
- 📝 **Flexible message history management** for conversation control

This library is built on top of [LangGraph](https://github.com/langchain-ai/langgraphjs), a powerful framework for building agent applications, and comes with out-of-the-box support for [streaming](https://langchain-ai.github.io/langgraphjs/how-tos/#streaming), [short-term and long-term memory](https://langchain-ai.github.io/langgraphjs/concepts/memory/) and [human-in-the-loop](https://langchain-ai.github.io/langgraphjs/concepts/human_in_the_loop/).

## Installation

```bash
npm install @langchain/langgraph-supervisor @langchain/langgraph @langchain/core
```

## Quickstart

Here's a simple example of a supervisor managing two specialized agents:

![Supervisor Architecture](https://raw.githubusercontent.com/langchain-ai/langgraphjs/e493f1053f37e6171ef8ccb2e58f2e79e19ddf36/libs/langgraph-supervisor/static/img/supervisor.png)

```bash
npm install @langchain/langgraph-supervisor @langchain/langgraph @langchain/core @langchain/openai

export OPENAI_API_KEY=<your_api_key>
```

```ts
import { ChatOpenAI } from "@langchain/openai";
import { createSupervisor } from "@langchain/langgraph-supervisor";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const model = new ChatOpenAI({ modelName: "gpt-4o" });

// Create specialized agents
const add = tool(
  async (args) => args.a + args.b,
  {
    name: "add",
    description: "Add two numbers.",
    schema: z.object({
      a: z.number(),
      b: z.number()
    })
  }
);

const multiply = tool(
  async (args) => args.a * args.b,
  {
    name: "multiply", 
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number(),
      b: z.number()
    })
  }
);

const webSearch = tool(
  async (args) => {
    return (
      "Here are the headcounts for each of the FAANG companies in 2024:\n" +
      "1. **Facebook (Meta)**: 67,317 employees.\n" +
      "2. **Apple**: 164,000 employees.\n" +
      "3. **Amazon**: 1,551,000 employees.\n" +
      "4. **Netflix**: 14,000 employees.\n" +
      "5. **Google (Alphabet)**: 181,269 employees."
    );
  },
  {
    name: "web_search",
    description: "Search the web for information.",
    schema: z.object({
      query: z.string()
    })
  }
);

const mathAgent = createReactAgent({
  llm: model,
  tools: [add, multiply],
  name: "math_expert",
  prompt: "You are a math expert. Always use one tool at a time."
});

const researchAgent = createReactAgent({
  llm: model,
  tools: [webSearch],
  name: "research_expert",
  prompt: "You are a world class researcher with access to web search. Do not do any math."
});

// Create supervisor workflow
const workflow = createSupervisor({
  agents: [researchAgent, mathAgent],
  llm: model,
  prompt: 
    "You are a team supervisor managing a research expert and a math expert. " +
    "For current events, use research_expert. " +
    "For math problems, use math_expert."
});

// Compile and run
const app = workflow.compile();
const result = await app.invoke({
  messages: [
    {
      role: "user",
      content: "what's the combined headcount of the FAANG companies in 2024??"
    }
  ]
});
```

## Message History Management

You can control how agent messages are added to the overall conversation history of the multi-agent system:

Include full message history from an agent:

![Full History](https://raw.githubusercontent.com/langchain-ai/langgraphjs/e493f1053f37e6171ef8ccb2e58f2e79e19ddf36/libs/langgraph-supervisor/static/img/full_history.png)

```ts
const workflow = createSupervisor({
  agents: [agent1, agent2],
  outputMode: "full_history"
})
```

Include only the final agent response:

![Last Message](https://raw.githubusercontent.com/langchain-ai/langgraphjs/e493f1053f37e6171ef8ccb2e58f2e79e19ddf36/libs/langgraph-supervisor/static/img/last_message.png)

```ts
const workflow = createSupervisor({
  agents: [agent1, agent2],
  outputMode: "last_message"
})
```

## Multi-level Hierarchies

You can create multi-level hierarchical systems by creating a supervisor that manages multiple supervisors.

```ts
const researchTeam = createSupervisor({
  agents: [researchAgent, mathAgent],
  llm: model,
}).compile({ name: "research_team" })

const writingTeam = createSupervisor({
  agents: [writingAgent, publishingAgent],
  llm: model,
}).compile({ name: "writing_team" })

const topLevelSupervisor = createSupervisor({
  agents: [researchTeam, writingTeam],
  llm: model,
}).compile({ name: "top_level_supervisor" })
```

## Adding Memory

You can add [short-term](https://langchain-ai.github.io/langgraphjs/how-tos/persistence/) and [long-term](https://langchain-ai.github.io/langgraphjs/how-tos/cross-thread-persistence/) [memory](https://langchain-ai.github.io/langgraphjs/concepts/memory/) to your supervisor multi-agent system. Since `createSupervisor()` returns an instance of `StateGraph` that needs to be compiled before use, you can directly pass a [checkpointer](https://langchain-ai.github.io/langgraphjs/reference/classes/checkpoint.BaseCheckpointSaver.html) or a [store](https://langchain-ai.github.io/langgraphjs/reference/classes/checkpoint.BaseStore.html) instance to the `.compile()` method:

```ts
import { MemorySaver, InMemoryStore } from "@langchain/langgraph";

const checkpointer = new MemorySaver()
const store = new InMemoryStore()

const model = ...
const researchAgent = ...
const mathAgent = ...

const workflow = createSupervisor({
  agents: [researchAgent, mathAgent],
  llm: model,
  prompt: "You are a team supervisor managing a research expert and a math expert.",
})

// Compile with checkpointer/store
const app = workflow.compile({
  checkpointer,
  store
})
```
- [LangGraph Swarm](/javascript/langchain-langgraph-swarm)
  # 🤖 LangGraph Multi-Agent Swarm 

A JavaScript library for creating swarm-style multi-agent systems using [LangGraph](https://github.com/langchain-ai/langgraphjs). A swarm is a type of [multi-agent](https://langchain-ai.github.io/langgraphjs/concepts/multi_agent) architecture where agents dynamically hand off control to one another based on their specializations. The system remembers which agent was last active, ensuring that on subsequent interactions, the conversation resumes with that agent.

![Swarm](https://raw.githubusercontent.com/langchain-ai/langgraphjs/e493f1053f37e6171ef8ccb2e58f2e79e19ddf36/libs/langgraph-swarm/static/img/swarm.png)

> [!NOTE]
> This library has been updated to support LangChain 1.0. However, it has **not** been tested with the new agents in `langchain`. The library currently only supports the prebuilt `createReactAgent` from LangGraph. This update allows users to migrate to LangChain 1.0 without changing their existing code. For users of the swarm package, we recommend continuing to use `createReactAgent` rather than the new `createAgent` pattern from LangChain for now.

## Features

- 🤖 **Multi-agent collaboration** - Enable specialized agents to work together and hand off context to each other
- 🛠️ **Customizable handoff tools** - Built-in tools for communication between agents

This library is built on top of [LangGraph](https://github.com/langchain-ai/langgraphjs), a powerful framework for building agent applications, and comes with out-of-the-box support for [streaming](https://langchain-ai.github.io/langgraphjs/how-tos/#streaming), [short-term and long-term memory](https://langchain-ai.github.io/langgraphjs/concepts/memory/) and [human-in-the-loop](https://langchain-ai.github.io/langgraphjs/concepts/human_in_the_loop/).

## Installation

```bash
npm install @langchain/langgraph-swarm @langchain/langgraph @langchain/core
```

## Quickstart

```bash
npm install @langchain/langgraph-swarm @langchain/langgraph @langchain/core @langchain/openai

export OPENAI_API_KEY=<your_api_key>
```

```ts
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { MemorySaver } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { createSwarm, createHandoffTool } from "@langchain/langgraph-swarm";

const model = new ChatOpenAI({ modelName: "gpt-4o" });

// Create specialized tools
const add = tool(
  async (args) => args.a + args.b,
  {
    name: "add",
    description: "Add two numbers.",
    schema: z.object({
      a: z.number(),
      b: z.number()
    })
  }
);

// Create agents with handoff tools
const alice = createReactAgent({
  llm: model,
  tools: [add, createHandoffTool({ agentName: "Bob" })],
  name: "Alice",
  prompt: "You are Alice, an addition expert."
});

const bob = createReactAgent({
  llm: model,
  tools: [createHandoffTool({ 
    agentName: "Alice", 
    description: "Transfer to Alice, she can help with math" 
  })],
  name: "Bob",
  prompt: "You are Bob, you speak like a pirate."
});

// Create swarm workflow
const checkpointer = new MemorySaver();
const workflow = createSwarm({
  agents: [alice, bob],
  defaultActiveAgent: "Alice"
});

export const app = workflow.compile({ 
  checkpointer 
});

const config = { configurable: { thread_id: "1" } };
const turn1 = await app.invoke(
  { messages: [{ role: "user", content: "i'd like to speak to Bob" }] },
  config
);
console.log(turn1);

const turn2 = await app.invoke(
  { messages: [{ role: "user", content: "what's 5 + 7?" }] },
  config
);
console.log(turn2);
```

## Memory

You can add [short-term](https://langchain-ai.github.io/langgraphjs/how-tos/persistence/) and [long-term](https://langchain-ai.github.io/langgraphjs/how-tos/cross-thread-persistence/) [memory](https://langchain-ai.github.io/langgraphjs/concepts/memory/) to your swarm multi-agent system. Since `createSwarm()` returns an instance of `StateGraph` that needs to be compiled before use, you can directly pass a [checkpointer](https://langchain-ai.github.io/langgraphjs/reference/classes/checkpoint.BaseCheckpointSaver.html) or a [store](https://langchain-ai.github.io/langgraphjs/reference/classes/checkpoint.BaseStore.html) instance to the `.compile()` method:

```ts
import { MemorySaver, InMemoryStore } from "@langchain/langgraph";

// short-term memory
const checkpointer = new MemorySaver()
// long-term memory
const store = new InMemoryStore()

// alice and bob defined as in the quickstart
const alice = ...
const bob = ...

const workflow = createSwarm({
  agents: [alice, bob],
  defaultActiveAgent: "Alice",
})

// Compile with checkpointer/store
const app = workflow.compile({
  checkpointer,
  store
})
```

> [!IMPORTANT]
> Adding [short-term memory](https://langchain-ai.github.io/langgraphjs/concepts/persistence/) is crucial for maintaining conversation state across multiple interactions. Without it, the swarm would "forget" which agent was last active and lose the conversation history. Always compile the swarm with a checkpointer if you plan to use it in multi-turn conversations, e.g., `workflow.compile({ checkpointer })`.

## How to customize

You can customize the multi-agent swarm by changing either the [handoff tools](#customizing-handoff-tools) implementation or the [agent implementation](#customizing-agent-implementation).

### Customizing handoff tools

By default, the agents in the swarm are assumed to use handoff tools created with the prebuilt `createHandoffTool`. You can also create your own custom handoff tools. Here are some ways you can modify the default implementation:

* change the tool name and/or description
* add tool call arguments for the LLM to populate, for example a task description for the next agent
* change what data is passed to the next agent as part of the handoff: by default, `createHandoffTool` passes the **full** message history (all of the messages generated in the swarm up to this point), as well as the contents of `Command.update`, to the next agent

> [!IMPORTANT]
> If you want to change what messages are passed to the next agent, you **must** use a different state schema key for `messages` in your agent implementation (e.g., `alice_messages`). By default, all agent (subgraph) state updates are applied to the swarm (parent) graph state during the handoff. Since all of the agents by default are assumed to communicate over a single `messages` key, this means that the agent's messages are **automatically combined** into the parent graph's `messages`, unless an agent uses a different key for `messages`. See more on this in the [customizing agent implementation](#customizing-agent-implementation) section.

Here is an example of what a custom handoff tool might look like:

```ts
import { z } from "zod";
import { BaseMessage, ToolMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { Command, getCurrentTaskInput } from "@langchain/langgraph";

const createCustomHandoffTool = ({
  agentName,
  toolName,
  toolDescription,
}: {
  agentName: string;
  toolName: string;
  toolDescription: string;
}) => {

  const handoffTool = tool(
    async (args, config) => {
      const toolMessage = new ToolMessage({
        content: `Successfully transferred to ${agentName}`,
        name: toolName,
        tool_call_id: config.toolCall.id,
      });

      // you can use a different messages state key here, if your agent uses a different schema
      // e.g., "alice_messages" instead of "messages"
      // see this how-to guide for more details: 
      // https://langchain-ai.github.io/langgraphjs/how-tos/pass-run-time-values-to-tools/
      const { messages } = (getCurrentTaskInput() as { messages: BaseMessage[] });
      const lastAgentMessage = messages[messages.length - 1];
      return new Command({
        goto: agentName,
        graph: Command.PARENT,
        // NOTE: this is a state update that will be applied to the swarm multi-agent graph (i.e., the PARENT graph)
        update: {
          messages: [lastAgentMessage, toolMessage],
          activeAgent: agentName,
          // optionally pass the task description to the next agent
          taskDescription: args.taskDescription,
        },
      });
    },
    {
      name: toolName,
      schema: z.object({
        // you can add additional tool call arguments for the LLM to populate
        // for example, you can ask the LLM to populate a task description for the next agent
        taskDescription: z.string().describe("Detailed description of what the next agent should do, including all of the relevant context")
      }),
      description: toolDescription,
    }
  );

  return handoffTool;
}
```

> [!IMPORTANT]
> If you are implementing custom handoff tools that return `Command`, you need to ensure that:
> 1. your agent has a tool-calling node that can handle tools returning `Command` (like LangGraph's prebuilt [`ToolNode`](https://langchain-ai.github.io/langgraphjs/reference/classes/langgraph_prebuilt.ToolNode.html))
> 2. both the swarm graph and the next agent graph have the [state schema](https://langchain-ai.github.io/langgraphjs/concepts/low_level/#annotation) containing the keys you want to update in `Command.update`

### Customizing agent implementation

By default, individual agents are expected to communicate over a single `messages` key that is shared by all agents and the overall multi-agent swarm graph. This means that **all** of the messages from **all** of the agents will be combined into a single, shared list of messages. This might not be desirable if you don't want to expose an agent's internal history of messages. To change this, you can customize the agent by taking the following steps:

1.  use custom [state schema](https://langchain-ai.github.io/langgraphjs/concepts/low_level#annotation) with a different key for messages, for example `alice_messages`
1.  write a wrapper that converts the parent graph state to the child agent state and back (see this [how-to](https://langchain-ai.github.io/langgraphjs/how-tos/subgraph-transform-state/) guide)

```ts
import { BaseMessage } from "@langchain/core/messages";
import { Annotation, StateGraph, messagesStateReducer } from "@langchain/langgraph";
import { SwarmState } from "@langchain/langgraph-swarm";

export const AliceStateAnnotation = Annotation.Root({
  alice_messages: Annotation<BaseMessage[]>({
    reducer: messagesStateReducer,
    default: () => [],
  }),
});

// see this guide to learn how you can implement a custom tool-calling agent
// https://langchain-ai.github.io/langgraphjs/how-tos/react-agent-from-scratch/
const alice = (
    new StateGraph(AliceStateAnnotation)
    .addNode("model", ...)
    .addNode("tools", ...)
    .addEdge(...)
    ...
    .compile()
)

// wrapper calling the agent
const callAlice = async (state: typeof SwarmState.State) => {
    // input transformation from parent state -> agent state
    // (for example, invoke "alice" with a task description populated by the LLM)
    const response = await alice.invoke({ alice_messages: state.messages });
    // output transformation from agent state -> parent state
    return { messages: response.alice_messages };
}

const callBob = async (state: typeof SwarmState.State) => {
    ...
}
```

Then, you can create the swarm manually in the following way:

```ts
import { addActiveAgentRouter } from "@langchain/langgraph-swarm";

let workflow = (
    new StateGraph(SwarmState)
    .addNode("Alice", callAlice, { ends: ["Bob"] })
    .addNode("Bob", callBob, { ends: ["Alice"] })
)
// this is the router that enables us to keep track of the last active agent
workflow = addActiveAgentRouter(workflow, {
    routeTo: ["Alice", "Bob"],
    defaultActiveAgent: "Alice",
})

// compile the workflow
const app = workflow.compile()
```
- [Deep Agents](/javascript/deepagents)
  <div align="center">
  <a href="https://docs.langchain.com/oss/python/deepagents/overview#deep-agents-overview">
    <picture>
      <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/langchain-ai/deepagentsjs/refs/heads/main/.github/images/logo-light.svg">
      <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/langchain-ai/deepagentsjs/refs/heads/main/.github/images/logo-dark.svg">
      <img alt="Deep Agents Logo" src="https://raw.githubusercontent.com/langchain-ai/deepagentsjs/refs/heads/main/.github/images/logo-dark.svg" width="50%">
    </picture>
  </a>
</div>

<div align="center">
  <h3>The batteries-included agent harness.</h3>
</div>

<div align="center">
  <a href="https://www.npmjs.com/package/deepagents"><img src="https://img.shields.io/npm/v/deepagents.svg" alt="npm version"></a>
  <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
  <a href="https://www.typescriptlang.org/"><img src="https://img.shields.io/badge/TypeScript-5.0+-blue.svg" alt="TypeScript"></a>
  <a href="https://x.com/LangChain_JS" target="_blank"><img src="https://img.shields.io/twitter/url/https/twitter.com/LangChain_JS.svg?style=social&label=Follow%20%40LangChain_JS" alt="Twitter / X"></a>
</div>

Using an LLM to call tools in a loop is the simplest form of an agent. This architecture, however, can yield agents that are "shallow" and fail to plan and act over longer, more complex tasks.

Applications like "Deep Research", "Manus", and "Claude Code" have gotten around this limitation by implementing a combination of four things:
a **planning tool**, **sub agents**, access to a **file system**, and a **detailed prompt**.

`deepagents` is a TypeScript package that implements these in a general purpose way so that you can easily create a Deep Agent for your application.

> 💡 **Tip:** Looking for the Python version of this package? See [langchain-ai/deepagents](https://github.com/langchain-ai/deepagents)

<div align="center">

[Documentation](https://docs.langchain.com/oss/javascript/deepagents/overview) | [Examples](https://github.com/langchain-ai/deepagentsjs/tree/1a503b23dfa3e400f722264cb4a2f189959b4f17/libs/deepagents/examples) | [Report Bug](https://github.com/langchain-ai/deepagentsjs/issues) | [Request Feature](https://github.com/langchain-ai/deepagentsjs/issues)

</div>

## 📖 Overview

Using an LLM to call tools in a loop is the simplest form of an agent. However, this architecture can yield agents that are "shallow" and fail to plan and act over longer, more complex tasks.

Applications like **Deep Research**, **Manus**, and **Claude Code** have overcome this limitation by implementing a combination of four key components:

1. **Planning Tool** - Strategic task decomposition
2. **Sub-Agents** - Specialized agents for subtasks
3. **File System Access** - Persistent state and memory
4. **Detailed Prompts** - Context-rich instructions

**Deep Agents** is a TypeScript package that implements these patterns in a general-purpose way, enabling you to easily create sophisticated agents for your applications.

## ✨ Features

- 🎯 **Task Planning & Decomposition** - Break complex tasks into manageable steps
- 🤖 **Sub-Agent Architecture** - Delegate specialized work to focused agents
- 💾 **File System Integration** - Persistent memory and state management
- 🌊 **Streaming Support** - Real-time updates, token streaming, and progress tracking
- 🔄 **LangGraph Powered** - Built on the robust LangGraph framework
- 📝 **TypeScript First** - Full type safety and IntelliSense support
- 🔌 **Extensible** - Easy to customize and extend for your use case

## Installation

```bash
# npm
npm install deepagents

# yarn
yarn add deepagents

# pnpm
pnpm add deepagents
```

## Usage

(To run the example below, you will need to `npm install @langchain/tavily`).

Make sure to set `TAVILY_API_KEY` in your environment. You can generate one [here](https://www.tavily.com/).

```typescript
import { tool } from "langchain";
import { TavilySearch } from "@langchain/tavily";
import { createDeepAgent } from "deepagents";
import { z } from "zod";

// Web search tool
const internetSearch = tool(
  async ({
    query,
    maxResults = 5,
    topic = "general",
    includeRawContent = false,
  }: {
    query: string;
    maxResults?: number;
    topic?: "general" | "news" | "finance";
    includeRawContent?: boolean;
  }) => {
    const tavilySearch = new TavilySearch({
      maxResults,
      tavilyApiKey: process.env.TAVILY_API_KEY,
      includeRawContent,
      topic,
    });
    return await tavilySearch._call({ query });
  },
  {
    name: "internet_search",
    description: "Run a web search",
    schema: z.object({
      query: z.string().describe("The search query"),
      maxResults: z
        .number()
        .optional()
        .default(5)
        .describe("Maximum number of results to return"),
      topic: z
        .enum(["general", "news", "finance"])
        .optional()
        .default("general")
        .describe("Search topic category"),
      includeRawContent: z
        .boolean()
        .optional()
        .default(false)
        .describe("Whether to include raw content"),
    }),
  },
);

// System prompt to steer the agent to be an expert researcher
const researchInstructions = `You are an expert researcher. Your job is to conduct thorough research, and then write a polished report.

You have access to an internet search tool as your primary means of gathering information.

## \`internet_search\`

Use this to run an internet search for a given query. You can specify the max number of results to return, the topic, and whether raw content should be included.
`;

// Create the deep agent
const agent = createDeepAgent({
  tools: [internetSearch],
  systemPrompt: researchInstructions,
});

// Invoke the agent
const result = await agent.invoke({
  messages: [{ role: "user", content: "What is langgraph?" }],
});
```

See [examples/research/research-agent.ts](https://github.com/langchain-ai/deepagentsjs/blob/1a503b23dfa3e400f722264cb4a2f189959b4f17/libs/deepagents/examples/research/research-agent.ts) for a more complex example.

The agent created with `createDeepAgent` is just a LangGraph graph - so you can interact with it (streaming, human-in-the-loop, memory, studio)
in the same way you would any LangGraph agent.

## Core Capabilities

**Planning & Task Decomposition**

Deep Agents include a built-in `write_todos` tool that enables agents to break down complex tasks into discrete steps, track progress, and adapt plans as new information emerges.

**Context Management**

File system tools (`ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`) allow agents to offload large context to memory, preventing context window overflow and enabling work with variable-length tool results.

**Subagent Spawning**

A built-in `task` tool enables agents to spawn specialized subagents for context isolation. This keeps the main agent's context clean while still going deep on specific subtasks.

**Long-term Memory**

Extend agents with persistent memory across threads using LangGraph's Store. Agents can save and retrieve information from previous conversations.

## Customizing Deep Agents

There are several parameters you can pass to `createDeepAgent` to create your own custom deep agent.

### `model`

By default, `deepagents` uses `"claude-sonnet-4-5-20250929"`. You can customize this by passing any [LangChain model object](https://js.langchain.com/docs/integrations/chat/).

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatOpenAI } from "@langchain/openai";
import { createDeepAgent } from "deepagents";

// Using Anthropic
const agent = createDeepAgent({
  model: new ChatAnthropic({
    model: "claude-sonnet-4-20250514",
    temperature: 0,
  }),
});

// Using OpenAI
const agent2 = createDeepAgent({
  model: new ChatOpenAI({
    model: "gpt-5",
    temperature: 0,
  }),
});
```

### `systemPrompt`

Deep Agents come with a built-in system prompt. It is a relatively detailed prompt, heavily based on and inspired by [attempts](https://github.com/kn1026/cc/blob/main/claudecode.md) to [replicate](https://github.com/asgeirtj/system_prompts_leaks/blob/main/Anthropic/claude-code.md) Claude Code's system prompt, but made more general purpose. The default prompt contains detailed instructions for how to use the built-in planning tool, file system tools, and subagents.

Each deep agent tailored to a use case should include a custom system prompt specific to that use case as well. The importance of prompting for creating a successful deep agent cannot be overstated.

```typescript
import { createDeepAgent } from "deepagents";

const researchInstructions = `You are an expert researcher. Your job is to conduct thorough research, and then write a polished report.`;

const agent = createDeepAgent({
  systemPrompt: researchInstructions,
});
```

### `tools`

Just like with tool-calling agents, you can provide a deep agent with a set of tools that it has access to.

```typescript
import { tool } from "langchain";
import { TavilySearch } from "@langchain/tavily";
import { createDeepAgent } from "deepagents";
import { z } from "zod";

const internetSearch = tool(
  async ({
    query,
    maxResults = 5,
    topic = "general",
    includeRawContent = false,
  }: {
    query: string;
    maxResults?: number;
    topic?: "general" | "news" | "finance";
    includeRawContent?: boolean;
  }) => {
    const tavilySearch = new TavilySearch({
      maxResults,
      tavilyApiKey: process.env.TAVILY_API_KEY,
      includeRawContent,
      topic,
    });
    return await tavilySearch._call({ query });
  },
  {
    name: "internet_search",
    description: "Run a web search",
    schema: z.object({
      query: z.string().describe("The search query"),
      maxResults: z.number().optional().default(5),
      topic: z
        .enum(["general", "news", "finance"])
        .optional()
        .default("general"),
      includeRawContent: z.boolean().optional().default(false),
    }),
  },
);

const agent = createDeepAgent({
  tools: [internetSearch],
});
```

### `middleware`

`createDeepAgent` is implemented with middleware that can be customized. You can provide additional middleware to extend functionality, add tools, or implement custom hooks.

```typescript
import { tool } from "langchain";
import { createDeepAgent } from "deepagents";
import type { AgentMiddleware } from "langchain";
import { z } from "zod";

const getWeather = tool(
  async ({ city }: { city: string }) => {
    return `The weather in ${city} is sunny.`;
  },
  {
    name: "get_weather",
    description: "Get the weather in a city.",
    schema: z.object({
      city: z.string().describe("The city to get weather for"),
    }),
  },
);

const getTemperature = tool(
  async ({ city }: { city: string }) => {
    return `The temperature in ${city} is 70 degrees Fahrenheit.`;
  },
  {
    name: "get_temperature",
    description: "Get the temperature in a city.",
    schema: z.object({
      city: z.string().describe("The city to get temperature for"),
    }),
  },
);

class WeatherMiddleware implements AgentMiddleware {
  tools = [getWeather, getTemperature];
}

const agent = createDeepAgent({
  model: "claude-sonnet-4-20250514",
  middleware: [new WeatherMiddleware()],
});
```

### `subagents`

A main feature of Deep Agents is their ability to spawn subagents. You can specify custom subagents that your agent can hand off work to in the `subagents` parameter. Subagents are useful for context quarantine (keeping the main agent's context from being polluted) as well as for custom instructions.

`subagents` should be a list of objects that follow the `SubAgent` interface:

```typescript
interface SubAgent {
  name: string;
  description: string;
  systemPrompt: string;
  tools?: StructuredTool[];
  model?: LanguageModelLike | string;
  middleware?: AgentMiddleware[];
  interruptOn?: Record<string, boolean | InterruptOnConfig>;
  skills?: string[];
}
```

**SubAgent fields:**

- **name**: The name of the subagent, and how the main agent will call it.
- **description**: The description of the subagent shown to the main agent.
- **systemPrompt**: The prompt used for the subagent.
- **tools**: The list of tools the subagent has access to.
- **model**: Optional model name or model instance.
- **middleware**: Additional middleware to attach to the subagent. See [here](https://docs.langchain.com/oss/typescript/langchain/middleware) for an introduction to middleware and how it works with `createAgent`.
- **interruptOn**: A custom interrupt config that specifies human-in-the-loop interactions for your tools.
- **skills**: Skill source paths for the subagent (e.g., `["/skills/research/"]`). See skills inheritance below.

#### Skills Inheritance

When you configure `skills` on the main agent via `createDeepAgent`, the behavior differs between subagent types:

- **General-purpose subagent**: Automatically inherits skills from the main agent. This subagent has access to all the same skills as the main agent.
- **Custom subagents**: Do NOT inherit skills from the main agent by default. If you want a custom subagent to have access to skills, you must explicitly define the `skills` property on that subagent.

```typescript
const agent = createDeepAgent({
  model: "claude-sonnet-4-20250514",
  skills: ["/skills/"], // Main agent and general-purpose subagent get these skills
  subagents: [
    {
      name: "researcher",
      description: "Research assistant",
      systemPrompt: "You are a researcher.",
      // This subagent will NOT have access to /skills/ from the main agent
    },
    {
      name: "coder",
      description: "Coding assistant",
      systemPrompt: "You are a coder.",
      skills: ["/skills/coding/"], // This subagent has its own skills
    },
  ],
});
```

This design ensures context isolation: custom subagents only have access to the skills they explicitly need, preventing unintended skill leakage between specialized agents.

#### Using SubAgent

```typescript
import { tool } from "langchain";
import { TavilySearch } from "@langchain/tavily";
import { createDeepAgent, type SubAgent } from "deepagents";
import { z } from "zod";

const internetSearch = tool(
  async ({
    query,
    maxResults = 5,
    topic = "general",
    includeRawContent = false,
  }: {
    query: string;
    maxResults?: number;
    topic?: "general" | "news" | "finance";
    includeRawContent?: boolean;
  }) => {
    const tavilySearch = new TavilySearch({
      maxResults,
      tavilyApiKey: process.env.TAVILY_API_KEY,
      includeRawContent,
      topic,
    });
    return await tavilySearch._call({ query });
  },
  {
    name: "internet_search",
    description: "Run a web search",
    schema: z.object({
      query: z.string(),
      maxResults: z.number().optional().default(5),
      topic: z
        .enum(["general", "news", "finance"])
        .optional()
        .default("general"),
      includeRawContent: z.boolean().optional().default(false),
    }),
  },
);

const researchSubagent: SubAgent = {
  name: "research-agent",
  description: "Used to research more in depth questions",
  systemPrompt: "You are a great researcher",
  tools: [internetSearch],
  model: "gpt-4o", // Optional override, defaults to main agent model
};

const subagents = [researchSubagent];

const agent = createDeepAgent({
  model: "claude-sonnet-4-20250514",
  subagents: subagents,
});
```

### `interruptOn`

A common reality for agents is that some tool operations may be sensitive and require human approval before execution. Deep Agents supports human-in-the-loop workflows through LangGraph's interrupt capabilities. You can configure which tools require approval; because the agent must pause and resume, this also requires a checkpointer.

These tool configs are passed to our prebuilt [HITL middleware](https://docs.langchain.com/oss/typescript/langchain/middleware#human-in-the-loop) so that the agent pauses execution and waits for feedback from the user before executing configured tools.

```typescript
import { tool } from "langchain";
import { createDeepAgent } from "deepagents";
import { z } from "zod";

const getWeather = tool(
  async ({ city }: { city: string }) => {
    return `The weather in ${city} is sunny.`;
  },
  {
    name: "get_weather",
    description: "Get the weather in a city.",
    schema: z.object({
      city: z.string(),
    }),
  },
);

const agent = createDeepAgent({
  model: "claude-sonnet-4-20250514",
  tools: [getWeather],
  interruptOn: {
    get_weather: {
      allowedDecisions: ["approve", "edit", "reject"],
    },
  },
});
```

### `backend`

Deep Agents use backends to manage file system operations and memory storage. You can configure different backends depending on your needs:

```typescript
import {
  createDeepAgent,
  StateBackend,
  StoreBackend,
  FilesystemBackend,
  LocalShellBackend,
  CompositeBackend,
} from "deepagents";
import { MemorySaver } from "@langchain/langgraph";
import { InMemoryStore } from "@langchain/langgraph-checkpoint";

// Default: StateBackend (in-memory, ephemeral)
const agent1 = createDeepAgent({
  // No backend specified - uses StateBackend by default
});

// StoreBackend: Persistent storage using LangGraph Store
const agent2 = createDeepAgent({
  backend: (config) => new StoreBackend(config),
  store: new InMemoryStore(), // Provide a store
  checkpointer: new MemorySaver(), // Optional: for conversation persistence
});

// FilesystemBackend: Store files on actual filesystem
const agent3 = createDeepAgent({
  backend: (config) => new FilesystemBackend({ rootDir: "./agent-workspace" }),
});

// LocalShellBackend: Filesystem access + local shell command execution
const agent4 = createDeepAgent({
  backend: new LocalShellBackend({
    rootDir: "./agent-workspace",
    inheritEnv: true,
  }),
});

// CompositeBackend: Combine multiple backends
const agent5 = createDeepAgent({
  backend: (config) =>
    new CompositeBackend({
      state: new StateBackend(config),
      store: config.store ? new StoreBackend(config) : undefined,
    }),
  store: new InMemoryStore(),
  checkpointer: new MemorySaver(),
});
```

See [examples/backends/](https://github.com/langchain-ai/deepagentsjs/tree/1a503b23dfa3e400f722264cb4a2f189959b4f17/libs/deepagents/examples/backends/) for detailed examples of each backend type.

### Sandbox Execution

For agents that need to run shell commands, you can create a sandbox backend by extending `BaseSandbox`. This enables the `execute` tool which allows agents to run arbitrary shell commands in an isolated environment.

```typescript
import {
  createDeepAgent,
  BaseSandbox,
  type ExecuteResponse,
  type FileUploadResponse,
  type FileDownloadResponse,
} from "deepagents";
import { spawn } from "child_process";

// Create a concrete sandbox by extending BaseSandbox
class LocalShellSandbox extends BaseSandbox {
  readonly id = "local-shell";
  private readonly workingDirectory: string;

  constructor(workingDirectory: string) {
    super();
    this.workingDirectory = workingDirectory;
  }

  // Only execute() is required - BaseSandbox implements all file operations
  async execute(command: string): Promise<ExecuteResponse> {
    return new Promise((resolve) => {
      const child = spawn("/bin/bash", ["-c", command], {
        cwd: this.workingDirectory,
      });

      const chunks: string[] = [];
      child.stdout.on("data", (data) => chunks.push(data.toString()));
      child.stderr.on("data", (data) => chunks.push(data.toString()));

      child.on("close", (exitCode) => {
        resolve({
          output: chunks.join(""),
          exitCode,
          truncated: false,
        });
      });
    });
  }

  async uploadFiles(
    files: Array<[string, Uint8Array]>,
  ): Promise<FileUploadResponse[]> {
    // Implement file upload logic
    return files.map(([path]) => ({ path, error: null }));
  }

  async downloadFiles(paths: string[]): Promise<FileDownloadResponse[]> {
    // Implement file download logic
    return paths.map((path) => ({
      path,
      content: null,
      error: "file_not_found",
    }));
  }
}

// Use the sandbox with your agent
const sandbox = new LocalShellSandbox("./workspace");

const agent = createDeepAgent({
  backend: sandbox,
  systemPrompt: "You can run shell commands using the execute tool.",
});
```

When using a sandbox backend, the agent gains access to an `execute` tool that can run shell commands. The tool automatically returns the command output, exit code, and whether the output was truncated.

See [examples/sandbox/local-sandbox.ts](https://github.com/langchain-ai/deepagentsjs/blob/1a503b23dfa3e400f722264cb4a2f189959b4f17/libs/deepagents/examples/sandbox/local-sandbox.ts) for a complete implementation.

## Deep Agents Middleware

Deep Agents are built with a modular middleware architecture. As a reminder, Deep Agents have access to:

- A planning tool
- A filesystem for storing context and long-term memories
- The ability to spawn subagents

Each of these features is implemented as separate middleware. When you create a deep agent with `createDeepAgent`, we automatically attach **TodoListMiddleware**, **FilesystemMiddleware**, and **SubAgentMiddleware** to your agent.

Middleware is a composable concept, and you can choose to add as many or as few middleware to an agent depending on your use case. That means that you can also use any of the aforementioned middleware independently!

### TodoListMiddleware

Planning is integral to solving complex problems. If you've used claude code recently, you'll notice how it writes out a To-Do list before tackling complex, multi-part tasks. You'll also notice how it can adapt and update this To-Do list on the fly as more information comes in.

**todoListMiddleware** provides your agent with a tool specifically for updating this To-Do list. Before and during a multi-part task, the agent is prompted to use the `write_todos` tool to keep track of what it's doing and what still needs to be done.

```typescript
import { createAgent, todoListMiddleware } from "langchain";

// todoListMiddleware is included by default in createDeepAgent
// You can customize it if building a custom agent
const agent = createAgent({
  model: "claude-sonnet-4-20250514",
  middleware: [
    todoListMiddleware({
      // Optional: Custom addition to the system prompt
      systemPrompt: "Use the write_todos tool to...",
    }),
  ],
});
```

### FilesystemMiddleware

Context engineering is one of the main challenges in building effective agents. It is particularly hard with tools that return variable-length results (e.g., `web_search`, RAG retrieval), since long tool results can quickly fill up your context window.

**FilesystemMiddleware** provides tools to your agent to interact with both short-term and long-term memory:

- **ls**: List the files in your filesystem
- **read_file**: Read an entire file, or a certain number of lines from a file
- **write_file**: Write a new file to your filesystem
- **edit_file**: Edit an existing file in your filesystem
- **glob**: Find files matching a pattern
- **grep**: Search for text within files
- **execute**: Run shell commands (only available when using a `SandboxBackendProtocol`)

```typescript
import { createAgent } from "langchain";
import { createFilesystemMiddleware } from "deepagents";

// FilesystemMiddleware is included by default in createDeepAgent
// You can customize it if building a custom agent
const agent = createAgent({
  model: "claude-sonnet-4-20250514",
  middleware: [
    createFilesystemMiddleware({
      backend: ..., // Optional: customize storage backend
      systemPrompt: "Write to the filesystem when...", // Optional custom system prompt override
      customToolDescriptions: {
        ls: "Use the ls tool when...",
        read_file: "Use the read_file tool to...",
      }, // Optional: Custom descriptions for filesystem tools
    }),
  ],
});
```

### SubAgentMiddleware

Handing off tasks to subagents is a great way to isolate context, keeping the context window of the main (supervisor) agent clean while still going deep on a task. The subagent middleware lets you supply subagents that the main agent delegates to through a `task` tool.

A subagent is defined with a name, description, system prompt, and tools. You can also provide a subagent with a custom model, or with additional middleware. This can be particularly useful when you want to give the subagent an additional state key to share with the main agent.

```typescript
import { tool } from "langchain";
import { createAgent } from "langchain";
import { createSubAgentMiddleware, type SubAgent } from "deepagents";
import { z } from "zod";

const getWeather = tool(
  async ({ city }: { city: string }) => {
    return `The weather in ${city} is sunny.`;
  },
  {
    name: "get_weather",
    description: "Get the weather in a city.",
    schema: z.object({
      city: z.string(),
    }),
  },
);

const weatherSubagent: SubAgent = {
  name: "weather",
  description: "This subagent can get weather in cities.",
  systemPrompt: "Use the get_weather tool to get the weather in a city.",
  tools: [getWeather],
  model: "gpt-4o",
  middleware: [],
};

const agent = createAgent({
  model: "claude-sonnet-4-20250514",
  middleware: [
    createSubAgentMiddleware({
      defaultModel: "claude-sonnet-4-20250514",
      defaultTools: [],
      subagents: [weatherSubagent],
    }),
  ],
});
```

## ACP (Agent Client Protocol) Support

Deep Agents can be exposed as an [Agent Client Protocol](https://agentclientprotocol.com) server, enabling integration with IDEs like [Zed](https://zed.dev), JetBrains, and other ACP-compatible clients through a standardized JSON-RPC 2.0 protocol over stdio.

The `deepagents-acp` package wraps your Deep Agent with ACP support:

```bash
npm install deepagents-acp
```

The quickest way to get started is via the CLI:

```bash
npx deepagents-acp --name my-agent --workspace /path/to/project
```

Or programmatically:

```typescript
import { startServer } from "deepagents-acp";

await startServer({
  agents: {
    name: "coding-assistant",
    description: "AI coding assistant with filesystem access",
    skills: ["./skills/"],
  },
  workspaceRoot: process.cwd(),
});
```

To use with Zed, add the following to your Zed settings:

```json
{
  "agent": {
    "profiles": {
      "deepagents": {
        "name": "DeepAgents",
        "command": "npx",
        "args": ["deepagents-acp"]
      }
    }
  }
}
```

See the [deepagents-acp README](https://github.com/langchain-ai/deepagentsjs/blob/1a503b23dfa3e400f722264cb4a2f189959b4f17/libs/deepagents/libs/acp/README.md) and the [ACP server example](https://github.com/langchain-ai/deepagentsjs/tree/1a503b23dfa3e400f722264cb4a2f189959b4f17/libs/deepagents/examples/acp-server/) for full documentation and advanced configuration.
- [ACP](/javascript/deepagents-acp)
  # deepagents-acp

ACP (Agent Client Protocol) server for DeepAgents - enables integration with IDEs like Zed, JetBrains, and other ACP-compatible clients.

## Overview

This package wraps DeepAgents with the [Agent Client Protocol (ACP)](https://agentclientprotocol.com), allowing your AI agents to communicate with code editors and development tools through a standardized protocol.

### What is ACP?

The [Agent Client Protocol](https://agentclientprotocol.com) is an open standard for communication between code editors and AI-powered coding agents — similar to what the Language Server Protocol (LSP) did for language tooling. It enables:

- **IDE Integration**: Connect your agents to Zed, JetBrains IDEs, Neovim, Emacs, and other compatible tools
- **Standardized Communication**: JSON-RPC 2.0 based protocol over stdio
- **Rich Interactions**: Text, images, file operations, tool calls, terminals, diffs, and permission requests
- **Session Management**: Persistent conversations with full history replay
- **No Vendor Lock-in**: Use any model, switch between agents, all through one open protocol
- **ACP Registry**: One-click agent installation from within supported IDEs

## Installation

```bash
npm install deepagents-acp
# or
pnpm add deepagents-acp
```

## Quick Start

### Using the CLI (Recommended)

The easiest way to start is with the CLI:

```bash
# Run with defaults
npx deepagents-acp

# With custom options
npx deepagents-acp --name my-agent --debug

# Full options
npx deepagents-acp \
  --name coding-assistant \
  --model claude-sonnet-4-5-20250929 \
  --workspace /path/to/project \
  --skills ./skills,~/.deepagents/skills \
  --debug
```

### CLI Options

| Option                 | Short | Description                                       |
| ---------------------- | ----- | ------------------------------------------------- |
| `--name <name>`        | `-n`  | Agent name (default: "deepagents")                |
| `--description <desc>` | `-d`  | Agent description                                 |
| `--model <model>`      | `-m`  | LLM model (default: "claude-sonnet-4-5-20250929") |
| `--workspace <path>`   | `-w`  | Workspace root directory (default: cwd)           |
| `--skills <paths>`     | `-s`  | Comma-separated skill paths                       |
| `--memory <paths>`     |       | Comma-separated AGENTS.md paths                   |
| `--debug`              |       | Enable debug logging to stderr                    |
| `--help`               | `-h`  | Show help message                                 |
| `--version`            | `-v`  | Show version                                      |

### Environment Variables

| Variable            | Description                                    |
| ------------------- | ---------------------------------------------- |
| `ANTHROPIC_API_KEY` | API key for Anthropic/Claude models (required) |
| `OPENAI_API_KEY`    | API key for OpenAI models                      |
| `DEBUG`             | Set to "true" to enable debug logging          |
| `WORKSPACE_ROOT`    | Alternative to --workspace flag                |

### Programmatic Usage

```typescript
import { startServer } from "deepagents-acp";

await startServer({
  agents: {
    name: "coding-assistant",
    description: "AI coding assistant with filesystem access",
  },
  workspaceRoot: process.cwd(),
});
```

### Advanced Configuration

```typescript
import { DeepAgentsServer } from "deepagents-acp";
import { FilesystemBackend } from "deepagents";

const server = new DeepAgentsServer({
  // Define multiple agents
  agents: [
    {
      name: "code-agent",
      description: "Full-featured coding assistant",
      model: "claude-sonnet-4-5-20250929",
      skills: ["./skills/"],
      memory: ["./.deepagents/AGENTS.md"],
    },
    {
      name: "reviewer",
      description: "Code review specialist",
      model: "claude-sonnet-4-5-20250929",
      systemPrompt: "You are a code review expert...",
    },
  ],

  // Server options
  serverName: "my-deepagents-acp",
  serverVersion: "1.0.0",
  workspaceRoot: process.cwd(),
  debug: true,
});

await server.start();
```

### Multiple Agents

When you define multiple agents, the client selects which agent to use at session creation time by passing `configOptions.agent` in the `session/new` ACP request. If not specified, the first agent in the configuration is used by default.

```typescript
// Client sends session/new with configOptions to select an agent:
// { "configOptions": { "agent": "reviewer" } }  → uses the "reviewer" agent
// { "configOptions": { "agent": "code-agent" } } → uses the "code-agent" agent
// { }                                            → uses the first agent ("code-agent")
```

> **Note:** Some ACP clients (like Zed) don't currently expose a UI for passing `configOptions` at session creation. In that case, consider running separate server instances with a single agent each, or using separate Zed profiles pointing to different server scripts.

## Usage with Zed

To use with [Zed](https://zed.dev), add the agent to your settings (`~/.config/zed/settings.json` on Linux, `~/Library/Application Support/Zed/settings.json` on macOS):

### Simple Setup

```json
{
  "agent": {
    "profiles": {
      "deepagents": {
        "name": "DeepAgents",
        "command": "npx",
        "args": ["deepagents-acp"]
      }
    }
  }
}
```

### With Options

```json
{
  "agent": {
    "profiles": {
      "deepagents": {
        "name": "DeepAgents",
        "command": "npx",
        "args": [
          "deepagents-acp",
          "--name",
          "my-assistant",
          "--skills",
          "./skills",
          "--debug"
        ],
        "env": {
          "ANTHROPIC_API_KEY": "sk-ant-..."
        }
      }
    }
  }
}
```

### Custom Script (Advanced)

For more control, create a custom script:

```typescript
// server.ts
import { startServer } from "deepagents-acp";

await startServer({
  agents: {
    name: "my-agent",
    description: "My custom coding agent",
    skills: ["./skills/"],
  },
});
```

Then configure Zed:

```json
{
  "agent": {
    "profiles": {
      "my-agent": {
        "name": "My Agent",
        "command": "npx",
        "args": ["tsx", "./server.ts"]
      }
    }
  }
}
```

## API Reference

### DeepAgentsServer

The main server class that handles ACP communication.

```typescript
import { DeepAgentsServer } from "deepagents-acp";

const server = new DeepAgentsServer(options);
```

#### Options

| Option          | Type                                   | Description                                     |
| --------------- | -------------------------------------- | ----------------------------------------------- |
| `agents`        | `DeepAgentConfig \| DeepAgentConfig[]` | Agent configuration(s)                          |
| `serverName`    | `string`                               | Server name for ACP (default: "deepagents-acp") |
| `serverVersion` | `string`                               | Server version (default: "0.0.1")               |
| `workspaceRoot` | `string`                               | Workspace root directory (default: cwd)         |
| `debug`         | `boolean`                              | Enable debug logging (default: false)           |

#### DeepAgentConfig

| Option         | Type                                           | Description                                       |
| -------------- | ---------------------------------------------- | ------------------------------------------------- |
| `name`         | `string`                                       | Unique agent name (required)                      |
| `description`  | `string`                                       | Agent description                                 |
| `model`        | `string`                                       | LLM model (default: "claude-sonnet-4-5-20250929") |
| `tools`        | `StructuredTool[]`                             | Custom tools                                      |
| `systemPrompt` | `string`                                       | Custom system prompt                              |
| `middleware`   | `AgentMiddleware[]`                            | Custom middleware                                 |
| `backend`      | `BackendProtocol \| BackendFactory`            | Filesystem backend                                |
| `skills`       | `string[]`                                     | Skill source paths                                |
| `memory`       | `string[]`                                     | Memory source paths (AGENTS.md)                   |
| `interruptOn`  | `Record<string, boolean \| InterruptOnConfig>` | Tools requiring user approval (HITL)              |
| `commands`     | `Array<{ name, description, input? }>`         | Custom slash commands                             |

### Methods

#### start()

Start the ACP server. Listens on stdio by default.

```typescript
await server.start();
```

#### stop()

Stop the server and clean up resources.

```typescript
server.stop();
```

### startServer()

Convenience function to create and start a server.

```typescript
import { startServer } from "deepagents-acp";

const server = await startServer(options);
```

## Features

### Slash Commands

The server provides built-in slash commands accessible from the IDE's prompt input. Type `/` to see available commands:

| Command   | Description                                |
| --------- | ------------------------------------------ |
| `/plan`   | Switch to plan mode (read-only planning)   |
| `/agent`  | Switch to agent mode (full autonomous)     |
| `/ask`    | Switch to ask mode (Q&A, no file changes)  |
| `/clear`  | Clear conversation context and start fresh |
| `/status` | Show session status and loaded skills      |

You can also define custom slash commands per agent:

```typescript
const server = new DeepAgentsServer({
  agents: {
    name: "my-agent",
    commands: [
      { name: "test", description: "Run the project's test suite" },
      { name: "lint", description: "Run linter and fix issues" },
    ],
  },
});
```

### Modes

The server supports three operating modes, switchable via slash commands or programmatically:

1. **Agent Mode** (`agent`): Full autonomous agent with file access
2. **Plan Mode** (`plan`): Planning and discussion without changes
3. **Ask Mode** (`ask`): Q&A without file modifications
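Programmatic switching happens over the `session/set_mode` ACP method. A hypothetical JSON-RPC payload a client might send looks like the following; the `id` and `sessionId` values are placeholders:

```typescript
// Illustrative session/set_mode request; in practice the ACP client
// library constructs and sends this over stdio for you.
const setModeRequest = {
  jsonrpc: "2.0",
  id: 7,
  method: "session/set_mode",
  params: {
    sessionId: "sess-123", // placeholder session identifier
    modeId: "plan", // one of "agent" | "plan" | "ask"
  },
};
```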

### Thinking / Reasoning Messages

When using models with extended thinking (e.g., Claude with `thinking: { type: "enabled" }`), the server streams reasoning tokens to the IDE as `thought_message_chunk` updates. This gives users visibility into the agent's chain-of-thought process in clients that support it.

### Tool Call Enhancements

The server provides rich tool call reporting to the IDE:

- **Tool call kinds** — each tool call is categorized using [ACP-standard kinds](https://agentclientprotocol.com/protocol/tool-calls) (`read`, `edit`, `search`, `execute`, `think`, etc.) so the IDE can display appropriate icons
- **File locations (follow-along)** — tool calls that operate on files (e.g., `read_file`, `edit_file`, `grep`) report `{ path, line }` locations, enabling IDEs to open and highlight the files the agent is working with in real time
- **Diff content** — when the agent edits a file, the tool call update includes `{ type: "diff", path, oldText, newText }` content so the IDE can render inline diffs
- **Raw input/output** — tool call notifications include the raw tool arguments and results for transparency
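For instance, a diff entry inside a `tool_call_update` might look like this plain-object sketch; the exact types come from the ACP SDK, and the path and text here are made up:

```typescript
// Made-up example of the diff content shape described above.
const diffContent = {
  type: "diff",
  path: "/workspace/src/server.ts",
  oldText: "const port = 3000;",
  newText: "const port = Number(process.env.PORT ?? 3000);",
};
```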

### Human-in-the-Loop (Permission Requests)

When agents are configured with `interruptOn`, the server bridges LangGraph's interrupt system to the ACP `session/request_permission` protocol. This surfaces approval prompts in the IDE before sensitive tools execute:

```typescript
const server = new DeepAgentsServer({
  agents: {
    name: "careful-agent",
    interruptOn: {
      execute: { allowedDecisions: ["approve", "edit", "reject"] },
      write_file: true,
    },
  },
});
```

When the agent calls a protected tool, the IDE shows a permission dialog with options:

- **Allow once** — approve this specific invocation
- **Reject** — deny this specific invocation
- **Always allow** — approve and remember for this session
- **Always reject** — deny and remember for this session

### Terminal Integration

When the ACP client supports the `terminal` capability (e.g., Zed, JetBrains), the server uses the client's terminal for `execute` tool calls instead of running commands locally. This provides:

- **Live streaming output** — terminal output scrolls in real time inside the IDE's agent panel
- **Process control** — the IDE can kill long-running commands
- **Embedded display** — terminal output is embedded directly in the tool call UI

If the client doesn't support terminals, commands fall back to local execution (current behavior).

### Session Persistence

Sessions are persisted using LangGraph's checkpointer. When loading a session with `session/load`, the server replays the full conversation history back to the client via ACP notifications, including:

- User messages
- Agent responses
- Tool calls and their results
- Plan entries

This ensures the IDE shows the complete conversation when resuming a session.

### ACP Filesystem Backend

When the ACP client advertises `fs.readTextFile` and `fs.writeTextFile` capabilities, the server can proxy file operations through the client instead of reading/writing directly from disk. This enables:

- **Unsaved buffer access** — the agent reads the editor's current buffer, including unsaved changes
- **IDE-tracked modifications** — file writes go through the IDE, enabling undo, change tracking, and diff highlighting

The backend falls back to local filesystem operations for `ls`, `glob`, and `grep`, which have no ACP equivalents.

## ACP Protocol Support

This package implements the following ACP methods:

### Agent Methods (what we implement)

| Method             | Description                                         |
| ------------------ | --------------------------------------------------- |
| `initialize`       | Negotiate versions and capabilities                 |
| `authenticate`     | Handle authentication (passthrough)                 |
| `session/new`      | Create a new conversation session                   |
| `session/load`     | Resume an existing session with full history replay |
| `session/prompt`   | Process user prompts and slash commands             |
| `session/cancel`   | Cancel ongoing operations                           |
| `session/set_mode` | Switch agent modes                                  |

### Client Methods (what we call on the client)

| Method                       | Description                                    |
| ---------------------------- | ---------------------------------------------- |
| `session/request_permission` | Prompt user to approve/reject tool calls       |
| `fs/read_text_file`          | Read file contents (including unsaved buffers) |
| `fs/write_text_file`         | Write file contents through the IDE            |
| `terminal/create`            | Start a command in the client's terminal       |
| `terminal/output`            | Get terminal output                            |
| `terminal/wait_for_exit`     | Wait for command completion                    |
| `terminal/kill`              | Kill a running command                         |
| `terminal/release`           | Release terminal resources                     |

### Session Updates (what we send)

| Update                      | Description                                                   |
| --------------------------- | ------------------------------------------------------------- |
| `agent_message_chunk`       | Stream agent text responses                                   |
| `thought_message_chunk`     | Stream agent thinking/reasoning                               |
| `tool_call`                 | Notify about tool invocations with kind, locations, and input |
| `tool_call_update`          | Update tool call status with content (text, diffs, terminals) |
| `plan`                      | Send task plan entries                                        |
| `available_commands_update` | Advertise slash commands to the client                        |

### Capabilities

The server advertises these capabilities:

- `loadSession`: Session persistence with history replay
- `promptCapabilities.image`: Image content support
- `promptCapabilities.embeddedContext`: Embedded context support
- `sessionCapabilities.modes`: Agent mode switching
- `sessionCapabilities.commands`: Slash command support

### Tool Call Kinds

Tool calls are categorized with [ACP-standard kinds](https://agentclientprotocol.com/protocol/tool-calls) for proper icon display:

| Kind      | Tools                     |
| --------- | ------------------------- |
| `read`    | `read_file`, `ls`         |
| `search`  | `grep`, `glob`            |
| `edit`    | `write_file`, `edit_file` |
| `execute` | `execute`, `shell`        |
| `think`   | `write_todos`             |
| `other`   | `task`, custom tools      |

## Architecture

```txt
┌─────────────────────────────────────────────────────────────┐
│                    IDE (Zed, JetBrains)                     │
│                      ACP Client                             │
└─────────────────────┬───────────────────────────────────────┘
                      │ stdio (JSON-RPC 2.0)
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                       deepagents-acp                        │
│   ┌─────────────────────────────────────────────────────┐   │
│   │              AgentSideConnection                    │   │
│   │   (from @agentclientprotocol/sdk)                   │   │
│   └─────────────────────┬───────────────────────────────┘   │
│                         │                                   │
│   ┌─────────────────────▼───────────────────────────────┐   │
│   │              Message Adapter                        │   │
│   │   ACP ContentBlock ←→ LangChain Messages            │   │
│   └─────────────────────┬───────────────────────────────┘   │
│                         │                                   │
│   ┌─────────────────────▼───────────────────────────────┐   │
│   │               DeepAgent                             │   │
│   │  (from deepagents package)                          │   │
│   └─────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
```

## Examples

### Custom Backend

```typescript
import { DeepAgentsServer } from "deepagents-acp";
import { CompositeBackend, FilesystemBackend, StateBackend } from "deepagents";

const server = new DeepAgentsServer({
  agents: {
    name: "custom-agent",
    backend: new CompositeBackend({
      routes: [
        {
          prefix: "/workspace",
          backend: new FilesystemBackend({ rootDir: "./workspace" }),
        },
        { prefix: "/", backend: (config) => new StateBackend(config) },
      ],
    }),
  },
});
```

### With Custom Tools

```typescript
import { DeepAgentsServer } from "deepagents-acp";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchTool = tool(
  async ({ query }) => {
    // Search implementation
    return `Results for: ${query}`;
  },
  {
    name: "search",
    description: "Search the codebase",
    schema: z.object({ query: z.string() }),
  },
);

const server = new DeepAgentsServer({
  agents: {
    name: "search-agent",
    tools: [searchTool],
  },
});
```

### With Human-in-the-Loop Approval

```typescript
import { DeepAgentsServer } from "deepagents-acp";

const server = new DeepAgentsServer({
  agents: {
    name: "safe-agent",
    description: "Agent that asks before writing or executing",
    interruptOn: {
      write_file: true,
      edit_file: true,
      execute: {
        allowedDecisions: ["approve", "edit", "reject"],
      },
    },
  },
});
```

When the agent tries to write a file or run a command, the IDE will prompt the user to approve, reject, or always-allow the operation.

### With Custom Slash Commands

```typescript
import { DeepAgentsServer } from "deepagents-acp";

const server = new DeepAgentsServer({
  agents: {
    name: "project-agent",
    commands: [
      { name: "test", description: "Run the project test suite" },
      { name: "build", description: "Build the project" },
      {
        name: "deploy",
        description: "Deploy to staging",
        input: { hint: "environment (staging or production)" },
      },
    ],
  },
});
```

## ACP Registry

DeepAgents is available in the [ACP Agent Registry](https://agentclientprotocol.com/registry/index) for one-click installation in Zed and JetBrains IDEs. The registry manifest is at `agent.json`:

```json
{
  "id": "deepagents",
  "name": "DeepAgents",
  "description": "Batteries-included AI coding agent powered by LangChain.",
  "distribution": {
    "npx": {
      "package": "deepagents-acp"
    }
  }
}
```

## Contributing

See the main [deepagentsjs repository](https://github.com/langchain-ai/deepagentsjs) for contribution guidelines.

## License

MIT

## Resources

- [Agent Client Protocol Documentation](https://agentclientprotocol.com)
- [ACP Agent Registry](https://agentclientprotocol.com/registry/index)
- [ACP TypeScript SDK](https://github.com/agentclientprotocol/typescript-sdk)
- [DeepAgents Documentation](https://github.com/langchain-ai/deepagentsjs)
- [Zed Editor](https://zed.dev)
- [JetBrains ACP Support](https://www.jetbrains.com/acp/)
- [Modal](/javascript/langchain-modal)
  # @langchain/modal

Modal Sandbox backend for [deepagents](https://www.npmjs.com/package/deepagents). This package provides a `ModalSandbox` implementation of the `SandboxBackendProtocol`, enabling agents to execute commands, read/write files, and manage isolated container environments using Modal's serverless infrastructure.

[![npm version](https://img.shields.io/npm/v/@langchain/modal.svg)](https://www.npmjs.com/package/@langchain/modal)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## Features

- **Isolated Execution**: Run commands in secure, isolated containers on Modal's serverless infrastructure
- **GPU Support**: Access NVIDIA GPUs (T4, L4, A10G, A100, H100) for ML/AI workloads
- **Custom Images**: Use any Docker image from public registries
- **File Operations**: Upload and download files with full filesystem access
- **Volume Mounts**: Mount Modal Volumes for persistent storage
- **Secrets Injection**: Securely inject Modal Secrets as environment variables
- **BaseSandbox Integration**: All inherited methods (`read`, `write`, `edit`, `ls`, `grep`, `glob`) work out of the box
- **Factory Pattern**: Compatible with deepagents' middleware architecture
- **Full SDK Access**: Access the underlying Modal SDK via the `sandbox` property for advanced features

## Installation

```bash
# npm
npm install @langchain/modal

# yarn
yarn add @langchain/modal

# pnpm
pnpm add @langchain/modal
```

## Authentication Setup

The package requires Modal authentication:

### Environment Variables (Recommended)

1. Go to [https://modal.com/settings/tokens](https://modal.com/settings/tokens)
2. Create a new token and set the environment variables:

```bash
export MODAL_TOKEN_ID=your_token_id
export MODAL_TOKEN_SECRET=your_token_secret
```

### Explicit Credentials in Code

```typescript
const sandbox = await ModalSandbox.create({
  auth: {
    tokenId: "your-token-id",
    tokenSecret: "your-token-secret",
  },
});
```

## Basic Usage

```typescript
import { createDeepAgent } from "deepagents";
import { ChatAnthropic } from "@langchain/anthropic";
import { ModalSandbox } from "@langchain/modal";

// Create and initialize the sandbox
const sandbox = await ModalSandbox.create({
  imageName: "python:3.12-slim",
  timeoutMs: 600_000, // 10 minutes
});

try {
  const agent = createDeepAgent({
    model: new ChatAnthropic({ model: "claude-sonnet-4-20250514" }),
    systemPrompt: "You are a coding assistant with access to a sandbox.",
    backend: sandbox,
  });

  const result = await agent.invoke({
    messages: [
      { role: "user", content: "Create a hello world Python app and run it" },
    ],
  });
} finally {
  await sandbox.close();
}
```

## Configuration Options

`ModalSandboxOptions` extends the Modal SDK's `SandboxCreateParams` directly, so you can use any SDK option. We only wrap `volumes` and `secrets` to accept names instead of objects.

```typescript
interface ModalSandboxOptions extends SandboxCreateParams {
  /** Modal App name. @default "deepagents-sandbox" */
  appName?: string;

  /** Docker image to use. @default "alpine:3.21" */
  imageName?: string;

  /** Initial files to populate the sandbox with. */
  initialFiles?: Record<string, string | Uint8Array>;

  /** Authentication credentials (or use env vars). */
  auth?: { tokenId?: string; tokenSecret?: string };

  /** Modal Volume names to mount (keys are mount paths). */
  volumes?: Record<string, string>;

  /** Modal Secret names to inject. */
  secrets?: string[];

  // All SandboxCreateParams options are available:
  timeoutMs?: number; // Max lifetime in milliseconds
  idleTimeoutMs?: number; // Idle timeout in milliseconds
  workdir?: string; // Working directory
  gpu?: string; // GPU type (e.g., "T4", "A100")
  cpu?: number; // CPU cores (fractional allowed)
  memoryMiB?: number; // Memory in MiB
  regions?: string[]; // Regions to run in
  env?: Record<string, string>; // Environment variables
  blockNetwork?: boolean; // Block network access
  cidrAllowlist?: string[]; // Allowed CIDRs
  verbose?: boolean; // Enable verbose logging
  name?: string; // Sandbox name (unique within app)
}
```
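Since the options object is plain data, combining resource and network settings is straightforward. The values below are illustrative only; pass the object to `ModalSandbox.create(...)` to apply it:

```typescript
// Illustrative ModalSandboxOptions values; adjust to your workload.
const options = {
  imageName: "node:22-slim", // any public Docker image
  cpu: 2, // two CPU cores (fractional values allowed)
  memoryMiB: 4096, // 4 GiB of memory
  idleTimeoutMs: 120_000, // shut down after 2 minutes idle
  blockNetwork: true, // disable outbound network access
};
```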

## GPU Support

Modal supports various NVIDIA GPUs for ML/AI workloads:

```typescript
const sandbox = await ModalSandbox.create({
  imageName: "python:3.12-slim",
  gpu: "T4", // NVIDIA T4 (16GB VRAM)
  // gpu: "L4",    // NVIDIA L4 (24GB VRAM)
  // gpu: "A10G",  // NVIDIA A10G (24GB VRAM)
  // gpu: "A100",  // NVIDIA A100 (40/80GB VRAM)
  // gpu: "H100",  // NVIDIA H100 (80GB VRAM)
});
```

## Using Volumes

Mount Modal Volumes for persistent storage:

```typescript
// Volume must be created in Modal first
const sandbox = await ModalSandbox.create({
  imageName: "python:3.12-slim",
  volumes: {
    "/data": "my-data-volume",
    "/models": "my-models-volume",
  },
});

// Files in /data and /models persist across sandbox restarts
await sandbox.execute("echo 'Hello' > /data/test.txt");
```

## Using Secrets

Inject Modal Secrets as environment variables:

```typescript
// Secrets must be created in Modal first
const sandbox = await ModalSandbox.create({
  imageName: "python:3.12-slim",
  secrets: ["my-api-keys", "database-credentials"],
});

// Secrets are available as environment variables
await sandbox.execute("echo $API_KEY");
```

## Initial Files

Pre-populate the sandbox with files during creation:

```typescript
const sandbox = await ModalSandbox.create({
  imageName: "python:3.12-slim",
  initialFiles: {
    // String content
    "/app/main.py": `
import json

def main():
    print("Hello from Python!")

if __name__ == "__main__":
    main()
`,
    // JSON configuration
    "/app/config.json": JSON.stringify(
      { name: "my-app", version: "1.0.0" },
      null,
      2,
    ),

    // Uint8Array content also supported
    "/app/data.bin": new Uint8Array([0x00, 0x01, 0x02]),
  },
});

// Files are ready to use immediately
const result = await sandbox.execute("python /app/main.py");
console.log(result.output); // "Hello from Python!"
```

This is especially useful for:

- Setting up project scaffolding before agent execution
- Providing configuration files
- Pre-loading test data or fixtures
- Creating initial source code files

## Accessing the Modal SDK

For advanced features not exposed by `BaseSandbox`, you can access the underlying Modal SDK directly:

- `.client` - The `ModalClient` instance for accessing other Modal resources
- `.instance` - The `Sandbox` instance for direct sandbox operations

```typescript
const modalSandbox = await ModalSandbox.create();

// Access the Modal client for other Modal resources
const client = modalSandbox.client;

// Access the raw Modal Sandbox for direct operations
const instance = modalSandbox.instance;

// Execute commands with specific options
const process = await instance.exec(["python", "-c", "print('Hello')"], {
  stdout: "pipe",
  stderr: "pipe",
});

// Open files for reading/writing
const writeHandle = await instance.open("/tmp/file.txt", "w");
await writeHandle.write(new TextEncoder().encode("Hello"));
await writeHandle.close();
```

## Reconnecting to Existing Sandboxes

Resume working with a sandbox that is still running:

```typescript
// First session: create a sandbox
const sandbox = await ModalSandbox.create({
  imageName: "python:3.12-slim",
  timeoutMs: 3_600_000, // 1 hour
});
const sandboxId = sandbox.id;

// Later: reconnect to the same sandbox by ID
const reconnected = await ModalSandbox.fromId(sandboxId);
const result = await reconnected.execute("ls -la");

// Or reconnect by name (if sandbox has a name)
const sandbox2 = await ModalSandbox.create({
  appName: "my-app",
  sandboxName: "my-sandbox",
  imageName: "python:3.12-slim",
});

const reconnected2 = await ModalSandbox.fromName("my-app", "my-sandbox");
```

## Error Handling

```typescript
import { ModalSandboxError } from "@langchain/modal";

try {
  await sandbox.execute("some command");
} catch (error) {
  if (error instanceof ModalSandboxError) {
    switch (error.code) {
      case "NOT_INITIALIZED":
        await sandbox.initialize();
        break;
      case "COMMAND_TIMEOUT":
        console.error("Command took too long");
        break;
      case "AUTHENTICATION_FAILED":
        console.error("Check your Modal token credentials");
        break;
      default:
        throw error;
    }
  }
}
```

### Error Codes

| Code                      | Description                                 |
| ------------------------- | ------------------------------------------- |
| `NOT_INITIALIZED`         | Sandbox not initialized - call initialize() |
| `ALREADY_INITIALIZED`     | Cannot initialize twice                     |
| `AUTHENTICATION_FAILED`   | Invalid or missing Modal tokens             |
| `SANDBOX_CREATION_FAILED` | Failed to create sandbox                    |
| `SANDBOX_NOT_FOUND`       | Sandbox ID/name not found or expired        |
| `COMMAND_TIMEOUT`         | Command execution timed out                 |
| `COMMAND_FAILED`          | Command execution failed                    |
| `FILE_OPERATION_FAILED`   | File read/write failed                      |
| `RESOURCE_LIMIT_EXCEEDED` | CPU, memory, or storage limits exceeded     |
| `VOLUME_ERROR`            | Volume operation failed                     |
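
Some of these codes describe transient conditions that are reasonable to retry. The helper below is an illustrative sketch, not part of `@langchain/modal`; it assumes only that thrown errors carry a string `code` property, as `ModalSandboxError` does, and the choice of retryable codes is a judgment call:

```typescript
// Codes treated as transient for this sketch.
const RETRYABLE = new Set(["COMMAND_TIMEOUT", "SANDBOX_CREATION_FAILED"]);

/** Retry `fn` with exponential backoff when it fails with a retryable code. */
export async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const code = (error as { code?: string }).code;
      if (attempt >= maxAttempts || !code || !RETRYABLE.has(code)) throw error;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

Non-retryable codes such as `AUTHENTICATION_FAILED` are rethrown immediately.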

## Inherited BaseSandbox Methods

`ModalSandbox` extends `BaseSandbox` and inherits these convenience methods:

| Method       | Description                   |
| ------------ | ----------------------------- |
| `read()`     | Read a file's contents        |
| `write()`    | Write content to a file       |
| `edit()`     | Replace text in a file        |
| `lsInfo()`   | List directory contents       |
| `grepRaw()`  | Search for patterns in files  |
| `globInfo()` | Find files matching a pattern |

## Limits and Constraints

| Constraint      | Value                                   |
| --------------- | --------------------------------------- |
| Max timeout     | 86400 seconds (24 hours)                |
| Default timeout | 300 seconds (5 minutes)                 |
| Network access  | Full (by default, can be blocked)       |
| File API        | Alpha (up to 100 MiB read, 1 GiB write) |
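
Since `timeoutMs` is expressed in milliseconds while the limits above are documented in seconds, it can help to normalize a requested lifetime before calling `ModalSandbox.create()`. This clamping helper is illustrative only:

```typescript
// Documented Modal sandbox bounds, converted to milliseconds.
const MAX_TIMEOUT_MS = 86_400 * 1_000; // 24 hours
const DEFAULT_TIMEOUT_MS = 300 * 1_000; // 5 minutes

/** Fall back to the default and clamp to the documented maximum. */
export function clampTimeoutMs(requested?: number): number {
  if (requested === undefined) return DEFAULT_TIMEOUT_MS;
  return Math.min(Math.max(requested, 1), MAX_TIMEOUT_MS);
}
```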

## Environment Variables

| Variable             | Description            |
| -------------------- | ---------------------- |
| `MODAL_TOKEN_ID`     | Modal API token ID     |
| `MODAL_TOKEN_SECRET` | Modal API token secret |
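
These variables are used when no explicit `auth` option is passed. A minimal sketch of that resolution order (the helper itself is illustrative, not part of `@langchain/modal`):

```typescript
/** Prefer explicit credentials, then fall back to environment variables. */
export function resolveModalAuth(
  explicit?: { tokenId?: string; tokenSecret?: string },
): { tokenId: string; tokenSecret: string } {
  const tokenId = explicit?.tokenId ?? process.env.MODAL_TOKEN_ID;
  const tokenSecret = explicit?.tokenSecret ?? process.env.MODAL_TOKEN_SECRET;
  if (!tokenId || !tokenSecret) {
    throw new Error(
      "Set MODAL_TOKEN_ID and MODAL_TOKEN_SECRET or pass auth explicitly",
    );
  }
  return { tokenId, tokenSecret };
}
```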

## License

MIT
- [Daytona](/javascript/langchain-daytona)
  # @langchain/daytona

Daytona Sandbox backend for [deepagents](https://www.npmjs.com/package/deepagents). This package provides a `DaytonaSandbox` implementation of the `SandboxBackendProtocol`, enabling agents to execute commands, read/write files, and manage isolated sandbox environments using Daytona's infrastructure.

[![npm version](https://img.shields.io/npm/v/@langchain/daytona.svg)](https://www.npmjs.com/package/@langchain/daytona)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## Features

- **Isolated Execution**: Run commands in secure, isolated sandbox environments
- **Multi-Language Support**: TypeScript, JavaScript, and Python runtimes
- **File Operations**: Upload and download files with full filesystem access
- **BaseSandbox Integration**: All inherited methods (`read`, `write`, `edit`, `lsInfo`, `grepRaw`, `globInfo`) work out of the box
- **Factory Pattern**: Compatible with deepagents' middleware architecture
- **Full SDK Access**: Access the underlying Daytona SDK via the `sandbox` property for advanced features

## Installation

```bash
# npm
npm install @langchain/daytona

# yarn
yarn add @langchain/daytona

# pnpm
pnpm add @langchain/daytona
```

## Authentication Setup

The package requires Daytona API authentication:

### Environment Variable (Recommended)

1. Go to [https://app.daytona.io](https://app.daytona.io)
2. Create an account and get your API key
3. Set it as an environment variable:

```bash
export DAYTONA_API_KEY=your_api_key_here
```

### Explicit API Key in Code

```typescript
const sandbox = await DaytonaSandbox.create({
  auth: { apiKey: "your-api-key-here" },
});
```

## Basic Usage

```typescript
import { createDeepAgent } from "deepagents";
import { ChatAnthropic } from "@langchain/anthropic";
import { DaytonaSandbox } from "@langchain/daytona";

// Create and initialize the sandbox
const sandbox = await DaytonaSandbox.create({
  language: "typescript",
  timeout: 300, // 5 minutes
});

try {
  const agent = createDeepAgent({
    model: new ChatAnthropic({ model: "claude-sonnet-4-20250514" }),
    systemPrompt: "You are a coding assistant with access to a sandbox.",
    backend: sandbox,
  });

  const result = await agent.invoke({
    messages: [
      {
        role: "user",
        content: "Create a hello world TypeScript app and run it",
      },
    ],
  });
} finally {
  await sandbox.close();
}
```

## Configuration Options

```typescript
interface DaytonaSandboxOptions {
  /**
   * Primary language for code execution.
   * @default "typescript"
   */
  language?: "typescript" | "python" | "javascript";

  /**
   * Environment variables to set in the sandbox.
   */
  envVars?: Record<string, string>;

  /**
   * Custom Docker image to use (e.g., "node:20", "python:3.12").
   * Required when you want to customize resources.
   */
  image?: string;

  /**
   * Snapshot name to use for the sandbox.
   * Cannot be used together with `image`.
   */
  snapshot?: string;

  /**
   * Resource allocation (only available when using `image`).
   */
  resources?: {
    cpu?: number; // Number of CPUs
    memory?: number; // Memory in GiB
    disk?: number; // Disk space in GiB
  };

  /**
   * Target region.
   * @default "us"
   */
  target?: "us" | "eu";

  /**
   * Auto-stop interval in minutes. Set to 0 to disable.
   * @default 15
   */
  autoStopInterval?: number;

  /**
   * Default timeout for command execution in seconds.
   * @default 300
   */
  timeout?: number;

  /**
   * Custom labels for the sandbox.
   */
  labels?: Record<string, string>;

  /**
   * Authentication configuration.
   */
  auth?: {
    apiKey?: string;
    apiUrl?: string;
  };
}
```

### Using Custom Resources

To customize CPU, memory, or disk, you must specify a Docker image:

```typescript
const sandbox = await DaytonaSandbox.create({
  image: "node:20",
  language: "typescript",
  resources: {
    cpu: 4,
    memory: 8,
    disk: 50,
  },
});
```

## Available Regions

| Region | Location      |
| ------ | ------------- |
| `us`   | United States |
| `eu`   | Europe        |

## Accessing the Daytona SDK

For advanced features not exposed by `BaseSandbox`, you can access the underlying Daytona SDK directly via the `sandbox` property:

```typescript
const daytonaSandbox = await DaytonaSandbox.create();

// Access the raw Daytona SDK
const sdk = daytonaSandbox.sandbox;

// Use any Daytona SDK feature directly
const workDir = await sdk.getWorkDir();
const homeDir = await sdk.getUserHomeDir();

// Execute code with the process interface
const result = await sdk.process.executeCommand("npm install");

// Use the filesystem interface
await sdk.fs.createFolder("src", "755");
await sdk.fs.uploadFile(Buffer.from("content"), "src/index.ts");
```

See the [@daytonaio/sdk documentation](https://www.npmjs.com/package/@daytonaio/sdk) for all available SDK methods.

## Factory Functions

### Creating New Sandboxes Per Invocation

```typescript
import { createDaytonaSandboxFactory } from "@langchain/daytona";

// Each call creates a new sandbox
const factory = createDaytonaSandboxFactory({ language: "typescript" });

const sandbox1 = await factory();
const sandbox2 = await factory();

try {
  // Use sandboxes...
} finally {
  await sandbox1.close();
  await sandbox2.close();
}
```

### Reusing an Existing Sandbox

```typescript
import { createDeepAgent, createFilesystemMiddleware } from "deepagents";
import {
  DaytonaSandbox,
  createDaytonaSandboxFactoryFromSandbox,
} from "@langchain/daytona";

// Create and initialize a sandbox
const sandbox = await DaytonaSandbox.create({ language: "typescript" });

try {
  const agent = createDeepAgent({
    model: new ChatAnthropic({ model: "claude-sonnet-4-20250514" }),
    systemPrompt: "You are a coding assistant.",
    middlewares: [
      createFilesystemMiddleware({
        backend: createDaytonaSandboxFactoryFromSandbox(sandbox),
      }),
    ],
  });

  await agent.invoke({ messages: [...] });
} finally {
  await sandbox.close();
}
```

## Reconnecting to Existing Sandboxes

Resume working with a sandbox that is still running:

```typescript
// First session: create sandbox
const sandbox = await DaytonaSandbox.create({
  language: "typescript",
  autoStopInterval: 60, // Keep alive for 60 minutes of inactivity
});
const sandboxId = sandbox.id;

// Stop the sandbox (keeps it available)
await sandbox.stop();

// Later: reconnect to the same sandbox
const reconnected = await DaytonaSandbox.connect(sandboxId);
await reconnected.start(); // Restart the sandbox
const result = await reconnected.execute("ls -la");
```

## Sandbox Lifecycle

```typescript
const sandbox = await DaytonaSandbox.create();

// Stop sandbox (can be restarted)
await sandbox.stop();

// Start a stopped sandbox
await sandbox.start();

// Delete sandbox permanently
await sandbox.close();

// Or use kill() as an alias
await sandbox.kill();
```

## Error Handling

```typescript
import { DaytonaSandboxError } from "@langchain/daytona";

try {
  await sandbox.execute("some command");
} catch (error) {
  if (error instanceof DaytonaSandboxError) {
    switch (error.code) {
      case "NOT_INITIALIZED":
        await sandbox.initialize();
        break;
      case "COMMAND_TIMEOUT":
        console.error("Command took too long");
        break;
      case "AUTHENTICATION_FAILED":
        console.error("Check your Daytona API key");
        break;
      default:
        throw error;
    }
  }
}
```

### Error Codes

| Code                      | Description                                 |
| ------------------------- | ------------------------------------------- |
| `NOT_INITIALIZED`         | Sandbox not initialized - call initialize() |
| `ALREADY_INITIALIZED`     | Cannot initialize twice                     |
| `AUTHENTICATION_FAILED`   | Invalid or missing Daytona API key          |
| `SANDBOX_CREATION_FAILED` | Failed to create sandbox                    |
| `SANDBOX_NOT_FOUND`       | Sandbox ID not found or deleted             |
| `SANDBOX_NOT_STARTED`     | Sandbox is not in started state             |
| `COMMAND_TIMEOUT`         | Command execution timed out                 |
| `COMMAND_FAILED`          | Command execution failed                    |
| `FILE_OPERATION_FAILED`   | File read/write failed                      |
| `RESOURCE_LIMIT_EXCEEDED` | CPU, memory, or storage limits exceeded     |

## Inherited BaseSandbox Methods

`DaytonaSandbox` extends `BaseSandbox` and inherits these convenience methods:

| Method       | Description                   |
| ------------ | ----------------------------- |
| `read()`     | Read a file's contents        |
| `write()`    | Write content to a file       |
| `edit()`     | Replace text in a file        |
| `lsInfo()`   | List directory contents       |
| `grepRaw()`  | Search for patterns in files  |
| `globInfo()` | Find files matching a pattern |

## Environment Variables

| Variable          | Description                   |
| ----------------- | ----------------------------- |
| `DAYTONA_API_KEY` | Daytona API key (required)    |
| `DAYTONA_API_URL` | Custom Daytona API URL        |
| `DAYTONA_TARGET`  | Default target region (us/eu) |

## License

MIT
- [Deno](/javascript/langchain-deno)
  # @langchain/deno

Deno Sandbox backend for [deepagents](https://www.npmjs.com/package/deepagents). This package provides a `DenoSandbox` implementation of the `SandboxBackendProtocol`, enabling agents to execute commands, read/write files, and manage isolated Linux microVM environments using Deno Deploy's Sandbox infrastructure.

[![npm version](https://img.shields.io/npm/v/@langchain/deno.svg)](https://www.npmjs.com/package/@langchain/deno)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## Features

- **Isolated Execution**: Run commands in secure, isolated Linux microVMs
- **File Operations**: Upload and download files with full filesystem access
- **BaseSandbox Integration**: All inherited methods (`read`, `write`, `edit`, `lsInfo`, `grepRaw`, `globInfo`) work out of the box
- **Factory Pattern**: Compatible with deepagents' middleware architecture
- **Full SDK Access**: Access the underlying Deno SDK via the `sandbox` property for advanced features

## Installation

```bash
# npm
npm install @langchain/deno

# yarn
yarn add @langchain/deno

# pnpm
pnpm add @langchain/deno
```

## Authentication Setup

The package requires Deno Deploy authentication:

### Environment Variable (Recommended)

1. Go to [https://app.deno.com](https://app.deno.com)
2. Navigate to Settings → Organization Tokens
3. Create a new token and set it as an environment variable:

```bash
export DENO_DEPLOY_TOKEN=your_token_here
```

### Explicit Token in Code

```typescript
const sandbox = await DenoSandbox.create({
  auth: { token: "your-token-here" },
});
```

## Basic Usage

```typescript
import { createDeepAgent } from "deepagents";
import { ChatAnthropic } from "@langchain/anthropic";
import { DenoSandbox } from "@langchain/deno";

// Create and initialize the sandbox
const sandbox = await DenoSandbox.create({
  memoryMb: 1024,
  lifetime: "10m",
});

try {
  const agent = createDeepAgent({
    model: new ChatAnthropic({ model: "claude-sonnet-4-20250514" }),
    systemPrompt: "You are a coding assistant with access to a sandbox.",
    backend: sandbox,
  });

  const result = await agent.invoke({
    messages: [
      { role: "user", content: "Create a hello world Deno app and run it" },
    ],
  });
} finally {
  await sandbox.close();
}
```

## Configuration Options

```typescript
interface DenoSandboxOptions {
  /**
   * Memory allocation in megabytes.
   * Min: 768MB, Max: 4096MB
   * @default 768
   */
  memoryMb?: number;

  /**
   * Sandbox lifetime.
   * - "session": Shuts down when you close the client (default)
   * - Duration: e.g., "5m", "30s"
   */
  lifetime?: "session" | `${number}s` | `${number}m`;

  /**
   * Region where the sandbox will be created.
   * If not specified, uses the default region.
   */
  region?: DenoSandboxRegion;

  /**
   * Authentication configuration.
   */
  auth?: {
    token?: string;
  };
}
```
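
The `lifetime` template-literal type only admits values such as `"30s"` or `"5m"` (besides `"session"`). If you need the duration client-side, for example to schedule a reconnect before the sandbox expires, it can be parsed with a small helper; `lifetimeToMs` is illustrative and not part of `@langchain/deno`:

```typescript
type Lifetime = "session" | `${number}s` | `${number}m`;

/**
 * Convert a `lifetime` value to milliseconds. The "session" sentinel
 * has no fixed duration, so it maps to null.
 */
export function lifetimeToMs(lifetime: Lifetime): number | null {
  if (lifetime === "session") return null;
  const value = Number(lifetime.slice(0, -1));
  return lifetime.endsWith("m") ? value * 60_000 : value * 1_000;
}
```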

## Available Regions

The sandbox can be deployed in the following regions:

| Region Code | Location  |
| ----------- | --------- |
| `ams`       | Amsterdam |
| `ord`       | Chicago   |

## Accessing the Deno SDK

For advanced features not exposed by `BaseSandbox`, you can access the underlying Deno SDK directly via the `sandbox` property:

```typescript
const denoSandbox = await DenoSandbox.create();

// Access the raw Deno SDK
const sdk = denoSandbox.sandbox;

// Use any Deno SDK feature directly
const url = await sdk.exposeHttp({ port: 3000 });
const ssh = await sdk.exposeSsh();
const result = await sdk.eval("1 + 2");
await sdk.env.set("API_KEY", "secret");

// Use shell template literals
const output = await sdk.sh`echo "Hello from Deno!"`.text();

// Start a JavaScript runtime
const runtime = await sdk.createJsRuntime({ entrypoint: "server.ts" });
```

See the [@deno/sandbox documentation](https://www.npmjs.com/package/@deno/sandbox) for all available SDK methods.

## Factory Functions

### Creating New Sandboxes Per Invocation

```typescript
import { createDenoSandboxFactory } from "@langchain/deno";

// Each call creates a new sandbox
const factory = createDenoSandboxFactory({ memoryMb: 1024 });

const sandbox1 = await factory();
const sandbox2 = await factory();

try {
  // Use sandboxes...
} finally {
  await sandbox1.close();
  await sandbox2.close();
}
```

### Reusing an Existing Sandbox

```typescript
import { createDeepAgent, createFilesystemMiddleware } from "deepagents";
import {
  DenoSandbox,
  createDenoSandboxFactoryFromSandbox,
} from "@langchain/deno";

// Create and initialize a sandbox
const sandbox = await DenoSandbox.create({ memoryMb: 1024 });

try {
  const agent = createDeepAgent({
    model: new ChatAnthropic({ model: "claude-sonnet-4-20250514" }),
    systemPrompt: "You are a coding assistant.",
    middlewares: [
      createFilesystemMiddleware({
        backend: createDenoSandboxFactoryFromSandbox(sandbox),
      }),
    ],
  });

  await agent.invoke({ messages: [...] });
} finally {
  await sandbox.close();
}
```

## Reconnecting to Existing Sandboxes

Resume working with a sandbox that has a duration-based lifetime:

```typescript
// First session: create with duration lifetime
const sandbox = await DenoSandbox.create({
  memoryMb: 1024,
  lifetime: "30m",
});
const sandboxId = sandbox.id;
await sandbox.close(); // Close connection, but sandbox keeps running

// Later: reconnect to the same sandbox
const reconnected = await DenoSandbox.connect(sandboxId);
const result = await reconnected.execute("ls -la");
```

## Error Handling

```typescript
import { DenoSandboxError } from "@langchain/deno";

try {
  await sandbox.execute("some command");
} catch (error) {
  if (error instanceof DenoSandboxError) {
    switch (error.code) {
      case "NOT_INITIALIZED":
        await sandbox.initialize();
        break;
      case "COMMAND_TIMEOUT":
        console.error("Command took too long");
        break;
      case "AUTHENTICATION_FAILED":
        console.error("Check your Deno Deploy token");
        break;
      default:
        throw error;
    }
  }
}
```

### Error Codes

| Code                      | Description                                 |
| ------------------------- | ------------------------------------------- |
| `NOT_INITIALIZED`         | Sandbox not initialized - call initialize() |
| `ALREADY_INITIALIZED`     | Cannot initialize twice                     |
| `AUTHENTICATION_FAILED`   | Invalid or missing Deno Deploy token        |
| `SANDBOX_CREATION_FAILED` | Failed to create sandbox                    |
| `SANDBOX_NOT_FOUND`       | Sandbox ID not found or expired             |
| `COMMAND_TIMEOUT`         | Command execution timed out                 |
| `COMMAND_FAILED`          | Command execution failed                    |
| `FILE_OPERATION_FAILED`   | File read/write failed                      |
| `RESOURCE_LIMIT_EXCEEDED` | CPU, memory, or storage limits exceeded     |

## Inherited BaseSandbox Methods

`DenoSandbox` extends `BaseSandbox` and inherits these convenience methods:

| Method       | Description                   |
| ------------ | ----------------------------- |
| `read()`     | Read a file's contents        |
| `write()`    | Write content to a file       |
| `edit()`     | Replace text in a file        |
| `lsInfo()`   | List directory contents       |
| `grepRaw()`  | Search for patterns in files  |
| `globInfo()` | Find files matching a pattern |

## Limits and Constraints

| Constraint           | Value             |
| -------------------- | ----------------- |
| Minimum memory       | 768 MB            |
| Maximum memory       | 4096 MB (4 GB)    |
| Disk space           | 10 GB             |
| vCPUs                | 2                 |
| Working directory    | `/home/app`       |
| Network access       | Full (by default) |
| Interactive commands | Not supported     |

## Environment Variables

| Variable            | Description                           |
| ------------------- | ------------------------------------- |
| `DENO_DEPLOY_TOKEN` | Deno Deploy organization access token |

## License

MIT
- [Node VFS](/javascript/langchain-node-vfs)
  # @langchain/node-vfs

Node.js Virtual File System backend for [DeepAgents](https://github.com/langchain-ai/deepagentsjs).

This package provides an in-memory VFS implementation that enables agents to work with files in an isolated environment without touching the real filesystem. It uses [node-vfs-polyfill](https://github.com/vercel-labs/node-vfs-polyfill) which implements the upcoming Node.js VFS feature ([nodejs/node#61478](https://github.com/nodejs/node/pull/61478)).

## Installation

```bash
npm install @langchain/node-vfs deepagents
# or
pnpm add @langchain/node-vfs deepagents
```

## Quick Start

```typescript
import { VfsSandbox } from "@langchain/node-vfs";
import { createDeepAgent } from "deepagents";
import { ChatAnthropic } from "@langchain/anthropic";

// Create and initialize a VFS sandbox
const sandbox = await VfsSandbox.create({
  initialFiles: {
    "/src/index.js": "console.log('Hello from VFS!')",
  },
});

try {
  const agent = createDeepAgent({
    model: new ChatAnthropic({ model: "claude-sonnet-4-20250514" }),
    systemPrompt: "You are a coding assistant with VFS access.",
    backend: sandbox,
  });

  const result = await agent.invoke({
    messages: [{ role: "user", content: "Run the index.js file" }],
  });
} finally {
  await sandbox.stop();
}
```

## Features

- **In-Memory File Storage** - Files are stored in a virtual file system using [node-vfs-polyfill](https://github.com/vercel-labs/node-vfs-polyfill)
- **Zero Setup** - No Docker, cloud services, or external dependencies required
- **Full Command Execution** - Execute shell commands with automatic file syncing
- **Automatic Cleanup** - All resources are cleaned up when sandbox stops
- **Initial Files** - Pre-populate the sandbox with files at creation time
- **Fallback Mode** - Automatically falls back to a temp directory if VFS is unavailable

## API Reference

### VfsSandbox

The main class for creating and managing VFS sandboxes.

#### Static Methods

##### `VfsSandbox.create(options?)`

Create and initialize a new VFS sandbox in one step.

```typescript
const sandbox = await VfsSandbox.create({
  mountPath: "/vfs", // Mount path for the VFS (default: "/vfs")
  timeout: 30000, // Command timeout in ms (default: 30000)
  initialFiles: {
    // Initial files to populate
    "/README.md": "# Hello",
    "/src/index.js": "console.log('Hello')",
  },
});
```

#### Instance Methods

##### `sandbox.execute(command)`

Execute a shell command in the sandbox.

```typescript
const result = await sandbox.execute("node src/index.js");
console.log(result.output); // Command output
console.log(result.exitCode); // Exit code (0 = success)
```

##### `sandbox.uploadFiles(files)`

Upload files to the sandbox.

```typescript
const encoder = new TextEncoder();
await sandbox.uploadFiles([
  ["src/app.js", encoder.encode("console.log('Hi')")],
  ["package.json", encoder.encode('{"name": "test"}')],
]);
```

##### `sandbox.downloadFiles(paths)`

Download files from the sandbox.

```typescript
const results = await sandbox.downloadFiles(["src/app.js"]);
for (const result of results) {
  if (result.content) {
    console.log(new TextDecoder().decode(result.content));
  }
}
```

##### `sandbox.stop()`

Stop the sandbox and clean up resources.

```typescript
await sandbox.stop();
```

### Factory Functions

#### `createVfsSandboxFactory(options?)`

Create an async factory that creates new sandboxes per invocation.

```typescript
const factory = createVfsSandboxFactory({
  initialFiles: { "/README.md": "# Hello" },
});

const sandbox = await factory();
```

#### `createVfsSandboxFactoryFromSandbox(sandbox)`

Create a factory that reuses an existing sandbox.

```typescript
const sandbox = await VfsSandbox.create();
const factory = createVfsSandboxFactoryFromSandbox(sandbox);
```

## Configuration Options

| Option         | Type                                   | Default     | Description                               |
| -------------- | -------------------------------------- | ----------- | ----------------------------------------- |
| `mountPath`    | `string`                               | `"/vfs"`    | Mount path for the virtual file system    |
| `timeout`      | `number`                               | `30000`     | Command execution timeout in milliseconds |
| `initialFiles` | `Record<string, string \| Uint8Array>` | `undefined` | Initial files to populate the VFS         |

## Error Handling

The package exports a `VfsSandboxError` class for typed error handling:

```typescript
import { VfsSandboxError } from "@langchain/node-vfs";

try {
  await sandbox.execute("some-command");
} catch (error) {
  if (error instanceof VfsSandboxError) {
    switch (error.code) {
      case "NOT_INITIALIZED":
        // Handle uninitialized sandbox
        break;
      case "COMMAND_TIMEOUT":
        // Handle timeout
        break;
    }
  }
}
```

### Error Codes

- `NOT_INITIALIZED` - Sandbox not initialized
- `ALREADY_INITIALIZED` - Sandbox already initialized
- `INITIALIZATION_FAILED` - Failed to initialize VFS
- `COMMAND_TIMEOUT` - Command execution timed out
- `COMMAND_FAILED` - Command execution failed
- `FILE_OPERATION_FAILED` - File operation failed
- `NOT_SUPPORTED` - VFS not supported in environment

## How It Works

The VFS sandbox uses a hybrid approach for maximum compatibility:

1. **File Storage** - Files are stored in-memory using the `VirtualFileSystem` from [node-vfs-polyfill](https://github.com/vercel-labs/node-vfs-polyfill)
2. **Command Execution** - When executing commands, files are synced to a temp directory, the command runs, and changes are synced back to VFS
3. **Fallback Mode** - If node-vfs-polyfill is unavailable, falls back to using a temp directory for both storage and execution

This approach provides the benefits of in-memory storage (isolation, speed) while maintaining full shell command execution support.
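
The sync-run-sync cycle in step 2 can be sketched with a plain object standing in for the `VirtualFileSystem`. This is an illustrative reimplementation of the pattern, not the actual `@langchain/node-vfs` code, and it assumes a POSIX shell for command execution:

```typescript
import { execSync } from "node:child_process";
import {
  mkdtempSync,
  mkdirSync,
  readdirSync,
  readFileSync,
  rmSync,
  writeFileSync,
} from "node:fs";
import { tmpdir } from "node:os";
import { dirname, join, relative } from "node:path";

/** Run a shell command against an in-memory file map, returning the updated map. */
export function runWithSync(
  vfs: Record<string, string>,
  command: string,
): Record<string, string> {
  const workDir = mkdtempSync(join(tmpdir(), "vfs-"));
  try {
    // 1. Sync VFS contents out to a real temp directory.
    for (const [path, content] of Object.entries(vfs)) {
      const target = join(workDir, path);
      mkdirSync(dirname(target), { recursive: true });
      writeFileSync(target, content);
    }
    // 2. Run the command with the temp directory as cwd.
    execSync(command, { cwd: workDir, stdio: "pipe" });
    // 3. Sync any changes back into a fresh in-memory map.
    const updated: Record<string, string> = {};
    const walk = (dir: string) => {
      for (const entry of readdirSync(dir, { withFileTypes: true })) {
        const full = join(dir, entry.name);
        if (entry.isDirectory()) walk(full);
        else updated[relative(workDir, full)] = readFileSync(full, "utf8");
      }
    };
    walk(workDir);
    return updated;
  } finally {
    rmSync(workDir, { recursive: true, force: true });
  }
}
```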

## Future: Native Node.js VFS

This package uses [node-vfs-polyfill](https://github.com/vercel-labs/node-vfs-polyfill) which implements the upcoming Node.js VFS feature being developed in [nodejs/node#61478](https://github.com/nodejs/node/pull/61478).

When the official `node:vfs` module lands in Node.js, this package will be updated to use the native implementation for better performance and compatibility.

## License

MIT
- [Sandbox Standard Tests](/javascript/langchain-sandbox-standard-tests)
  # @langchain/sandbox-standard-tests

Shared integration test suites for [deepagents](https://github.com/langchain-ai/deepagentsjs) sandbox providers. A single function call gives you comprehensive coverage of the `SandboxBackendProtocol` — lifecycle management, command execution, file I/O, search, and more.

The package is **framework-agnostic** — it works with any test runner that provides `describe`, `it`, `expect`, `beforeAll`, and `afterAll`. A first-class Vitest sub-export is included for convenience.

## Installation

```bash
npm install @langchain/sandbox-standard-tests
```

## Quick start

### With Vitest (recommended)

Import from `@langchain/sandbox-standard-tests/vitest` and the Vitest primitives are injected automatically:

```ts
import { sandboxStandardTests } from "@langchain/sandbox-standard-tests/vitest";
import { MySandbox } from "./sandbox.js";

sandboxStandardTests({
  name: "MySandbox",
  skip: !process.env.MY_SANDBOX_TOKEN,
  timeout: 120_000,
  createSandbox: (opts) => MySandbox.create({ ...opts }),
  closeSandbox: (sb) => sb.close(),
  resolvePath: (name) => `/tmp/${name}`,
});
```

### With any test runner

Import from the root entry point and pass your runner's primitives via the `runner` config property:

```ts
import { sandboxStandardTests } from "@langchain/sandbox-standard-tests";
import { describe, it, expect, beforeAll, afterAll } from "bun:test";
import { MySandbox } from "./sandbox.js";

sandboxStandardTests({
  name: "MySandbox",
  runner: { describe, it, expect, beforeAll, afterAll },
  createSandbox: (opts) => MySandbox.create({ ...opts }),
  closeSandbox: (sb) => sb.close(),
  resolvePath: (name) => `/tmp/${name}`,
});
```

Run with your test runner of choice:

```bash
npx vitest run sandbox.int.test.ts
```

That single `sandboxStandardTests()` call registers **11 describe blocks** covering every method on the sandbox protocol.

## Configuration

`sandboxStandardTests` accepts a `StandardTestsConfig<T>` object:

| Option                       | Type                               | Required | Description                                                                                                                                                |
| ---------------------------- | ---------------------------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `name`                       | `string`                           | yes      | Display name shown in the test runner (e.g. `"ModalSandbox"`).                                                                                             |
| `runner`                     | `TestRunner`                       | yes\*    | Test-runner primitives (`describe`, `it`, `expect`, `beforeAll`, `afterAll`). \*Optional when importing from `/vitest`.                                    |
| `createSandbox`              | `(opts?) => Promise<T>`            | yes      | Factory that creates and returns a running sandbox. Receives an optional `{ initialFiles }` map.                                                           |
| `resolvePath`                | `(relativePath: string) => string` | yes      | Converts a relative filename (e.g. `"test-file.txt"`) to the provider-specific absolute path (e.g. `"/tmp/test-file.txt"` or `"/home/app/test-file.txt"`). |
| `closeSandbox`               | `(sandbox: T) => Promise<void>`    | no       | Teardown function. If omitted the "close" lifecycle test is skipped.                                                                                       |
| `createUninitializedSandbox` | `() => T`                          | no       | Factory for a sandbox that has **not** been started yet. Enables the two-step initialization test.                                                         |
| `skip`                       | `boolean`                          | no       | Skip the entire suite (useful when credentials are missing).                                                                                               |
| `sequential`                 | `boolean`                          | no       | Run tests sequentially instead of in parallel (useful to avoid provider concurrency limits).                                                               |
| `timeout`                    | `number`                           | no       | Per-test timeout in ms. Defaults to `120_000` (2 min).                                                                                                     |

### `TestRunner`

The `runner` object must provide these five primitives from your test framework:

```ts
interface TestRunner {
  describe: SuiteFn;
  it: TestFn;
  expect: ExpectFn;
  beforeAll: HookFn;
  afterAll: HookFn;
}
```

The `describe` and `it` functions may optionally expose `.skip`, `.skipIf(condition)`, and `.sequential` modifiers. When a modifier is not available, the suite degrades gracefully (e.g. `describe.sequential` falls back to `describe`).
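
The fallback can be pictured as a small resolver. This is an illustrative sketch, not the package's actual internals; `SuiteFn` and `resolveDescribe` are hypothetical names:

```typescript
// A suite function that may optionally carry a `.sequential` modifier.
type SuiteFn = ((name: string, fn: () => void) => void) & {
  sequential?: (name: string, fn: () => void) => void;
};

// Prefer `describe.sequential` when sequential mode is requested and the
// runner supports it; otherwise degrade to the plain `describe`.
function resolveDescribe(d: SuiteFn, sequential: boolean) {
  return sequential && typeof d.sequential === "function" ? d.sequential : d;
}
```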

### `SandboxInstance`

Your sandbox class must implement the `SandboxInstance` interface, which extends `SandboxBackendProtocol` from `deepagents`:

```ts
interface SandboxInstance extends SandboxBackendProtocol {
  readonly isRunning: boolean;
  uploadFiles(
    files: Array<[string, Uint8Array]>,
  ): MaybePromise<FileUploadResponse[]>;
  downloadFiles(paths: string[]): MaybePromise<FileDownloadResponse[]>;
  initialize?(): Promise<void>;
}
```

The key difference from the base protocol is that `uploadFiles` and `downloadFiles` are **required** (they are optional in `SandboxBackendProtocol`).
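
A minimal in-memory class that satisfies the required upload/download surface might look like the following sketch. The response types here are simplified stand-ins for illustration, not the actual `deepagents` types:

```typescript
// Simplified stand-in response shapes (illustrative only).
type FileUploadResponse = { path: string; ok: boolean };
type FileDownloadResponse = { path: string; content: Uint8Array | null };

class InMemorySandbox {
  readonly isRunning = true;
  private files = new Map<string, Uint8Array>();

  // Store each (path, bytes) pair and report success per file.
  uploadFiles(files: Array<[string, Uint8Array]>): FileUploadResponse[] {
    for (const [path, data] of files) this.files.set(path, data);
    return files.map(([path]) => ({ path, ok: true }));
  }

  // Return stored bytes, or null for paths that were never uploaded.
  downloadFiles(paths: string[]): FileDownloadResponse[] {
    return paths.map((path) => ({
      path,
      content: this.files.get(path) ?? null,
    }));
  }
}
```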

## What gets tested

| Suite                 | What it covers                                                                               |
| --------------------- | -------------------------------------------------------------------------------------------- |
| **Lifecycle**         | `create`, `isRunning`, `close`, two-step `initialize`                                        |
| **Command execution** | `echo`, exit codes, multiline output, stderr, env vars, non-existent commands                |
| **File operations**   | `uploadFiles`, `downloadFiles`, round-trip integrity                                         |
| **write()**           | New files, parent directory creation, overwrite, special characters, unicode, long content   |
| **read()**            | Basic read, non-existent path, `offset`, `limit`, `offset + limit`, unicode, chunked reads   |
| **edit()**            | Single/multi occurrence, `replaceAll`, not-found handling, special chars, multiline, unicode |
| **lsInfo()**          | Directory listing, empty dirs, hidden files, large directories, absolute paths               |
| **grepRaw()**         | Pattern search, glob filters, case sensitivity, nested directories, unicode                  |
| **globInfo()**        | Wildcards, recursive patterns, extension filters, character classes, deeply nested           |
| **Initial files**     | Basic seeding, nested paths, empty files                                                     |
| **Integration**       | End-to-end write → read → edit workflows, complex directory operations, error handling       |

## Sandbox reuse strategy

To avoid spinning up too many sandbox instances (which can hit provider concurrency limits), the test suite uses a **single shared sandbox** for the majority of tests. Only two kinds of tests create temporary instances:

- **Lifecycle** tests that verify `close` and two-step initialization
- **Initial files** tests that require a fresh sandbox with pre-seeded content

These temporary sandboxes are torn down immediately, so the concurrent sandbox count never exceeds **2**.

## Retry helper

The package exports a `withRetry` utility for working around transient sandbox creation failures (e.g. provider concurrency limits):

```ts
import { withRetry } from "@langchain/sandbox-standard-tests/vitest";

const sandbox = await withRetry(
  () => MySandbox.create({ memoryMb: 512 }),
  5, // max attempts (default: 5)
  15_000, // delay between attempts in ms (default: 15_000)
);
```
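
For reference, the retry logic amounts to an attempt loop with a fixed delay between failures. This is an illustrative re-implementation, not the package's actual source:

```typescript
// Retry an async factory up to maxAttempts times, sleeping delayMs between
// failed attempts; rethrow the last error if every attempt fails.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  delayMs = 15_000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}
```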

## Real-world examples

### Remote provider (Modal)

```ts
import {
  sandboxStandardTests,
  withRetry,
} from "@langchain/sandbox-standard-tests/vitest";
import { ModalSandbox } from "./sandbox.js";

const hasCredentials = !!(
  process.env.MODAL_TOKEN_ID && process.env.MODAL_TOKEN_SECRET
);

sandboxStandardTests({
  name: "ModalSandbox",
  skip: !hasCredentials,
  timeout: 180_000,
  createSandbox: (opts) =>
    ModalSandbox.create({ imageName: "alpine:3.21", ...opts }),
  createUninitializedSandbox: () =>
    new ModalSandbox({ imageName: "alpine:3.21" }),
  closeSandbox: (sb) => sb.close(),
  resolvePath: (name) => `/tmp/${name}`,
});
```

### Sequential execution (Deno Deploy)

```ts
import { sandboxStandardTests } from "@langchain/sandbox-standard-tests/vitest";
import { DenoSandbox } from "./sandbox.js";

sandboxStandardTests({
  name: "DenoSandbox",
  skip: !process.env.DENO_DEPLOY_TOKEN,
  sequential: true,
  timeout: 120_000,
  createSandbox: (opts) => DenoSandbox.create({ memoryMb: 768, ...opts }),
  createUninitializedSandbox: () => new DenoSandbox({ memoryMb: 768 }),
  closeSandbox: (sb) => sb.close(),
  resolvePath: (name) => `/home/app/${name}`,
});
```

### Local provider (Node VFS)

```ts
import { sandboxStandardTests } from "@langchain/sandbox-standard-tests/vitest";
import { VfsSandbox } from "./sandbox.js";

sandboxStandardTests({
  name: "VfsSandbox",
  skip: process.platform === "win32",
  timeout: 30_000,
  createSandbox: (opts) => VfsSandbox.create(opts),
  closeSandbox: (sb) => sb.stop(),
  resolvePath: (name) => name,
});
```

### Custom runner (Bun)

```ts
import { sandboxStandardTests } from "@langchain/sandbox-standard-tests";
import { describe, it, expect, beforeAll, afterAll } from "bun:test";
import { MySandbox } from "./sandbox.js";

sandboxStandardTests({
  name: "MySandbox",
  runner: { describe, it, expect, beforeAll, afterAll },
  createSandbox: (opts) => MySandbox.create(opts),
  closeSandbox: (sb) => sb.close(),
  resolvePath: (name) => `/tmp/${name}`,
});
```

## Adding provider-specific tests

After calling `sandboxStandardTests`, you can add provider-specific tests in the same file using standard Vitest `describe` / `it` blocks:

```ts
sandboxStandardTests({
  /* ... */
});

describe("MySandbox Provider-Specific Tests", () => {
  it("should support custom image types", async () => {
    const sb = await MySandbox.create({ image: "python:3.12" });
    const result = await sb.execute("python --version");
    expect(result.exitCode).toBe(0);
    await sb.close();
  });
});
```

## License

MIT
- [Community](/javascript/langchain-community)
  # 🦜️🧑‍🤝‍🧑 LangChain Community

[![CI](https://github.com/langchain-ai/langchainjs/actions/workflows/ci.yml/badge.svg)](https://github.com/langchain-ai/langchainjs/actions/workflows/ci.yml) ![npm](https://img.shields.io/npm/dm/@langchain/community) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain)](https://x.com/langchain)

## Quick Install

```bash
pnpm install @langchain/community
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of `@langchain/core`.
You can do so by adding the appropriate field to your project's `package.json` like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/community": "^0.0.0",
    "@langchain/core": "^0.3.0"
  },
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding fields for all of the common package managers (`yarn`, `npm`, and `pnpm`) to maximize compatibility.

## 🤔 What is this?

LangChain Community contains third-party integrations that implement the base interfaces defined in LangChain Core, making them ready-to-use in any LangChain application.

![LangChain Stack](https://raw.githubusercontent.com/langchain-ai/langchainjs/418a3fc1ff2bd4dc73ba52414ff8ec6710bd5572/libs/langchain-community/../../docs/core_docs/static/svg/langchain_stack_062024.svg)

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see [here](https://github.com/langchain-ai/langchainjs/blob/418a3fc1ff2bd4dc73ba52414ff8ec6710bd5572/libs/langchain-community/../../CONTRIBUTING.md).
- [Anthropic](/javascript/langchain-anthropic)
  # @langchain/anthropic

This package contains the LangChain.js integrations for Anthropic through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/anthropic @langchain/core
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of `@langchain/core`.
You can do so by adding appropriate fields to your project's `package.json` like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/anthropic": "^0.0.9",
    "@langchain/core": "^0.3.0"
  },
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding fields for all of the common package managers (`yarn`, `npm`, and `pnpm`) to maximize compatibility.

## Chat Models

This package contains the `ChatAnthropic` class, which is the recommended way to interface with the Anthropic series of models.

To use, install the requirements, and configure your environment.

```bash
export ANTHROPIC_API_KEY=your-api-key
```

Then initialize the model:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-sonnet-4-5-20250929",
});
const response = await model.invoke([
  { role: "user", content: "Hello world!" },
]);
```

### Streaming

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-sonnet-4-5-20250929",
});
const stream = await model.stream([
  { role: "user", content: "Hello world!" },
]);
```
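
The value returned by `stream()` is an async iterable of message chunks. A minimal consumption sketch, assuming each chunk exposes a string `content` (which holds for plain text responses):

```typescript
// Concatenate the text content of every chunk in an async-iterable stream.
async function collectText(
  stream: AsyncIterable<{ content: string }>,
): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.content;
  }
  return text;
}
```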

## Tools

This package provides LangChain-compatible wrappers for Anthropic's built-in tools. These tools can be bound to `ChatAnthropic` using `bindTools()` or any [`ReactAgent`](https://docs.langchain.com/oss/javascript/langchain/agents).

### Memory Tool

The memory tool (`memory_20250818`) enables Claude to store and retrieve information across conversations through a memory file directory. Claude can create, read, update, and delete files that persist between sessions, allowing it to build knowledge over time without keeping everything in the context window.

```typescript
import { ChatAnthropic, tools } from "@langchain/anthropic";

// Create a simple in-memory file store (or use your own persistence layer)
const files = new Map<string, string>();

const memory = tools.memory_20250818({
  execute: async (command) => {
    switch (command.command) {
      case "view":
        if (!command.path || command.path === "/") {
          return Array.from(files.keys()).join("\n") || "Directory is empty.";
        }
        return (
          files.get(command.path) ?? `Error: File not found: ${command.path}`
        );
      case "create":
        files.set(command.path!, command.file_text ?? "");
        return `Successfully created file: ${command.path}`;
      case "str_replace": {
        // Braces create a block scope so the `const` declaration is legal
        // inside the switch case.
        const content = files.get(command.path!);
        if (content && command.old_str) {
          files.set(
            command.path!,
            content.replace(command.old_str, command.new_str ?? "")
          );
        }
        return `Successfully replaced text in: ${command.path}`;
      }
      case "delete":
        files.delete(command.path!);
        return `Successfully deleted: ${command.path}`;
      // Handle other commands: insert, rename
      default:
        return `Unknown command`;
    }
  },
});

const llm = new ChatAnthropic({
  model: "claude-sonnet-4-5-20250929",
});

const llmWithMemory = llm.bindTools([memory]);

const response = await llmWithMemory.invoke(
  "Remember that my favorite programming language is TypeScript"
);
```

For more information, see [Anthropic's Memory Tool documentation](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/memory-tool).

### Web Search Tool

The web search tool (`webSearch_20250305`) gives Claude direct access to real-time web content, allowing it to answer questions with up-to-date information beyond its knowledge cutoff. Claude automatically cites sources from search results as part of its answer.

```typescript
import { ChatAnthropic, tools } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-sonnet-4-5-20250929",
});

// Basic usage
const response = await llm.invoke("What is the weather in NYC?", {
  tools: [tools.webSearch_20250305()],
});
```

The web search tool supports several configuration options:

```typescript
const response = await llm.invoke("Latest news about AI?", {
  tools: [
    tools.webSearch_20250305({
      // Maximum number of times the tool can be used in the API request
      maxUses: 5,
      // Only include results from these domains
      allowedDomains: ["reuters.com", "bbc.com"],
      // Or block specific domains (cannot be used with allowedDomains)
      // blockedDomains: ["example.com"],
      // Provide user location for more relevant results
      userLocation: {
        type: "approximate",
        city: "San Francisco",
        region: "California",
        country: "US",
        timezone: "America/Los_Angeles",
      },
    }),
  ],
});
```

For more information, see [Anthropic's Web Search Tool documentation](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/web-search-tool).

### Web Fetch Tool

The web fetch tool (`webFetch_20250910`) allows Claude to retrieve full content from specified web pages and PDF documents. Claude can only fetch URLs that have been explicitly provided by the user or that come from previous web search or web fetch results.

> **⚠️ Security Warning:** Enabling the web fetch tool in environments where Claude processes untrusted input alongside sensitive data poses data exfiltration risks. We recommend only using this tool in trusted environments or when handling non-sensitive data.

```typescript
import { ChatAnthropic, tools } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-sonnet-4-5-20250929",
});

// Basic usage - fetch content from a URL
const response = await llm.invoke(
  "Please analyze the content at https://example.com/article",
  { tools: [tools.webFetch_20250910()] }
);
```

The web fetch tool supports several configuration options:

```typescript
const response = await llm.invoke(
  "Summarize this research paper: https://arxiv.org/abs/2024.12345",
  {
    tools: [
      tools.webFetch_20250910({
        // Maximum number of times the tool can be used in the API request
        maxUses: 5,
        // Only fetch from these domains
        allowedDomains: ["arxiv.org", "example.com"],
        // Or block specific domains (cannot be used with allowedDomains)
        // blockedDomains: ["example.com"],
        // Enable citations for fetched content (optional, unlike web search)
        citations: { enabled: true },
        // Maximum content length in tokens (helps control token usage)
        maxContentTokens: 50000,
      }),
    ],
  }
);
```

You can combine web fetch with web search for comprehensive information gathering:

```typescript
import { tools } from "@langchain/anthropic";

const response = await llm.invoke(
  "Find recent articles about quantum computing and analyze the most relevant one",
  {
    tools: [
      tools.webSearch_20250305({ maxUses: 3 }),
      tools.webFetch_20250910({ maxUses: 5, citations: { enabled: true } }),
    ],
  }
);
```

For more information, see [Anthropic's Web Fetch Tool documentation](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/web-fetch-tool).

### Tool Search Tools

The tool search tools enable Claude to work with hundreds or thousands of tools by dynamically discovering and loading them on-demand. This is useful when you have a large number of tools but don't want to load them all into the context window at once.

There are two variants:

- **`toolSearchRegex_20251119`** - Claude constructs regex patterns (using Python's `re.search()` syntax) to search for tools
- **`toolSearchBM25_20251119`** - Claude uses natural language queries to search for tools using the BM25 algorithm

```typescript
import { ChatAnthropic, tools } from "@langchain/anthropic";
import { tool } from "langchain";
import { z } from "zod";

const llm = new ChatAnthropic({
  model: "claude-sonnet-4-5-20250929",
});

// Create tools with defer_loading to make them discoverable via search
const getWeather = tool(
  async (input: { location: string }) => {
    return `Weather in ${input.location}: Sunny, 72°F`;
  },
  {
    name: "get_weather",
    description: "Get the weather at a specific location",
    schema: z.object({
      location: z.string(),
    }),
    extras: { defer_loading: true },
  }
);

const getNews = tool(
  async (input: { topic: string }) => {
    return `Latest news about ${input.topic}...`;
  },
  {
    name: "get_news",
    description: "Get the latest news about a topic",
    schema: z.object({
      topic: z.string(),
    }),
    extras: { defer_loading: true },
  }
);

// Claude will search and discover tools as needed
const response = await llm.invoke("What is the weather in San Francisco?", {
  tools: [tools.toolSearchRegex_20251119(), getWeather, getNews],
});
```

Using the BM25 variant for natural language search:

```typescript
import { tools } from "@langchain/anthropic";

const response = await llm.invoke("What is the weather in San Francisco?", {
  tools: [tools.toolSearchBM25_20251119(), getWeather, getNews],
});
```

For more information, see [Anthropic's Tool Search documentation](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/tool-search-tool).

### Text Editor Tool

The text editor tool (`textEditor_20250728`) enables Claude to view and modify text files, helping debug, fix, and improve code or other text documents. Claude can directly interact with files, providing hands-on assistance rather than just suggesting changes.

Available commands:

- `view` - Examine file contents or list directory contents
- `str_replace` - Replace specific text in a file
- `create` - Create a new file with specified content
- `insert` - Insert text at a specific line number

```typescript
import fs from "node:fs";
import { ChatAnthropic, tools } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-sonnet-4-5-20250929",
});

const textEditor = tools.textEditor_20250728({
  async execute(args) {
    // Cases that declare variables are wrapped in braces so the `const`/`let`
    // declarations are scoped to their case.
    switch (args.command) {
      case "view": {
        const content = fs.readFileSync(args.path, "utf-8");
        // Return with line numbers for Claude to reference
        return content
          .split("\n")
          .map((line, i) => `${i + 1}: ${line}`)
          .join("\n");
      }
      case "str_replace": {
        let fileContent = fs.readFileSync(args.path, "utf-8");
        fileContent = fileContent.replace(args.old_str, args.new_str);
        fs.writeFileSync(args.path, fileContent);
        return "Successfully replaced text.";
      }
      case "create":
        fs.writeFileSync(args.path, args.file_text);
        return `Successfully created file: ${args.path}`;
      case "insert": {
        const lines = fs.readFileSync(args.path, "utf-8").split("\n");
        lines.splice(args.insert_line, 0, args.new_str);
        fs.writeFileSync(args.path, lines.join("\n"));
        return `Successfully inserted text at line ${args.insert_line}`;
      }
      default:
        return "Unknown command";
    }
  },
  // Optional: limit file content length when viewing
  maxCharacters: 10000,
});

const llmWithEditor = llm.bindTools([textEditor]);

const response = await llmWithEditor.invoke(
  "There's a syntax error in my primes.py file. Can you help me fix it?"
);
```

For more information, see [Anthropic's Text Editor Tool documentation](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/text-editor-tool).

### Computer Use Tool

The computer use tools enable Claude to interact with desktop environments through screenshot capture, mouse control, and keyboard input for autonomous desktop interaction.

> **⚠️ Security Warning:** Computer use is a beta feature with unique risks. Use a dedicated virtual machine or container with minimal privileges. Avoid giving access to sensitive data.

There are two variants:

- **`computer_20251124`** - For Claude Opus 4.5 (includes zoom capability)
- **`computer_20250124`** - For Claude 4 and Claude 3.7 models

Available actions:

- `screenshot` - Capture the current screen
- `left_click`, `right_click`, `middle_click` - Mouse clicks at coordinates
- `double_click`, `triple_click` - Multi-click actions
- `left_click_drag` - Click and drag operations
- `left_mouse_down`, `left_mouse_up` - Granular mouse control
- `scroll` - Scroll the screen
- `type` - Type text
- `key` - Press keyboard keys/shortcuts
- `mouse_move` - Move the cursor
- `hold_key` - Hold a key while performing other actions
- `wait` - Wait for a specified duration
- `zoom` - View specific screen regions at full resolution (Claude Opus 4.5 only)

```typescript
import {
  ChatAnthropic,
  tools,
  type Computer20250124Action,
} from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-sonnet-4-5-20250929",
});

const computer = tools.computer_20250124({
  // Required: specify display dimensions
  displayWidthPx: 1024,
  displayHeightPx: 768,
  // Optional: X11 display number
  displayNumber: 1,
  execute: async (action: Computer20250124Action) => {
    switch (action.action) {
      case "screenshot":
      // Capture and return base64-encoded screenshot
      // ...
      case "left_click":
      // Click at the specified coordinates
      // ...
      // ...
    }
  },
});

const llmWithComputer = llm.bindTools([computer]);

const response = await llmWithComputer.invoke(
  "Save a picture of a cat to my desktop."
);
```

For Claude Opus 4.5 with zoom support:

```typescript
import { tools } from "@langchain/anthropic";

const computer = tools.computer_20251124({
  displayWidthPx: 1920,
  displayHeightPx: 1080,
  // Enable zoom for detailed screen region inspection
  enableZoom: true,
  execute: async (action) => {
    // Handle actions including "zoom" for Claude Opus 4.5
    // ...
  },
});
```

For more information, see [Anthropic's Computer Use documentation](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/computer-use).

### Code Execution Tool

The code execution tool (`codeExecution_20250825`) allows Claude to run Bash commands and manipulate files in a secure, sandboxed environment. Claude can analyze data, create visualizations, perform calculations, and process files.

When this tool is provided, Claude automatically gains access to:

- **Bash commands** - Execute shell commands for system operations
- **File operations** - Create, view, and edit files directly

```typescript
import { ChatAnthropic, tools } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-sonnet-4-5-20250929",
});

// Basic usage - calculations and data analysis
const response = await llm.invoke(
  "Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]",
  { tools: [tools.codeExecution_20250825()] }
);

// File operations and visualization
const response2 = await llm.invoke(
  "Create a matplotlib visualization of sales data and save it as chart.png",
  { tools: [tools.codeExecution_20250825()] }
);
```

Container reuse for multi-step workflows:

```typescript
// First request - creates a container
const response1 = await llm.invoke("Write a random number to /tmp/number.txt", {
  tools: [tools.codeExecution_20250825()],
});

// Extract container ID from response for reuse
const containerId = response1.response_metadata?.container?.id;

// Second request - reuse container to access the file
const response2 = await llm.invoke(
  "Read /tmp/number.txt and calculate its square",
  {
    tools: [tools.codeExecution_20250825()],
    container: containerId,
  }
);
```

For more information, see [Anthropic's Code Execution Tool documentation](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/code-execution-tool).

### Bash Tool

The bash tool (`bash_20250124`) enables shell command execution in a persistent bash session. Unlike the sandboxed code execution tool, this tool requires you to provide your own execution environment.

> **⚠️ Security Warning:** The bash tool provides direct system access. Implement safety measures such as running in isolated environments (Docker/VM), command filtering, and resource limits.

The bash tool provides:

- **Persistent bash session** - Maintains state between commands
- **Shell command execution** - Run any shell command
- **Environment access** - Access to environment variables and working directory
- **Command chaining** - Support for pipes, redirects, and scripting

Available commands:

- Execute a command: `{ command: "ls -la" }`
- Restart the session: `{ restart: true }`

```typescript
import { ChatAnthropic, tools } from "@langchain/anthropic";
import { execSync } from "child_process";

const llm = new ChatAnthropic({
  model: "claude-sonnet-4-5-20250929",
});

const bash = tools.bash_20250124({
  execute: async (args) => {
    if (args.restart) {
      // Reset session state
      return "Bash session restarted";
    }
    try {
      const output = execSync(args.command, {
        encoding: "utf-8",
        timeout: 30000,
      });
      return output;
    } catch (error) {
      return `Error: ${(error as Error).message}`;
    }
  },
});

const llmWithBash = llm.bindTools([bash]);

const response = await llmWithBash.invoke(
  "List all Python files in the current directory"
);

// Process tool calls and execute commands
console.log(response.tool_calls?.[0].name); // "bash"
console.log(response.tool_calls?.[0].args.command); // "ls -la *.py"
```

For more information, see [Anthropic's Bash Tool documentation](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/bash-tool).

### MCP Toolset

The MCP toolset (`mcpToolset_20251120`) enables Claude to connect to remote MCP (Model Context Protocol) servers directly from the Messages API without implementing a separate MCP client. This allows Claude to use tools provided by MCP servers.

Key features:

- **Direct API integration** - Connect to MCP servers without implementing an MCP client
- **Tool calling support** - Access MCP tools through the Messages API
- **Flexible tool configuration** - Enable all tools, allowlist specific tools, or denylist unwanted tools
- **Per-tool configuration** - Configure individual tools with custom settings
- **OAuth authentication** - Support for OAuth Bearer tokens for authenticated servers
- **Multiple servers** - Connect to multiple MCP servers in a single request

```typescript
import { ChatAnthropic, tools } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-sonnet-4-5-20250929",
});

// Basic usage - enable all tools from an MCP server
const response = await llm.invoke("What tools do you have available?", {
  mcp_servers: [
    {
      type: "url",
      url: "https://example-server.modelcontextprotocol.io/sse",
      name: "example-mcp",
      authorization_token: "YOUR_TOKEN",
    },
  ],
  tools: [tools.mcpToolset_20251120({ serverName: "example-mcp" })],
});
```

**Allowlist pattern** - Enable only specific tools:

```typescript
const response = await llm.invoke("Search for events", {
  mcp_servers: [
    {
      type: "url",
      url: "https://calendar.example.com/sse",
      name: "google-calendar-mcp",
      authorization_token: "YOUR_TOKEN",
    },
  ],
  tools: [
    tools.mcpToolset_20251120({
      serverName: "google-calendar-mcp",
      // Disable all tools by default
      defaultConfig: { enabled: false },
      // Explicitly enable only these tools
      configs: {
        search_events: { enabled: true },
        create_event: { enabled: true },
      },
    }),
  ],
});
```

**Denylist pattern** - Disable specific tools:

```typescript
const response = await llm.invoke("List my events", {
  mcp_servers: [
    {
      type: "url",
      url: "https://calendar.example.com/sse",
      name: "google-calendar-mcp",
      authorization_token: "YOUR_TOKEN",
    },
  ],
  tools: [
    tools.mcpToolset_20251120({
      serverName: "google-calendar-mcp",
      // All tools enabled by default, just disable dangerous ones
      configs: {
        delete_all_events: { enabled: false },
        share_calendar_publicly: { enabled: false },
      },
    }),
  ],
});
```

**Multiple MCP servers**:

```typescript
const response = await llm.invoke("Use tools from both servers", {
  mcp_servers: [
    {
      type: "url",
      url: "https://mcp.example1.com/sse",
      name: "mcp-server-1",
      authorization_token: "TOKEN1",
    },
    {
      type: "url",
      url: "https://mcp.example2.com/sse",
      name: "mcp-server-2",
      authorization_token: "TOKEN2",
    },
  ],
  tools: [
    tools.mcpToolset_20251120({ serverName: "mcp-server-1" }),
    tools.mcpToolset_20251120({
      serverName: "mcp-server-2",
      defaultConfig: { deferLoading: true },
    }),
  ],
});
```

**With Tool Search** - Use deferred loading for on-demand tool discovery:

```typescript
const response = await llm.invoke("Find and use the right tool", {
  mcp_servers: [
    {
      type: "url",
      url: "https://example.com/sse",
      name: "example-mcp",
    },
  ],
  tools: [
    tools.toolSearchRegex_20251119(),
    tools.mcpToolset_20251120({
      serverName: "example-mcp",
      defaultConfig: { deferLoading: true },
    }),
  ],
});
```

For more information, see [Anthropic's MCP Connector documentation](https://docs.anthropic.com/en/docs/agents-and-tools/mcp-connector).

## Development

To develop the Anthropic package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/anthropic
```

### Run tests

Test files should live within a `tests/` directory in the `src/` folder. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
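For example, exposing a hypothetical `src/tools.ts` entrypoint through the `exports` field might look roughly like this (the exact paths depend on the package's build output layout):

```json
{
  "exports": {
    "./tools": {
      "types": "./tools.d.ts",
      "import": "./tools.js",
      "require": "./tools.cjs"
    }
  }
}
```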

## Publishing

After running `pnpm build`, publish a new version with:

```bash
npm publish
```
- [AWS](/javascript/langchain-aws)
  # @langchain/aws

This package contains the LangChain.js integrations for AWS through their SDK.

## Installation

```bash
npm install @langchain/aws
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of @langchain/core.
You can do so by adding appropriate fields to your project's `package.json` like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/aws": "^0.0.1",
    "@langchain/core": "^0.3.0"
  },
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding a field for the common `yarn`, `npm`, and `pnpm` to maximize compatibility.

## Chat Models

This package contains the `ChatBedrockConverse` class, which is the recommended way to interface with the AWS Bedrock Converse series of models.

To use it, install the requirements and configure your environment using the standard AWS authentication methods:

```bash
export BEDROCK_AWS_REGION=
export BEDROCK_AWS_SECRET_ACCESS_KEY=
export BEDROCK_AWS_ACCESS_KEY_ID=
```

Alternatively, set the `AWS_BEARER_TOKEN_BEDROCK` environment variable locally for API Key authentication. For additional API key details, refer to [docs](https://docs.aws.amazon.com/bedrock/latest/userguide/api-keys.html).

```bash
export BEDROCK_AWS_REGION=
export AWS_BEARER_TOKEN_BEDROCK=
```

Then initialize the model:

```typescript
import { ChatBedrockConverse } from "@langchain/aws";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatBedrockConverse({
  region: process.env.BEDROCK_AWS_REGION ?? "us-east-1",
  credentials: {
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY,
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID,
  },
});

const response = await model.invoke([new HumanMessage("Hello world!")]);
```

### Service Tiers

AWS Bedrock supports service tiers that control latency, cost, and capacity. You can set the tier at construction time or per-call.

- Supported values: `priority`, `default`, `flex`, `reserved`.
- See the [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/service-tiers-inference.html) for details.

Set at construction time:

```typescript
import { ChatBedrockConverse } from "@langchain/aws";

const llm = new ChatBedrockConverse({
  region: process.env.BEDROCK_AWS_REGION ?? "us-east-1",
  credentials: {
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  },
  serviceTier: "priority",
});
```

Override per invocation (takes precedence over constructor):

```typescript
const res = await llm.invoke("Translate this", { serviceTier: "flex" });
```

`serviceTier` affects the request sent to Bedrock Converse (`{ serviceTier: { type: "..." } }`). If not provided, AWS uses the default tier.
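The precedence described above can be sketched as a small helper (the function name is illustrative, not part of the package's internals):

```typescript
// A per-call serviceTier overrides the constructor value; the winner is
// sent to Bedrock Converse as { serviceTier: { type: ... } }.
type ServiceTier = "priority" | "default" | "flex" | "reserved";

function buildServiceTierField(
  constructorTier?: ServiceTier,
  callTier?: ServiceTier
): { serviceTier: { type: ServiceTier } } | undefined {
  // Per-call option takes precedence over the constructor option.
  const tier = callTier ?? constructorTier;
  // When neither is set, the field is omitted and AWS applies its default tier.
  return tier === undefined ? undefined : { serviceTier: { type: tier } };
}

console.log(buildServiceTierField("priority", "flex")); // { serviceTier: { type: 'flex' } }
```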

### Using Application Inference Profiles

AWS Bedrock [Application Inference Profiles](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-create.html) allow you to define custom endpoints that can route requests across regions or manage traffic for your models.

You can use an inference profile ARN by passing it to the `applicationInferenceProfile` parameter. When provided, this ARN will be used for the actual inference calls instead of the model ID:

```typescript
import { ChatBedrockConverse } from "@langchain/aws";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatBedrockConverse({
  region: process.env.BEDROCK_AWS_REGION ?? "us-east-1",
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  applicationInferenceProfile:
    "arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/fm16bt65tzgx",
  credentials: {
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY,
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID,
  },
});

const response = await model.invoke([new HumanMessage("Hello world!")]);
```

**Important:** You must still provide the `model` parameter with the actual model ID (e.g., `"anthropic.claude-3-haiku-20240307-v1:0"`), even when using an inference profile. This ensures proper metadata tracking in tools like LangSmith, including accurate cost and latency measurements per model. The `applicationInferenceProfile` ARN will override the model ID only for the actual inference API calls.

> **Note:** AWS does not currently provide an API to programmatically retrieve the underlying model from an inference profile ARN, so it's the user's responsibility to ensure the `model` parameter matches the model configured in the inference profile.
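The split described above can be sketched as follows (function and field names are hypothetical, for illustration only):

```typescript
// The profile ARN, when present, is what the inference call uses, while
// the model ID is kept for metadata such as cost and latency tracking.
function resolveInferenceTarget(
  model: string,
  applicationInferenceProfile?: string
): { invocationTarget: string; metadataModelId: string } {
  return {
    // Used for the actual Bedrock Converse API call.
    invocationTarget: applicationInferenceProfile ?? model,
    // Reported for tracing/metadata; must match the profile's model.
    metadataModelId: model,
  };
}
```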

### Streaming

```typescript
import { ChatBedrockConverse } from "@langchain/aws";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatBedrockConverse({
  region: process.env.BEDROCK_AWS_REGION ?? "us-east-1",
  credentials: {
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY,
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID,
  },
});

const stream = await model.stream([new HumanMessage("Hello world!")]);

for await (const chunk of stream) {
  console.log(chunk);
}
```

## Development

To develop the AWS package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/aws
```

### Run tests

Test files should live within a `tests/` directory in the `src/` folder. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.

## Publishing

After running `pnpm build`, publish a new version with:

```bash
npm publish
```
- [Azure CosmosDB](/javascript/langchain-azure-cosmosdb)
  # @langchain/azure-cosmosdb

This package contains the [Azure CosmosDB](https://learn.microsoft.com/azure/cosmos-db/) vector store integrations.

Learn more about how to use this package in the LangChain documentation:

- [Azure CosmosDB for NoSQL](https://js.langchain.com/docs/integrations/vector_stores/azure_cosmosdb_nosql)
- [Azure DocumentDB](https://js.langchain.com/docs/integrations/vector_stores/azure_documentdb)

## Installation

```bash npm2yarn
npm install @langchain/azure-cosmosdb @langchain/core
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of @langchain/core.
You can do so by adding appropriate fields to your project's `package.json` like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/azure-cosmosdb": "^0.2.5"
  },
  "resolutions": {
    "@langchain/core": "0.3.0"
  },
  "overrides": {
    "@langchain/core": "0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "0.3.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding a field for the common `yarn`, `npm`, and `pnpm` to maximize compatibility.

## Usage

```typescript
import { AzureCosmosDBNoSQLVectorStore } from "@langchain/azure-cosmosdb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const store = await AzureCosmosDBNoSQLVectorStore.fromDocuments(
  [new Document({ pageContent: "Hello, World!" })],
  new OpenAIEmbeddings(),
  {
    databaseName: "langchain",
    containerName: "documents",
  }
);

const resultDocuments = await store.similaritySearch("hello");
console.log(resultDocuments[0].pageContent);
```
- [Azure Dynamic Sessions](/javascript/langchain-azure-dynamic-sessions)
  # @langchain/azure-dynamic-sessions

This package contains the [Azure Container Apps dynamic sessions](https://learn.microsoft.com/azure/container-apps/sessions) tool integration.

Learn more about how to use this tool in the [LangChain documentation](https://js.langchain.com/docs/integrations/tools/azure_dynamic_sessions).

## Installation

```bash npm2yarn
npm install @langchain/azure-dynamic-sessions @langchain/core
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of @langchain/core.
You can do so by adding appropriate fields to your project's `package.json` like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/core": "^0.3.0"
  },
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding a field for the common `yarn`, `npm`, and `pnpm` to maximize compatibility.

## Tool usage

```typescript
import { SessionsPythonREPLTool } from "@langchain/azure-dynamic-sessions";

const tool = new SessionsPythonREPLTool({
  poolManagementEndpoint:
    process.env.AZURE_CONTAINER_APP_SESSION_POOL_MANAGEMENT_ENDPOINT || "",
});

const result = await tool.invoke("print('Hello, World!')\n1+2");

console.log(result);

// {
//   stdout: "Hello, World!\n",
//   stderr: "",
//   result: 3,
// }
```
- [Baidu Qianfan](/javascript/langchain-baidu-qianfan)
  # @langchain/baidu-qianfan

This package contains the LangChain.js integrations for Baidu Qianfan via the qianfan/sdk package.


## Installation

```bash npm2yarn
npm install @langchain/baidu-qianfan @langchain/core
```

## Chat models

This package adds support for Qianfan chat model inference.

Set the necessary environment variable (or pass it in via the constructor):

```bash
export QIANFAN_AK=""
export QIANFAN_SK=""
export QIANFAN_ACCESS_KEY=""
export QIANFAN_SECRET_KEY=""
```

```typescript
import { ChatBaiduQianfan } from "@langchain/baidu-qianfan";
import { HumanMessage } from "@langchain/core/messages";

const chat = new ChatBaiduQianfan({
  model: "ERNIE-Lite-8K",
});
const message = new HumanMessage("北京天气");

const res = await chat.invoke([message]);
```

```typescript
import { BaiduQianfanEmbeddings } from "@langchain/baidu-qianfan";

const embeddings = new BaiduQianfanEmbeddings();
const res = await embeddings.embedQuery("Introduce the city Beijing");
```
- [Cerebras](/javascript/langchain-cerebras)
  # @langchain/cerebras

This package contains the LangChain.js integrations for Cerebras via the `@cerebras/cerebras_cloud_sdk` package.

## Installation

```bash npm2yarn
npm install @langchain/cerebras @langchain/core
```

## Chat models

This package adds support for Cerebras chat model inference.

Set the necessary environment variable (or pass it in via the constructor):

```bash
export CEREBRAS_API_KEY=
```

```typescript
import { ChatCerebras } from "@langchain/cerebras";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatCerebras({
  apiKey: process.env.CEREBRAS_API_KEY, // Default value.
});

const message = new HumanMessage("What color is the sky?");

const res = await model.invoke([message]);
```

## Development

To develop the `@langchain/cerebras` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/cerebras
```

### Run tests

Test files should live within a `tests/` directory in the `src/` folder. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Cloudflare](/javascript/langchain-cloudflare)
  # @langchain/cloudflare

This package contains the LangChain.js integrations for Cloudflare through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/cloudflare @langchain/core
```

## Development

To develop the Cloudflare package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/cloudflare
```

### Run tests

Test files should live within a `tests/` directory in the `src/` folder. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Cohere](/javascript/langchain-cohere)
  # @langchain/cohere

This package contains the LangChain.js integrations for Cohere through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/cohere @langchain/core
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of @langchain/core.
You can do so by adding appropriate fields to your project's `package.json` like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/cohere": "^0.0.1",
    "@langchain/core": "^0.3.0"
  },
  "resolutions": {
    "@langchain/core": "0.3.0"
  },
  "overrides": {
    "@langchain/core": "0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "0.3.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding a field for the common `yarn`, `npm`, and `pnpm` to maximize compatibility.

## Chat Models

This package contains the `ChatCohere` class, which is the recommended way to interface with the Cohere series of models.

To use, install the requirements, and configure your environment.

```bash
export COHERE_API_KEY=your-api-key
```

Then initialize the model:

```typescript
import { HumanMessage } from "@langchain/core/messages";
import { ChatCohere } from "@langchain/cohere";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY,
});
const response = await model.invoke([new HumanMessage("Hello world!")]);
```

### Streaming

```typescript
import { HumanMessage } from "@langchain/core/messages";
import { ChatCohere } from "@langchain/cohere";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY,
});
const response = await model.stream([new HumanMessage("Hello world!")]);
```

## Embeddings

This package also adds support for `CohereEmbeddings` embeddings model.

```typescript
import { CohereEmbeddings } from "@langchain/cohere";

const embeddings = new CohereEmbeddings({
  apiKey: process.env.COHERE_API_KEY,
});
const res = await embeddings.embedQuery("Hello world");
```

## Development

To develop the `@langchain/cohere` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/cohere
```

### Run tests

Test files should live within a `tests/` directory in the `src/` folder. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [DeepSeek](/javascript/langchain-deepseek)
  # @langchain/deepseek

This package contains the LangChain.js integrations for DeepSeek.

## Installation

```bash npm2yarn
npm install @langchain/deepseek @langchain/core
```

## Chat models

This package adds support for DeepSeek's chat model inference.

Set the necessary environment variable (or pass it in via the constructor):

```bash
export DEEPSEEK_API_KEY=
```

```typescript
import { ChatDeepSeek } from "@langchain/deepseek";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatDeepSeek({
  apiKey: process.env.DEEPSEEK_API_KEY, // Default value.
  model: "<model_name>",
});

const res = await model.invoke([
  {
    role: "user",
    content: "What color is the sky?",
  },
]);
```

## Development

To develop the `@langchain/deepseek` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/deepseek
```

### Run tests

Test files should live within a `tests/` directory in the `src/` folder. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Exa](/javascript/langchain-exa)
  # @langchain/exa

This package contains the LangChain.js integrations for exa through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/exa @langchain/core
```

## Development

To develop the exa package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/exa
```

### Run tests

Test files should live within a `tests/` directory in the `src/` folder. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Google Cloud SQL PG](/javascript/langchain-google-cloud-sql-pg)
  # @langchain/google-cloud-sql-pg

The LangChain package for Cloud SQL for PostgreSQL provides a way to connect to Cloud SQL instances from the LangChain ecosystem.


Main features:
* Creates a shared connection pool for Google Cloud Postgres databases, supporting authentication methods such as IAM and user/password authorization.
* Stores metadata in columns instead of JSON, resulting in significant performance improvements.

## Before you begin

In order to use this package, you first need to go through the following steps:
1.  [Select or create a Cloud Platform project.](https://console.cloud.google.com/project)
2.  [Enable billing for your project.](https://cloud.google.com/billing/docs/how-to/modify-project#enable_billing_for_a_project)
3.  [Enable the Cloud SQL Admin API.](https://cloud.google.com/sql/docs/postgres/admin-api)
4.  [Setup Authentication.](https://cloud.google.com/docs/authentication)

### Installation

```bash
pnpm install @langchain/google-cloud-sql-pg
```

## Example usage

### PostgresEngine usage

Before using `PostgresVectorStore`, you will need to create a Postgres connection through the `PostgresEngine` interface.

```javascript
import {
  Column,
  PostgresEngine,
  PostgresEngineArgs,
  PostgresVectorStore,
  VectorStoreTableArgs,
} from "@langchain/google-cloud-sql-pg";
import { SyntheticEmbeddings } from "@langchain/core/utils/testing";

const pgArgs: PostgresEngineArgs = {
  user: "db-user",
  password: "password",
};

const engine: PostgresEngine = await PostgresEngine.fromInstance(
  "project-id",
  "region",
  "instance-name",
  "database-name",
  pgArgs
);

const vectorStoreTableArgs: VectorStoreTableArgs = {
  metadataColumns: [new Column("page", "TEXT"), new Column("source", "TEXT")],
};

await engine.initVectorstoreTable("my-table", 768, vectorStoreTableArgs);
const embeddingService = new SyntheticEmbeddings({ vectorSize: 768 });
```

-   You can pass the `ipType`, `user`, `password`, and `iamAccountEmail` through the `PostgresEngineArgs` interface to the `PostgresEngine` creation.
-   You can pass the `schemaName`, `contentColumn`, `embeddingColumn`, `metadataColumns`, and others through the `VectorStoreTableArgs` interface to the `initVectorstoreTable` method.
-   Passing an empty object to these methods allows you to use the default values.

### Vector Store usage

Use a PostgresVectorStore to store embedded data and perform vector similarity search for Postgres.

```javascript
const pvectorArgs: PostgresVectorStoreArgs = {
  idColumn: "ID_COLUMN",
  contentColumn: "CONTENT_COLUMN",
  embeddingColumn: "EMBEDDING_COLUMN",
  metadataColumns: ["page", "source"],
};

const vectorStoreInstance = await PostgresVectorStore.initialize(
  engine,
  embeddingService,
  "my-table",
  pvectorArgs
);
```

-   You can pass the `schemaName`, `contentColumn`, `embeddingColumn`, `distanceStrategy`, and others through the `PostgresVectorStoreArgs` interface to the `PostgresVectorStore` creation.
-   Passing an empty object to these methods allows you to use the default values.

PostgresVectorStore interface methods available:

-   addDocuments
-   addVectors
-   similaritySearch
-   and others.

See the full [Vector Store](https://js.langchain.com/docs/integrations/vectorstores/google_cloudsql_pg) tutorial.

### Chat Message History usage

Use `PostgresChatMessageHistory` to store messages and provide conversation history in Postgres.

First, initialize the Chat History Table and then create the ChatMessageHistory instance.

```javascript
// ChatHistory table initialization
await engine.initChatHistoryTable("chat_message_table");

const historyInstance = await PostgresChatMessageHistory.initialize(engine, "test", "chat_message_table");
```

The `initialize` method of `PostgresChatMessageHistory` receives the engine, the session ID, and the table name.

PostgresChatMessageHistory interface methods available:

-   addMessage
-   addMessages
-   getMessages
-   clear

See the full [Chat Message History](https://js.langchain.com/docs/integrations/memory/google_cloudsql_pg) tutorial.

### Document Loader usage

Use a document loader to load data as LangChain `Document`s.

```typescript
import {
  PostgresEngine,
  PostgresLoader,
  PostgresLoaderOptions,
} from "@langchain/google-cloud-sql-pg";

const documentLoaderArgs: PostgresLoaderOptions = {
  tableName: "test_table_custom",
  contentColumns: ["fruit_name", "variety"],
  metadataColumns: ["fruit_id", "quantity_in_stock", "price_per_unit", "organic"],
  format: "text",
};

const documentLoaderInstance = await PostgresLoader.initialize(
  engine,
  documentLoaderArgs
);

const documents = await documentLoaderInstance.load();
```

See the full [Loader](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/google_cloudsql_pg) tutorial.
- [Google Common](/javascript/langchain-google-common)
  # LangChain google-common

This package contains common resources to access Google AI/ML models
and other Google services in an auth-independent way.

AI/ML models are supported using the same interface whether you
are using the Google AI Studio version of the model or the
Google Cloud Vertex AI version.

## Installation

This is **not** a stand-alone package since it does not contain code to do
authorization.

Instead, you should install _one_ of the following packages:

- @langchain/google-gauth
- @langchain/google-webauth

See those packages for details about installation.

This package does **not** depend on any Google library. Instead, it relies on
REST calls to Google endpoints. This is deliberate to reduce (sometimes
conflicting) dependencies and make it usable on platforms that do not include
file storage.

## Google services supported

- Gemini model through LLM and Chat classes (both through Google AI Studio and
  Google Cloud Vertex AI). Including:
  - Function/Tool support

## Known Limitations

### Tool/Function Schema Limitations

When using tools or functions with Gemini models, the following Zod schema features are not supported:

- **Discriminated Unions** (`z.discriminatedUnion()`) - Use flat objects with optional fields instead
- **Union Types** (`z.union()`) - Use separate optional fields
- **Positive Refinement** (`z.number().positive()`) - Automatically converted to `z.number().min(0.01)`

For detailed examples and workarounds, see the Tool Schema Limitations section in the @langchain/google-vertexai or @langchain/google-genai package documentation.

## TODO

Tasks and services still to be implemented:

- PaLM Vertex AI support and backwards compatibility
- PaLM MakerSuite support and backwards compatibility
- Semantic Retrieval / AQA model
- PaLM embeddings
- Gemini embeddings
- Multimodal embeddings
- Vertex AI Search
- Vertex AI Model Garden
  - Online prediction endpoints
    - Gemma
  - Google managed models
    - Claude
- AI Studio Tuned Models
- MakerSuite / Google Drive Hub
- Google Cloud Vector Store
- [Google GAuth](/javascript/langchain-google-gauth)
  # LangChain google-gauth

This package contains resources to access Google AI/ML models
and other Google services. Authorization to these services uses
either an API key or service account credentials that are
stored on the local file system or provided through the
Google Cloud Platform environment the code is running on.

If you are running this on a platform where the credentials cannot
be provided this way, consider using the @langchain/google-webauth
package *instead*. You do not need to use both packages. See the
section on **Authorization** below.


## Installation

```bash
pnpm install @langchain/google-gauth
```


## Authorization

Authorization is either done through the use of an API Key, if it is
supported for the service you're using, or a Google Cloud Service
Account.

To handle service accounts, this package uses the `google-auth-library`
package, and you may wish to consult the documentation for that library
about how it does so. But in short, classes in this package will use
credentials from the first of the following that apply:

1. An API Key that is passed to the constructor using the `apiKey` attribute
2. Credentials that are passed to the constructor using the `authInfo` attribute
3. An API Key that is set in the environment variable `API_KEY`
4. The Service Account credentials that are saved in a file. The path to
   this file is set in the `GOOGLE_APPLICATION_CREDENTIALS` environment 
   variable.
5. If you are running on a Google Cloud Platform resource, or if you have
   logged in using `gcloud auth application-default login`, then the
   default credentials.
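That precedence can be sketched as a simple lookup chain (illustrative only; the real resolution is handled by `google-auth-library`):

```typescript
// Later sources are only consulted when the earlier ones are absent.
interface AuthOptions {
  apiKey?: string;
  authInfo?: object;
}

function resolveCredentialSource(
  opts: AuthOptions,
  env: Record<string, string | undefined>
): string {
  if (opts.apiKey) return "constructor apiKey";
  if (opts.authInfo) return "constructor authInfo";
  if (env.API_KEY) return "API_KEY environment variable";
  if (env.GOOGLE_APPLICATION_CREDENTIALS) return "credentials file";
  return "application default credentials";
}
```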
- [Google GenAI](/javascript/langchain-google-genai)
  # @langchain/google-genai

This package contains the LangChain.js integrations for Gemini through their generative-ai SDK.

## Installation

```bash npm2yarn
npm install @langchain/google-genai @langchain/core
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of @langchain/core.
You can do so by adding appropriate fields to your project's `package.json` like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/google-genai": "^0.0.0"
  },
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding a field for the common `yarn`, `npm`, and `pnpm` to maximize compatibility.

## Chat Models

This package contains the `ChatGoogleGenerativeAI` class, which is the recommended way to interface with the Google Gemini series of models.

To use, install the requirements, and configure your environment.

```bash
export GOOGLE_API_KEY=your-api-key
```

Then initialize the model:

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  maxOutputTokens: 2048,
});
const response = await model.invoke(new HumanMessage("Hello world!"));
```

#### Multimodal inputs

The Gemini vision model supports image inputs when they are provided in a single chat message. Example:

```bash npm2yarn
npm install @langchain/core
```

```typescript
import fs from "fs";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage } from "@langchain/core/messages";

const vision = new ChatGoogleGenerativeAI({
  model: "gemini-pro-vision",
  maxOutputTokens: 2048,
});
const image = fs.readFileSync("./hotdog.jpg").toString("base64");
const input = [
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "Describe the following image.",
      },
      {
        type: "image_url",
        image_url: `data:image/png;base64,${image}`,
      },
    ],
  }),
];

const res = await vision.invoke(input);
```

The value of `image_url` can be any of the following:

- A public image URL
- An accessible Google Cloud Storage file (e.g., `gs://path/to/file.png`)
- A base64-encoded image (e.g., `data:image/png;base64,abcd124`)
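For the base64 option, a small helper can build the data URL from a local file (`imageToDataUrl` is a hypothetical helper, not part of the package):

```typescript
// Read a local image and build a data URL suitable for image_url.
import fs from "node:fs";

function imageToDataUrl(path: string, mimeType = "image/png"): string {
  const base64 = fs.readFileSync(path).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}
```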

## Embeddings

This package also adds support for Google's embeddings models.

```typescript
import { GoogleGenerativeAIEmbeddings } from "@langchain/google-genai";
import { TaskType } from "@google/generative-ai";

const embeddings = new GoogleGenerativeAIEmbeddings({
  modelName: "embedding-001", // 768 dimensions
  taskType: TaskType.RETRIEVAL_DOCUMENT,
  title: "Document title",
});

const res = await embeddings.embedQuery("OK Google");
```

## Development

To develop the Google GenAI package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/google-genai
```

### Run tests

Test files should live within a `tests/` folder in the `src/` directory. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
$ pnpm test
$ pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
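
For reference, a manually added entrypoint in `package.json` generally looks something like the following. This is a sketch with a hypothetical entrypoint name; the exact file layout is normally generated by the build, so mirror the existing entries in the file:

```json
{
  "exports": {
    "./my-entrypoint": {
      "types": "./my-entrypoint.d.ts",
      "import": "./my-entrypoint.js",
      "require": "./my-entrypoint.cjs"
    }
  }
}
```
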
- [Google Vertex AI](/javascript/langchain-google-vertexai)
  # LangChain google-vertexai

This package contains resources to access Google AI/ML models
and other Google services via Vertex AI. Authorization to these
services uses service account credentials stored on the local
file system or provided through the Google Cloud Platform
environment it is running on.

If you are running this on a platform where the credentials cannot
be provided this way, consider using the @langchain/google-vertexai-web
package _instead_. You do not need to use both packages. See the
section on **Authorization** below.

## Installation

```bash
pnpm install @langchain/google-vertexai
```

## Authorization

Authorization is done through an API Key (for services that support it) or a Google Cloud Service Account.

To handle service accounts, this package uses the `google-auth-library`
package, and you may wish to consult the documentation for that library
about how it does so. But in short, classes in this package will use
credentials from the first of the following that apply:

1. An API Key that is passed to the constructor using the `apiKey` attribute
2. Credentials that are passed to the constructor using the `authInfo` attribute
3. An API Key that is set in the environment variable `API_KEY`
4. The Service Account credentials that are saved in a file. The path to
   this file is set in the `GOOGLE_APPLICATION_CREDENTIALS` environment
   variable.
5. If you are running on a Google Cloud Platform resource, or if you have
   logged in using `gcloud auth application-default login`, then the
   default credentials.
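
The lookup order above can be sketched as a small helper. This is an illustration of the precedence only, not the library's actual implementation; the names mirror the constructor attributes and environment variables described above:

```typescript
// Illustrates the credential lookup order: each source is tried in
// turn and the first one present wins.
type AuthSource =
  | { kind: "apiKey"; value: string }
  | { kind: "authInfo"; value: object }
  | { kind: "envApiKey"; value: string }
  | { kind: "credentialsFile"; value: string }
  | { kind: "applicationDefault" };

function resolveAuth(opts: {
  apiKey?: string;
  authInfo?: object;
  env?: Record<string, string | undefined>;
}): AuthSource {
  const env = opts.env ?? {};
  if (opts.apiKey) return { kind: "apiKey", value: opts.apiKey };
  if (opts.authInfo) return { kind: "authInfo", value: opts.authInfo };
  if (env.API_KEY) return { kind: "envApiKey", value: env.API_KEY };
  if (env.GOOGLE_APPLICATION_CREDENTIALS)
    return { kind: "credentialsFile", value: env.GOOGLE_APPLICATION_CREDENTIALS };
  return { kind: "applicationDefault" };
}
```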

## Tool Schema Limitations

When using tools with Gemini models through Vertex AI, be aware of the following Zod schema limitations:

### Unsupported Schema Features

1. **Discriminated Unions** - `.discriminatedUnion()` is not supported

   ```typescript
   // ❌ This will throw an error
   z.discriminatedUnion("type", [
     z.object({ type: z.literal("a"), value: z.string() }),
     z.object({ type: z.literal("b"), value: z.number() }),
   ]);

   // ✅ Use a flat object with optional fields instead
   z.object({
     type: z.enum(["a", "b"]),
     stringValue: z.string().optional(),
     numberValue: z.number().optional(),
   });
   ```

2. **Union Types** - `z.union()` is not supported

   ```typescript
   // ❌ This will throw an error
   z.union([z.string(), z.number()]);

   // ✅ Consider using separate optional fields
   z.object({
     stringValue: z.string().optional(),
     numberValue: z.number().optional(),
   });
   ```

3. **Positive Refinement** - `.positive()` is automatically converted

   ```typescript
   // ⚠️ This is automatically converted to .min(0.01)
   z.number().positive();

   // ✅ Prefer using .min() directly
   z.number().min(0.01);
   ```

### Error Messages

If you use unsupported schema features, you'll receive descriptive error messages:

- For union types: `"Gemini cannot handle union types (discriminatedUnion, anyOf, oneOf)"`
- For tool conversion failures: `"Failed to convert tool '[toolName]' schema for Gemini"`

### Best Practices

1. Use simple, flat object structures when possible
2. Replace discriminated unions with enums and optional fields
3. Use `.min()` instead of `.positive()` for number constraints
4. Test your tool schemas before deploying to production
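
As an illustration of what Gemini ultimately receives, the flat pattern recommended above corresponds roughly to a JSON Schema like the following (a hand-written sketch, not output generated by the library):

```json
{
  "type": "object",
  "properties": {
    "type": { "type": "string", "enum": ["a", "b"] },
    "stringValue": { "type": "string" },
    "numberValue": { "type": "number" }
  },
  "required": ["type"]
}
```
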
- [Google Vertex AI Web](/javascript/langchain-google-vertexai-web)
  # LangChain google-vertexai-web

This package contains resources to access Google AI/ML models
and other Google services via Vertex AI. Authorization to these
services uses either an API Key or service account credentials
that are included in an environment variable.

If you are running this on the Google Cloud Platform, or in a way
where service account credentials can be stored on a file system,
consider using the @langchain/google-vertexai
package *instead*. You do not need to use both packages. See the
section on **Authorization** below.


## Installation

```bash
pnpm install @langchain/google-vertexai-web
```


## Authorization

Authorization is done through an API Key (for services that support it) or a Google Cloud Service Account.

To handle service accounts, this package uses the `google-auth-library`
package, and you may wish to consult the documentation for that library
about how it does so. But in short, classes in this package will use
credentials from the first of the following that apply:

1. An API Key that is passed to the constructor using the `apiKey` attribute
2. Credentials that are passed to the constructor using the `authInfo` attribute
3. An API Key that is set in the environment variable `API_KEY`
4. The Service Account credentials that are saved directly into the
   `GOOGLE_WEB_CREDENTIALS` environment variable
5. The Service Account credentials that are saved directly into the
   `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable (deprecated)
- [Google WebAuth](/javascript/langchain-google-webauth)
  # LangChain google-webauth

This package contains resources to access Google AI/ML models
and other Google services. Authorization to these services uses
either an API Key or service account credentials that are included
in an environment variable.

If you are running this on the Google Cloud Platform, or in a way
where service account credentials can be stored on a file system,
consider using the @langchain/google-gauth
package _instead_. You do not need to use both packages. See the
section on **Authorization** below.

## Installation

```bash
pnpm install @langchain/google-webauth
```

## Authorization

Authorization is either done through the use of an API Key, if it is
supported for the service you're using, or a Google Cloud Service
Account.

To handle service accounts, this package uses the `google-auth-library`
package, and you may wish to consult the documentation for that library
about how it does so. But in short, classes in this package will use
credentials from the first of the following that apply:

1. An API Key that is passed to the constructor using the `apiKey` attribute
2. Credentials that are passed to the constructor using the `authInfo` attribute
3. An API Key that is set in the environment variable `API_KEY`
4. The Service Account credentials that are saved directly into the
   `GOOGLE_WEB_CREDENTIALS` environment variable
5. The Service Account credentials that are saved directly into the
   `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable (deprecated)
- [Groq](/javascript/langchain-groq)
  # @langchain/groq

This package contains the LangChain.js integrations for Groq via the `groq-sdk` package.

## Installation

```bash npm2yarn
npm install @langchain/groq @langchain/core
```

## Chat models

This package adds support for Groq chat model inference.

Set the necessary environment variable (or pass it in via the constructor):

```bash
export GROQ_API_KEY=
```

```typescript
import { ChatGroq } from "@langchain/groq";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY, // Default value.
  model: "llama-3.3-70b-versatile",
});

const message = new HumanMessage("What color is the sky?");

const res = await model.invoke([message]);
```

## Development

To develop the `@langchain/groq` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/groq
```

### Run tests

Test files should live within a `tests/` folder in the `src/` directory. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
$ pnpm test
$ pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Mistral AI](/javascript/langchain-mistralai)
  # @langchain/mistralai

This package contains the LangChain.js integrations for Mistral through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/mistralai @langchain/core
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of `@langchain/core`.
You can do so by adding the appropriate field to your project's `package.json`, like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/mistralai": "^0.0.0"
  },
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding fields for `yarn`, `npm`, and `pnpm` to maximize compatibility.

## Chat Models

This package contains the `ChatMistralAI` class, which is the recommended way to interface with the Mistral series of models.

To use it, install the package and configure your environment:

```bash
export MISTRAL_API_KEY=your-api-key
```

Then initialize the model:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  modelName: "mistral-small",
});
const response = await model.invoke([new HumanMessage("Hello world!")]);
```

### Streaming

```typescript
import { ChatMistralAI } from "@langchain/mistralai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  modelName: "mistral-small",
});
const stream = await model.stream([new HumanMessage("Hello world!")]);
for await (const chunk of stream) {
  console.log(chunk.content);
}
```

## Embeddings

This package also adds support for Mistral's embeddings model.

```typescript
import { MistralAIEmbeddings } from "@langchain/mistralai";

const embeddings = new MistralAIEmbeddings({
  apiKey: process.env.MISTRAL_API_KEY,
});
const res = await embeddings.embedQuery("Hello world");
```

## Development

To develop the Mistral package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/mistralai
```

### Run tests

Test files should live within a `tests/` folder in the `src/` directory. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
$ pnpm test
$ pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Mixedbread AI](/javascript/langchain-mixedbread-ai)
  # @langchain/mixedbread-ai

This package contains the LangChain.js integrations for the [Mixedbread AI API](https://mixedbread.ai/).

## Installation

```bash
npm install @langchain/mixedbread-ai
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/). If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of `@langchain/core`.

## Authentication

To use this package, you need a Mixedbread AI API key. You can obtain your API key by signing up at [Mixedbread AI](https://mixedbread.ai).

Either set the `MXBAI_API_KEY` environment variable to your Mixedbread AI API key, or pass it as the `apiKey` option to the constructor of the class you are using.

## Embeddings

This package provides access to the different embedding models provided by the Mixedbread AI API, such as the "mixedbread-ai/mxbai-embed-large-v1" model.

Learn more: [Embeddings API](https://mixedbread.ai/docs/embeddings)

```typescript
import { MixedbreadAIEmbeddings } from "@langchain/mixedbread-ai";

const embeddings = new MixedbreadAIEmbeddings({ apiKey: "your-api-key" });
const texts = ["Baking bread is fun", "I love baking"];
const result = await embeddings.embedDocuments(texts);
console.log(result);
```

## Reranking

This package provides access to the reranking API provided by Mixedbread AI. It allows you to rerank a list of documents based on a query. Available models include "mixedbread-ai/mxbai-rerank-large-v1".

Learn more: [Reranking API](https://mixedbread.ai/docs/reranking)

```typescript
import { MixedbreadAIReranker } from "@langchain/mixedbread-ai";
import { Document } from "@langchain/core/documents";

const reranker = new MixedbreadAIReranker({ apiKey: "your-api-key" });
const documents = [
  new Document({ pageContent: "To bake bread you need flour" }),
  new Document({ pageContent: "To bake bread you need yeast" }),
];
const query = "What do you need to bake bread?";
const result = await reranker.compressDocuments(documents, query);
console.log(result);
```

## Development

To develop the `@langchain/mixedbread-ai` package, follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/mixedbread-ai
```

### Run tests

Test files should live within a `tests/` folder in the `src/` directory. Unit tests should end in `.test.ts` and integration tests should end in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entry points

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entry point.
- [MongoDB](/javascript/langchain-mongodb)
  # @langchain/mongodb

This package contains the LangChain.js integrations for MongoDB through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/mongodb @langchain/core
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of `@langchain/core`.
You can do so by adding the appropriate field to your project's `package.json`, like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/mongodb": "^0.0.0"
  },
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding fields for `yarn`, `npm`, and `pnpm` to maximize compatibility.

## Development

To develop the MongoDB package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/mongodb
```

### Run tests

Test files should live within a `tests/` folder in the `src/` directory. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
$ pnpm test
$ pnpm test:int
```

The tests in this package require a running MongoDB Atlas instance, either local or a remote Atlas cluster. A URI pointing to
an existing Atlas cluster can be provided to the tests by setting the `MONGODB_ATLAS_URI` environment variable:

```bash
MONGODB_ATLAS_URI='<atlas URI>' pnpm test:int
```

If running against a remote Atlas cluster, the user must have `readWrite` permissions on the `langchain_test` database.

If no `MONGODB_ATLAS_URI` is provided, the test suite will attempt to launch a local Atlas instance in a container using [testcontainers](https://testcontainers.com/). This requires a container engine; see the [testcontainers backing engine documentation](https://node.testcontainers.org/supported-container-runtimes/) for details.

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Nomic](/javascript/langchain-nomic)
  # @langchain/nomic

This package contains the LangChain.js integrations for Nomic via the `@nomic-ai/atlas` package.

## Installation

```bash npm2yarn
npm install @langchain/nomic @langchain/core
```

## Embeddings

This package adds support for Nomic embeddings.

Currently, Nomic offers two embedding models:

- `nomic-embed-text-v1`
- `nomic-embed-text-v1.5`

`nomic-embed-text-v1.5` lets you customize the number of dimensions returned. It defaults to the largest possible number of dimensions (768), or you can select 64, 128, 256, or 512.

Set the necessary environment variable (or pass it in via the constructor):

```bash
export NOMIC_API_KEY=
```

```typescript
import { NomicEmbeddings } from "@langchain/nomic";

const nomicEmbeddings = new NomicEmbeddings({
  apiKey: process.env.NOMIC_API_KEY, // Default value.
  modelName: "nomic-embed-text-v1", // Default value.
});

const docs = [
  "hello world",
  "nomic embeddings!",
  "super special langchain integration package",
  "what color is the sky?",
];

const embeddings = await nomicEmbeddings.embedDocuments(docs);
```

## Development

To develop the `@langchain/nomic` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/nomic
```

### Run tests

Test files should live within a `tests/` folder in the `src/` directory. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
$ pnpm test
$ pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Ollama](/javascript/langchain-ollama)
  # @langchain/ollama

This package contains the LangChain.js integrations for Ollama via the `ollama` TypeScript SDK.

## Installation

```bash npm2yarn
npm install @langchain/ollama @langchain/core
```

To run models locally, install [Ollama](https://ollama.com/) and pull the model you want to use, for example `ollama pull llama3`. By default, the integration connects to an Ollama server running at `http://localhost:11434`.

## Chat Models

```typescript
import { ChatOllama } from "@langchain/ollama";

const model = new ChatOllama({
  model: "llama3", // Default value.
});

const result = await model.invoke([["human", "Hello, how are you?"]]);
```

## Development

To develop the `@langchain/ollama` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/ollama
```

### Run tests

Test files should live within a `tests/` folder in the `src/` directory. Unit tests should end in `.test.ts` and integration tests should
end in `.int.test.ts`:

```bash
$ pnpm test
$ pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [OpenAI](/javascript/langchain-openai)
  # @langchain/openai

This package contains the LangChain.js integrations for OpenAI through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/openai @langchain/core
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of `@langchain/core`.
You can do so by adding the appropriate fields to your project's `package.json`, like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/openai": "^0.0.0"
  },
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding fields for `pnpm`, `npm`, and `yarn` to maximize compatibility.

## Chat Models

This package contains the `ChatOpenAI` class, which is the recommended way to interface with the OpenAI series of models.

To use it, install the package and configure your environment:

```bash
export OPENAI_API_KEY=your-api-key
```

Then initialize the model:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4-1106-preview",
});
const response = await model.invoke([new HumanMessage("Hello world!")]);
```

### Streaming

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4-1106-preview",
});
const stream = await model.stream([new HumanMessage("Hello world!")]);
for await (const chunk of stream) {
  console.log(chunk.content);
}
```

## Tools

This package provides LangChain-compatible wrappers for OpenAI's built-in tools for the Responses API.

### Web Search Tool

The web search tool allows OpenAI models to search the web for up-to-date information before generating a response. Web search supports three main types:

1. **Non-reasoning web search**: Quick lookups where the model passes queries directly to the search tool
2. **Agentic search with reasoning models**: The model actively manages the search process, analyzing results and deciding whether to keep searching
3. **Deep research**: Extended investigations using models like `o3-deep-research` or `gpt-5` with high reasoning effort

```typescript
import { ChatOpenAI, tools } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
});

// Basic usage
const response = await model.invoke(
  "What was a positive news story from today?",
  {
    tools: [tools.webSearch()],
  }
);
```

**Domain filtering** - Limit search results to specific domains (up to 100):

```typescript
const response = await model.invoke("Latest AI research news", {
  tools: [
    tools.webSearch({
      filters: {
        allowedDomains: ["arxiv.org", "nature.com", "science.org"],
      },
    }),
  ],
});
```

**User location** - Refine search results based on geography:

```typescript
const response = await model.invoke("What are the best restaurants near me?", {
  tools: [
    tools.webSearch({
      userLocation: {
        type: "approximate",
        country: "US",
        city: "San Francisco",
        region: "California",
        timezone: "America/Los_Angeles",
      },
    }),
  ],
});
```

**Cache-only mode** - Disable live internet access:

```typescript
const response = await model.invoke("Find information about OpenAI", {
  tools: [
    tools.webSearch({
      externalWebAccess: false,
    }),
  ],
});
```

For more information, see [OpenAI's Web Search Documentation](https://platform.openai.com/docs/guides/tools-web-search).

### MCP Tool (Model Context Protocol)

The MCP tool allows OpenAI models to connect to remote MCP servers and OpenAI-maintained service connectors, giving models access to external tools and services.

There are two ways to use MCP tools:

1. **Remote MCP servers**: Connect to any public MCP server via URL
2. **Connectors**: Use OpenAI-maintained wrappers for popular services like Google Workspace or Dropbox

**Remote MCP server** - Connect to any MCP-compatible server:

```typescript
import { ChatOpenAI, tools } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

const response = await model.invoke("Roll 2d4+1", {
  tools: [
    tools.mcp({
      serverLabel: "dmcp",
      serverDescription: "A D&D MCP server for dice rolling",
      serverUrl: "https://dmcp-server.deno.dev/sse",
      requireApproval: "never",
    }),
  ],
});
```

**Service connectors** - Use OpenAI-maintained connectors for popular services:

```typescript
const response = await model.invoke("What's on my calendar today?", {
  tools: [
    tools.mcp({
      serverLabel: "google_calendar",
      connectorId: "connector_googlecalendar",
      authorization: "<oauth-access-token>",
      requireApproval: "never",
    }),
  ],
});
```

For more information, see [OpenAI's MCP Documentation](https://platform.openai.com/docs/guides/tools-remote-mcp).

### Code Interpreter Tool

The Code Interpreter tool allows models to write and run Python code in a sandboxed environment to solve complex problems.

Use Code Interpreter for:

- **Data analysis**: Processing files with diverse data and formatting
- **File generation**: Creating files with data and images of graphs
- **Iterative coding**: Writing and running code iteratively to solve problems
- **Visual intelligence**: Cropping, zooming, rotating, and transforming images

```typescript
import { ChatOpenAI, tools } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4.1" });

// Basic usage with auto container (default 1GB memory)
const response = await model.invoke("Solve the equation 3x + 11 = 14", {
  tools: [tools.codeInterpreter()],
});
```

**Memory configuration** - Choose from 1GB (default), 4GB, 16GB, or 64GB:

```typescript
const response = await model.invoke(
  "Analyze this large dataset and create visualizations",
  {
    tools: [
      tools.codeInterpreter({
        container: { memoryLimit: "4g" },
      }),
    ],
  }
);
```

**With files** - Make uploaded files available to the code:

```typescript
const response = await model.invoke("Process the uploaded CSV file", {
  tools: [
    tools.codeInterpreter({
      container: {
        memoryLimit: "4g",
        fileIds: ["file-abc123", "file-def456"],
      },
    }),
  ],
});
```

**Explicit container** - Use a pre-created container ID:

```typescript
const response = await model.invoke("Continue working with the data", {
  tools: [
    tools.codeInterpreter({
      container: "cntr_abc123",
    }),
  ],
});
```

> **Note**: Containers expire after 20 minutes of inactivity. Although the tool is called "Code Interpreter", the model knows it as the "python tool"; to invoke it explicitly, ask for "the python tool" in your prompts.

For more information, see [OpenAI's Code Interpreter Documentation](https://platform.openai.com/docs/guides/tools-code-interpreter).

### File Search Tool

The File Search tool allows models to search your files for relevant information using semantic and keyword search. It enables retrieval from a knowledge base of previously uploaded files stored in vector stores.

**Prerequisites**: Before using File Search, you must:

1. Upload files to the File API with `purpose: "assistants"`
2. Create a vector store
3. Add files to the vector store

```typescript
import { ChatOpenAI, tools } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4.1" });

const response = await model.invoke("What is deep research by OpenAI?", {
  tools: [
    tools.fileSearch({
      vectorStoreIds: ["vs_abc123"],
      // maxNumResults: 5, // Limit results for lower latency
      // filters: { type: "eq", key: "category", value: "blog" }, // Metadata filtering
      // filters: { type: "and", filters: [                       // Compound filters (AND/OR)
      //   { type: "eq", key: "category", value: "technical" },
      //   { type: "gte", key: "year", value: 2024 },
      // ]},
      // rankingOptions: { scoreThreshold: 0.8, ranker: "auto" }, // Customize scoring
    }),
  ],
});
```

Filter operators: `eq` (equals), `ne` (not equal), `gt` (greater than), `gte` (greater than or equal), `lt` (less than), `lte` (less than or equal).

For more information, see [OpenAI's File Search Documentation](https://platform.openai.com/docs/guides/tools-file-search).

### Image Generation Tool

The Image Generation tool allows models to generate or edit images using text prompts and optional image inputs. It leverages the GPT Image model and automatically optimizes text inputs for improved performance.

Use Image Generation for:

- **Creating images from text**: Generate images from detailed text descriptions
- **Editing existing images**: Modify images based on text instructions
- **Multi-turn image editing**: Iteratively refine images across conversation turns
- **Various output formats**: Support for PNG, JPEG, and WebP formats

```typescript
import { ChatOpenAI, tools } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

// Basic usage - generate an image
const response = await model.invoke(
  "Generate an image of a gray tabby cat hugging an otter with an orange scarf",
  { tools: [tools.imageGeneration()] }
);

// Access the generated image (base64-encoded)
const imageOutput = response.additional_kwargs.tool_outputs?.find(
  (output) => output.type === "image_generation_call"
);
if (imageOutput?.result) {
  const fs = await import("fs");
  fs.writeFileSync("output.png", Buffer.from(imageOutput.result, "base64"));
}
```

**Custom size and quality** - Configure output dimensions and quality:

```typescript
const response = await model.invoke("Draw a beautiful sunset over mountains", {
  tools: [
    tools.imageGeneration({
      size: "1536x1024", // Landscape format (also: "1024x1024", "1024x1536", "auto")
      quality: "high", // Quality level (also: "low", "medium", "auto")
    }),
  ],
});
```

**Output format and compression** - Choose format and compression level:

```typescript
const response = await model.invoke("Create a product photo", {
  tools: [
    tools.imageGeneration({
      outputFormat: "jpeg", // Format (also: "png", "webp")
      outputCompression: 90, // Compression 0-100 (for JPEG/WebP)
    }),
  ],
});
```

**Transparent background** - Generate images with transparency:

```typescript
const response = await model.invoke(
  "Create a logo with transparent background",
  {
    tools: [
      tools.imageGeneration({
        background: "transparent", // Background type (also: "opaque", "auto")
        outputFormat: "png",
      }),
    ],
  }
);
```

**Streaming with partial images** - Get visual feedback during generation:

```typescript
const response = await model.invoke("Draw a detailed fantasy castle", {
  tools: [
    tools.imageGeneration({
      partialImages: 2, // Number of partial images (0-3)
    }),
  ],
});
```

**Force image generation** - Ensure the model uses the image generation tool:

```typescript
const response = await model.invoke("A serene lake at dawn", {
  tools: [tools.imageGeneration()],
  tool_choice: { type: "image_generation" },
});
```

**Multi-turn editing** - Refine images across conversation turns:

```typescript
// First turn: generate initial image
const response1 = await model.invoke("Draw a red car", {
  tools: [tools.imageGeneration()],
});

// Second turn: edit the image
const response2 = await model.invoke(
  [response1, new HumanMessage("Now change the car color to blue")],
  { tools: [tools.imageGeneration()] }
);
```

> **Prompting tips**: Use terms like "draw" or "edit" for best results. For combining images, say "edit the first image by adding this element" instead of "combine" or "merge".

Supported models: `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`

For more information, see [OpenAI's Image Generation Documentation](https://platform.openai.com/docs/guides/tools-image-generation).

### Computer Use Tool

The Computer Use tool allows models to control computer interfaces by simulating mouse clicks, keyboard input, scrolling, and more. It uses OpenAI's Computer-Using Agent (CUA) model to understand screenshots and suggest actions.

> **Beta**: Computer use is in beta. Use in sandboxed environments only and do not use for high-stakes or authenticated tasks. Always implement human-in-the-loop for important decisions.

**How it works**: The tool operates in a continuous loop:

1. Model sends computer actions (click, type, scroll, etc.)
2. Your code executes these actions in a controlled environment
3. You capture a screenshot of the result
4. Send the screenshot back to the model
5. Repeat until the task is complete

```typescript
import { ChatOpenAI, tools } from "@langchain/openai";

const model = new ChatOpenAI({ model: "computer-use-preview" });

// With execute callback for automatic action handling
const computer = tools.computerUse({
  displayWidth: 1024,
  displayHeight: 768,
  environment: "browser",
  execute: async (action) => {
    if (action.type === "screenshot") {
      return captureScreenshot();
    }
    if (action.type === "click") {
      await page.mouse.click(action.x, action.y, { button: action.button });
      return captureScreenshot();
    }
    if (action.type === "type") {
      await page.keyboard.type(action.text);
      return captureScreenshot();
    }
    if (action.type === "scroll") {
      await page.mouse.move(action.x, action.y);
      await page.evaluate(
        `window.scrollBy(${action.scroll_x}, ${action.scroll_y})`
      );
      return captureScreenshot();
    }
    // Handle other actions...
    return captureScreenshot();
  },
});

const llmWithComputer = model.bindTools([computer]);
const response = await llmWithComputer.invoke(
  "Check the latest news on bing.com"
);
```

For more information, see [OpenAI's Computer Use Documentation](https://platform.openai.com/docs/guides/tools-computer-use).

### Local Shell Tool

The Local Shell tool allows models to run shell commands locally on a machine you provide. Commands are executed inside your own runtime—the API only returns the instructions.

> **Security Warning**: Running arbitrary shell commands can be dangerous. Always sandbox execution or add strict allow/deny-lists before forwarding commands to the system shell.

> **Note**: This tool is designed to work with [Codex CLI](https://github.com/openai/codex) and the `codex-mini-latest` model.

```typescript
import { ChatOpenAI, tools } from "@langchain/openai";
import { exec } from "child_process";
import { promisify } from "util";

const execAsync = promisify(exec);
const model = new ChatOpenAI({ model: "codex-mini-latest" });

// With execute callback for automatic command handling
const shell = tools.localShell({
  execute: async (action) => {
    const { command, env, working_directory, timeout_ms } = action;
    const result = await execAsync(command.join(" "), {
      cwd: working_directory ?? process.cwd(),
      env: { ...process.env, ...env },
      timeout: timeout_ms ?? undefined,
    });
    return result.stdout + result.stderr;
  },
});

const llmWithShell = model.bindTools([shell]);
const response = await llmWithShell.invoke(
  "List files in the current directory"
);
```

**Action properties**: The model returns actions with these properties:

- `command` - Array of argv tokens to execute
- `env` - Environment variables to set
- `working_directory` - Directory to run the command in
- `timeout_ms` - Suggested timeout (enforce your own limits)
- `user` - Optional user to run the command as
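
Following up on the security warning above, a simple allow-list check before forwarding a command goes a long way. This is an illustrative sketch — `ALLOWED_BINARIES` and `isCommandAllowed` are hypothetical names, not part of the package:

```typescript
// Hypothetical allow-list guard for local shell actions.
// The first argv token is the binary; reject anything not explicitly allowed.
const ALLOWED_BINARIES = new Set(["ls", "cat", "grep", "echo", "pwd"]);

function isCommandAllowed(command: string[]): boolean {
  if (command.length === 0) return false;
  // Strip any directory prefix so "/bin/ls" and "ls" are treated alike.
  const binary = command[0].split("/").pop() ?? "";
  return ALLOWED_BINARIES.has(binary);
}
```

You could call `isCommandAllowed(action.command)` at the top of the `execute` callback and return an error string instead of executing when the check fails.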

For more information, see [OpenAI's Local Shell Documentation](https://platform.openai.com/docs/guides/tools-local-shell).

### Shell Tool

The Shell tool allows models to run shell commands through your integration. Unlike Local Shell, this tool supports executing multiple commands concurrently and is designed for `gpt-5.1`.

> **Security Warning**: Running arbitrary shell commands can be dangerous. Always sandbox execution or add strict allow/deny-lists before forwarding commands to the system shell.

**Use cases**:

- **Automating filesystem or process diagnostics** – e.g., "find the largest PDF under ~/Documents"
- **Extending model capabilities** – Using built-in UNIX utilities, Python runtime, and other CLIs
- **Running multi-step build and test flows** – Chaining commands like `pip install` and `pytest`
- **Complex agentic coding workflows** – Using with `apply_patch` for file operations

```typescript
import { ChatOpenAI, tools } from "@langchain/openai";
import { exec as execCallback } from "node:child_process";
import { promisify } from "node:util";

const exec = promisify(execCallback);

const model = new ChatOpenAI({ model: "gpt-5.1" });

// With execute callback for automatic command handling
const shellTool = tools.shell({
  execute: async (action) => {
    const outputs = await Promise.all(
      action.commands.map(async (cmd) => {
        try {
          const { stdout, stderr } = await exec(cmd, {
            timeout: action.timeout_ms ?? undefined,
          });
          return {
            stdout,
            stderr,
            outcome: { type: "exit" as const, exit_code: 0 },
          };
        } catch (error: any) {
          const timedOut = error.killed && error.signal === "SIGTERM";
          return {
            stdout: error.stdout ?? "",
            stderr: error.stderr ?? String(error),
            outcome: timedOut
              ? { type: "timeout" as const }
              : { type: "exit" as const, exit_code: error.code ?? 1 },
          };
        }
      })
    );
    return {
      output: outputs,
      maxOutputLength: action.max_output_length,
    };
  },
});

const llmWithShell = model.bindTools([shellTool]);
const response = await llmWithShell.invoke(
  "Find the largest PDF file in ~/Documents"
);
```

**Action properties**: The model returns actions with these properties:

- `commands` - Array of shell commands to execute (can run concurrently)
- `timeout_ms` - Optional timeout in milliseconds (enforce your own limits)
- `max_output_length` - Optional maximum characters to return per command

**Return format**: Your execute function should return a `ShellResult`:

```typescript
interface ShellResult {
  output: Array<{
    stdout: string;
    stderr: string;
    outcome: { type: "exit"; exit_code: number } | { type: "timeout" };
  }>;
  maxOutputLength?: number | null; // Pass back from action if provided
}
```
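
If you enforce `max_output_length` yourself before returning, a small helper keeps each command's output bounded. A minimal sketch — `truncateOutput` is an illustrative name, not part of the package:

```typescript
// Hypothetical helper: clamp a command's output to the model-suggested limit.
function truncateOutput(text: string, maxLength?: number | null): string {
  if (maxLength == null || text.length <= maxLength) return text;
  // Keep the tail, which usually contains the most relevant error output.
  return text.slice(text.length - maxLength);
}
```

Apply it to both `stdout` and `stderr` before building each entry in `output`.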

For more information, see [OpenAI's Shell Documentation](https://platform.openai.com/docs/guides/tools-shell).

### Apply Patch Tool

The Apply Patch tool allows models to propose structured diffs that your integration applies. This enables iterative, multi-step code editing workflows where the model can create, update, and delete files in your codebase.

**When to use**:

- **Multi-file refactors** – Rename symbols, extract helpers, or reorganize modules
- **Bug fixes** – Have the model both diagnose issues and emit precise patches
- **Tests & docs generation** – Create new test files, fixtures, and documentation
- **Migrations & mechanical edits** – Apply repetitive, structured updates

> **Security Warning**: Applying patches can modify files in your codebase. Always validate paths, implement backups, and consider sandboxing.

> **Note**: This tool is designed to work with the `gpt-5.1` model.

```typescript
import { ChatOpenAI, tools } from "@langchain/openai";
import { applyDiff } from "@openai/agents";
import * as fs from "fs/promises";

const model = new ChatOpenAI({ model: "gpt-5.1" });

// With execute callback for automatic patch handling
const patchTool = tools.applyPatch({
  execute: async (operation) => {
    if (operation.type === "create_file") {
      const content = applyDiff("", operation.diff, "create");
      await fs.writeFile(operation.path, content);
      return `Created ${operation.path}`;
    }
    if (operation.type === "update_file") {
      const current = await fs.readFile(operation.path, "utf-8");
      const newContent = applyDiff(current, operation.diff);
      await fs.writeFile(operation.path, newContent);
      return `Updated ${operation.path}`;
    }
    if (operation.type === "delete_file") {
      await fs.unlink(operation.path);
      return `Deleted ${operation.path}`;
    }
    return "Unknown operation type";
  },
});

const llmWithPatch = model.bindTools([patchTool]);
const response = await llmWithPatch.invoke(
  "Rename the fib() function to fibonacci() in lib/fib.py"
);
```

**Operation types**: The model returns operations with these properties:

- `create_file` – Create a new file at `path` with content from `diff`
- `update_file` – Modify an existing file at `path` using V4A diff format in `diff`
- `delete_file` – Remove a file at `path`

**Best practices**:

- **Path validation**: Prevent directory traversal and restrict edits to allowed directories
- **Backups**: Consider backing up files before applying patches
- **Error handling**: Return descriptive error messages so the model can recover
- **Atomicity**: Decide whether you want "all-or-nothing" semantics (rollback if any patch fails)
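
The path-validation practice above can be implemented as a containment check against an allowed root directory. A minimal sketch, assuming patches should only touch files under `root` — `isPathAllowed` is a hypothetical helper, not part of the package:

```typescript
import * as path from "node:path";

// Hypothetical guard: resolve the patch path and confirm it stays inside root.
function isPathAllowed(root: string, candidate: string): boolean {
  const resolvedRoot = path.resolve(root);
  const resolved = path.resolve(resolvedRoot, candidate);
  // path.relative yields ".." segments when `resolved` escapes the root.
  const relative = path.relative(resolvedRoot, resolved);
  return relative !== "" && !relative.startsWith("..") && !path.isAbsolute(relative);
}
```

Checking `isPathAllowed(projectRoot, operation.path)` before each operation blocks directory traversal via paths like `../../etc/passwd`.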

For more information, see [OpenAI's Apply Patch Documentation](https://platform.openai.com/docs/guides/tools-apply-patch).

## Embeddings

This package also adds support for OpenAI's embeddings model.

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY,
});
const res = await embeddings.embedQuery("Hello world");
```

## Development

To develop the OpenAI package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/openai
```

### Run tests

Test files should live in a `tests/` folder inside `src/`. Unit tests should end in `.test.ts` and integration tests in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [OpenRouter](/javascript/langchain-openrouter)
  # @langchain/openrouter

This package contains the LangChain.js integrations for [OpenRouter](https://openrouter.ai/).

## Installation

```bash npm2yarn
npm install @langchain/openrouter @langchain/core
```

This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of `@langchain/core`.
You can do so by adding appropriate fields to your project's `package.json` like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/openrouter": "^0.0.1",
    "@langchain/core": "^1.0.0"
  },
  "resolutions": {
    "@langchain/core": "^1.0.0"
  },
  "overrides": {
    "@langchain/core": "^1.0.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^1.0.0"
    }
  }
}
```

The field you need depends on the package manager you're using, but we recommend adding fields for the common `pnpm`, `npm`, and `yarn` package managers to maximize compatibility.

## Chat Models

This package contains the `ChatOpenRouter` class, which is the recommended way to interface with any model available on OpenRouter. Pass any OpenRouter model identifier (e.g. `"anthropic/claude-4-sonnet"`, `"openai/gpt-4o"`) as the `model` param.

Set the necessary environment variable (or pass it in via the constructor):

```bash
export OPENROUTER_API_KEY=your-api-key
```

Then initialize the model:

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const model = new ChatOpenRouter({
  model: "openai/gpt-4o",
});
const response = await model.invoke([{ role: "user", content: "Hello world!" }]);
```

### Streaming

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const model = new ChatOpenRouter({
  model: "openai/gpt-4o",
});
const stream = await model.stream([{ role: "user", content: "Hello world!" }]);
for await (const chunk of stream) {
  console.log(chunk.content);
}
```

### Tool Calling

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const adder = tool(async ({ a, b }) => `${a + b}`, {
  name: "add",
  description: "Add two numbers",
  schema: z.object({ a: z.number(), b: z.number() }),
});

const model = new ChatOpenRouter({
  model: "openai/gpt-4o",
}).bindTools([adder]);

const response = await model.invoke("What is 2 + 3?");
```

### Structured Output

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";
import { z } from "zod";

const model = new ChatOpenRouter({
  model: "openai/gpt-4o",
});

const structured = model.withStructuredOutput(
  z.object({
    answer: z.string(),
    confidence: z.number(),
  })
);

const response = await structured.invoke("What is the capital of France?");
```

### OpenRouter-Specific Features

OpenRouter supports model routing, provider preferences, and plugins:

```typescript
import { ChatOpenRouter } from "@langchain/openrouter";

const model = new ChatOpenRouter({
  model: "openai/gpt-4o",
  models: ["openai/gpt-4o", "anthropic/claude-4-sonnet"],
  route: "fallback",
  provider: {
    allow_fallbacks: true,
  },
  transforms: ["middle-out"],
});
```

## Development

To develop the `@langchain/openrouter` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/openrouter
```

### Run tests

Test files should live in a `tests/` folder inside `src/`. Unit tests should end in `.test.ts` and integration tests in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Pinecone](/javascript/langchain-pinecone)
  # @langchain/pinecone

This package contains the LangChain.js integrations for Pinecone through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/pinecone @langchain/core @pinecone-database/pinecone
```

## Development

To develop the Pinecone package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/pinecone
```

### Run tests

Test files should live in a `tests/` folder inside `src/`. Unit tests should end in `.test.ts` and integration tests in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Qdrant](/javascript/langchain-qdrant)
  # @langchain/qdrant

This package contains the LangChain.js integration for the [Qdrant](https://qdrant.tech/) vector database.

## Installation

```bash
npm install @langchain/qdrant
```

## Development

To develop this package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/qdrant
```

### Run tests

Test files should live in a `tests/` folder inside `src/`. Unit tests should end in `.test.ts` and integration tests in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Redis](/javascript/langchain-redis)
  # @langchain/redis

This package contains the LangChain.js integrations for Redis through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/redis @langchain/core
```

## Development

To develop the Redis package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/redis
```

### Run tests

Test files should live in a `tests/` folder inside `src/`. Unit tests should end in `.test.ts` and integration tests in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.

## Migration Guide: RedisVectorStore to FluentRedisVectorStore

The `FluentRedisVectorStore` is the recommended approach for new projects. It provides a more powerful and type-safe filtering API with support for complex metadata queries. This guide helps you migrate from the legacy `RedisVectorStore` to `FluentRedisVectorStore`.

### Key Differences

| Feature                            | RedisVectorStore                           | FluentRedisVectorStore                       |
|------------------------------------|--------------------------------------------|----------------------------------------------|
| **Metadata Schema Definition**     | `Record<string, CustomSchemaField>`        | `MetadataFieldSchema[]`                      |
| **Inferred Metadata Schema**       | No, only custom schema supported           | Yes, based on metadata when adding documents |
| **Pre-filter - Definition**        | String arrays or raw query strings         | Type-safe `FilterExpression` objects         |
| **Pre-filter - Nested conditions** | All filters joined by single AND condition | AND, OR, nesting supported                   |
| **Pre-filter - conditions types**  | Numeric, Tag and Text                      | Numeric, Tag, Text, Geo, Timestamp           |
| **Metadata Storage**               | JSON blob + optional indexed fields        | Individual indexed fields (no JSON blob)     |

### Step 1: Update Imports

**Before (RedisVectorStore):**
```typescript
import { RedisVectorStore } from "@langchain/redis";
```

**After (FluentRedisVectorStore):**
```typescript
import { FluentRedisVectorStore, Tag, Num, Text, Geo } from "@langchain/redis";
```

### Step 2: Convert Metadata Schema

The schema format has changed from an object-based to an array-based structure.

**Before (RedisVectorStore):**
```typescript
const customSchema = {
  userId: { type: SchemaFieldTypes.TAG, required: true },
  price: { type: SchemaFieldTypes.NUMERIC, SORTABLE: true },
  description: { type: SchemaFieldTypes.TEXT },
  location: { type: SchemaFieldTypes.GEO }
};
```

**After (FluentRedisVectorStore):**
```typescript
const customSchema = [
  { name: "userId", type: "tag" },
  { name: "price", type: "numeric", options: { sortable: true } },
  { name: "description", type: "text" },
  { name: "location", type: "geo" }
];
```

### Step 3: Update Configuration

**Before:**
```typescript
const vectorStore = await RedisVectorStore.fromDocuments(
  documents,
  embeddings,
  {
    redisClient: client,
    indexName: "products",
    customSchema: {
      category: { type: SchemaFieldTypes.TAG },
      price: { type: SchemaFieldTypes.NUMERIC, SORTABLE: true }
    }
  }
);
```

**After:**
```typescript
const vectorStore = await FluentRedisVectorStore.fromDocuments(
  documents,
  embeddings,
  {
    redisClient: client,
    indexName: "products",
    customSchema: [
      { name: "category", type: "tag" },
      { name: "price", type: "numeric", options: { sortable: true } }
    ]
  }
);
```

### Step 4: Update Search Queries with Filters

The filtering API has changed significantly. Instead of passing metadata objects or string arrays, you now use fluent filter expressions.

**Before (RedisVectorStore):**
```typescript
// Simple metadata filtering
const results = await vectorStore.similaritySearchVectorWithScoreAndMetadata(
  queryVector,
  5,
  { category: "electronics", price: { min: 100, max: 1000 } }
);

// Or with string-based filters
const results = await vectorStore.similaritySearchVectorWithScore(
  queryVector,
  5,
  ["electronics", "gadgets"]
);
```

**After (FluentRedisVectorStore):**
```typescript
// Custom filter expression with the fluent API
const results = await vectorStore.similaritySearchVectorWithScore(
  queryVector,
  5,
  Tag("category").eq("electronics").and(Num("price").between(100, 1000))
);

// Basic filter expression with the fluent API
const results = await vectorStore.similaritySearchVectorWithScore(
  queryVector,
  5,
  Tag("metadata").eq("electronics", "gadgets")
);
```

### Step 5: Database Schema Migration

The `FluentRedisVectorStore` only supports metadata stored in individual fields, alongside the vector data and content data. 
It is not compatible with the implementation of the RedisVectorStore which stores metadata as a JSON blob in a single field.
The custom schema option of the `RedisVectorStore` could be migrated to the `FluentRedisVectorStore` following the instructions in step 2.

To avoid ambiguous results, it's recommended to create a new index with the updated schema and migrate data.

### Step 6: Update Application Code

Replace all instances of `RedisVectorStore` with `FluentRedisVectorStore` and update filter usage:

**Before:**
```typescript
async function searchProducts(query: string, category?: string) {
  const results = await vectorStore.similaritySearchVectorWithScoreAndMetadata(
    await embeddings.embedQuery(query),
    5,
    category ? { category } : undefined
  );
  return results;
}
```

**After:**
```typescript
async function searchProducts(query: string, category?: string) {
  const filter = category ? Tag("category").eq(category) : undefined;
  const results = await vectorStore.similaritySearchVectorWithScore(
    await embeddings.embedQuery(query),
    5,
    filter
  );
  return results;
}
```
- [Tavily](/javascript/langchain-tavily)
  # `@langchain/tavily`

[![NPM - Version](https://img.shields.io/npm/v/@langchain/tavily?style=flat-square&label=%20)](https://www.npmjs.com/package/@langchain/tavily)

This package provides integrations for the [Tavily](https://tavily.com/) search engine within LangChain.js. Tavily is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.

This package exposes six tools:

- `TavilySearch`: Performs a search optimized for LLMs and RAG.
- `TavilyExtract`: Extracts raw content from a list of URLs.
- `TavilyCrawl`: Initiates a structured web crawl starting from a specified base URL.
- `TavilyMap`: Generates a site map starting from a specified base URL.
- `TavilyResearch`: Creates structured research tasks and optionally streams research output.
- `TavilyGetResearch`: Retrieves research results by `request_id` for previously created research tasks.

## Installation

```bash
npm install @langchain/tavily
```

## Setup

You need a Tavily API key to use these tools. You can get one [here](https://app.tavily.com). Set it as an environment variable:

```typescript
process.env.TAVILY_API_KEY = "YOUR_API_KEY";
```

## Usage

### TavilySearch

```typescript
import { TavilySearch } from "@langchain/tavily";

const tool = new TavilySearch({
  maxResults: 5,
  // You can set other constructor parameters here, e.g.:
  // topic: "general",
  // includeAnswer: false,
  // includeRawContent: false,
  // includeImages: false,
  // searchDepth: "basic",
});

// Invoke with a query
const results = await tool.invoke({
  query: "what is the current weather in SF?",
});

console.log(results);
```

### TavilyExtract

```typescript
import { TavilyExtract } from "@langchain/tavily";

const tool = new TavilyExtract({
  // Constructor parameters:
  // extractDepth: "basic",
  // includeImages: false,
});

// Invoke with a list of URLs
const results = await tool.invoke({
  urls: ["https://en.wikipedia.org/wiki/Lionel_Messi"],
});

console.log(results);
```

### TavilyResearch

```typescript
import { TavilyResearch } from "@langchain/tavily";

const tool = new TavilyResearch({
  // Optional constructor defaults:
  // model: "auto",
  // stream: false,
  // citationFormat: "numbered",
  // apiBaseUrl: "https://api.tavily.com",
});

// Invoke with a research task
const result = await tool.invoke({
  input: "Research the latest developments in AI",
  model: "mini",
  citationFormat: "apa",
});

console.log(result);
```

### TavilyGetResearch

```typescript
import { TavilyGetResearch } from "@langchain/tavily";

const tool = new TavilyGetResearch({
  // Optional constructor parameters:
  // apiBaseUrl: "https://api.tavily.com",
});

// Invoke with a request_id returned from TavilyResearch
const result = await tool.invoke({
  requestId: "your-request-id-here",
});

console.log(result);
```

### TavilyCrawl

```typescript
import { TavilyCrawl } from "@langchain/tavily";

const tool = new TavilyCrawl({
  // Constructor parameters:
  // extractDepth: "basic",
  // format: "markdown",
  // maxDepth: 3,
  // maxBreadth: 50,
  // limit: 100,
  // includeImages: false,
  // allowExternal: false,
});

// Invoke with a URL and optional parameters
const results = await tool.invoke({
  url: "https://docs.tavily.com/",
  instructions: "Find information about the LangChain integration.",
});

console.log(results);
```

### TavilyMap

```typescript
import { TavilyMap } from "@langchain/tavily";

const tool = new TavilyMap({
  // Constructor parameters:
  // maxDepth: 3,
  // maxBreadth: 50,
  // limit: 100,
  // allowExternal: false,
});

// Invoke with a URL and optional parameters
const results = await tool.invoke({
  url: "https://docs.tavily.com/",
});

console.log(results);
```

## Documentation

For more detailed information, check out the documentation pages:

- **TavilySearch**: [http://js.langchain.com/docs/integrations/tools/tavily_search/](http://js.langchain.com/docs/integrations/tools/tavily_search/)
- **TavilyExtract**: [http://js.langchain.com/docs/integrations/tools/tavily_extract/](http://js.langchain.com/docs/integrations/tools/tavily_extract/)
- **TavilyCrawl**: [http://js.langchain.com/docs/integrations/tools/tavily_crawl/](http://js.langchain.com/docs/integrations/tools/tavily_crawl/)
- **TavilyMap**: [http://js.langchain.com/docs/integrations/tools/tavily_map/](http://js.langchain.com/docs/integrations/tools/tavily_map/)

## License

This package is licensed under the MIT License. See the [LICENSE](https://github.com/langchain-ai/langchainjs/tree/2418c6f18771460d5a4da4e6c1e44e4adb5e1705/libs/providers/langchain-tavily/LICENSE) file for details.
- [Turbopuffer](/javascript/langchain-turbopuffer)
  # @langchain/turbopuffer

This package contains the LangChain.js integration for the [turbopuffer](https://turbopuffer.com/) vector database.

## Installation

```bash
npm install @langchain/turbopuffer @turbopuffer/turbopuffer
```

## Usage

```typescript
import { Turbopuffer } from "@turbopuffer/turbopuffer";
import { TurbopufferVectorStore } from "@langchain/turbopuffer";
import { OpenAIEmbeddings } from "@langchain/openai";

const client = new Turbopuffer({ apiKey: process.env.TURBOPUFFER_API_KEY });

const vectorStore = new TurbopufferVectorStore(new OpenAIEmbeddings(), {
  namespace: client.namespace("my-namespace"),
});

const ids = await vectorStore.addDocuments([
  { pageContent: "Hello world", metadata: { source: "greeting" } },
]);

const results = await vectorStore.similaritySearch("hello", 1);

await vectorStore.delete({ ids });
```

### Configuration

| Option           | Description                                   | Default             |
| ---------------- | --------------------------------------------- | ------------------- |
| `namespace`      | A configured turbopuffer `Namespace` instance | Required            |
| `distanceMetric` | `"cosine_distance"` or `"euclidean_squared"`  | `"cosine_distance"` |
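
For reference, the two metrics behave differently: cosine distance compares vector direction, while squared Euclidean compares absolute position. A standalone sketch of the math, not turbopuffer's implementation:

```typescript
// Cosine distance: 1 - (a·b) / (|a| · |b|); 0 means identical direction.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Squared Euclidean distance: sum of squared coordinate differences.
function euclideanSquared(a: number[], b: number[]): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
  return sum;
}
```

Cosine distance is the usual choice for normalized text embeddings, where vector magnitude carries little meaning.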

### Add Options

| Option      | Description              | Default              |
| ----------- | ------------------------ | -------------------- |
| `ids`       | Custom IDs for documents | Auto-generated UUIDs |
| `batchSize` | Batch size for upserts   | `3000`               |

### Filtering

```typescript
const results = await vectorStore.similaritySearch("query", 10, [
  "category",
  "Eq",
  "books",
]);
```

## Development

```bash
pnpm install
pnpm build
pnpm test
pnpm test:int
pnpm lint && pnpm format
```
- [Weaviate](/javascript/langchain-weaviate)
  # @langchain/weaviate

This package contains the LangChain.js integrations for Weaviate with the `weaviate-client` SDK.

## Installation

```bash npm2yarn
npm install @langchain/weaviate @langchain/core
```

## Vectorstore

This package adds support for Weaviate vectorstore.

To follow along with this example, install the `@langchain/openai` package for its embeddings model.

```bash
npm install @langchain/openai
```

Now set the necessary environment variables (or pass them in via the client object):

```bash
export WEAVIATE_SCHEME=
export WEAVIATE_HOST=
export WEAVIATE_API_KEY=
```

```typescript
import weaviate, { ApiKey } from "weaviate-client";
import { WeaviateStore } from "@langchain/weaviate";
import { OpenAIEmbeddings } from "@langchain/openai";

// Weaviate SDK has a TypeScript issue so we must do this.
const client = (weaviate as any).client({
  scheme: process.env.WEAVIATE_SCHEME || "https",
  host: process.env.WEAVIATE_HOST || "localhost",
  apiKey: new ApiKey(process.env.WEAVIATE_API_KEY || "default"),
});

// Create a store and fill it with some texts + metadata
await WeaviateStore.fromTexts(
  ["hello world", "hi there", "how are you", "bye now"],
  [{ foo: "bar" }, { foo: "baz" }, { foo: "qux" }, { foo: "bar" }],
  new OpenAIEmbeddings(),
  {
    client,
    indexName: "Test",
    textKey: "text",
    metadataKeys: ["foo"],
  }
);
```

## Development

To develop the `@langchain/weaviate` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/weaviate
```

### Run tests

Test files should live within a `tests/` directory in the `src/` folder. Unit tests should end in `.test.ts`, and integration tests should
end in `.int.test.ts`:

```bash
$ pnpm test
$ pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [xAI](/javascript/langchain-xai)
  # @langchain/xai

This package contains the LangChain.js integrations for xAI.

## Installation

```bash npm2yarn
npm install @langchain/xai @langchain/core
```

## Chat models

This package adds support for xAI chat model inference.

Set the necessary environment variable (or pass it in via the constructor):

```bash
export XAI_API_KEY=
```

```typescript
import { ChatXAI } from "@langchain/xai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatXAI({
  apiKey: process.env.XAI_API_KEY, // Default value.
});

const message = new HumanMessage("What color is the sky?");

const res = await model.invoke([message]);
```

## Server Tool Calling (Live Search)

xAI supports server-side tools that are executed by the API rather than requiring client-side execution. The `live_search` tool enables the model to search the web for real-time information.

### Using the built-in live_search tool

```typescript
import { ChatXAI, tools } from "@langchain/xai";

const model = new ChatXAI({
  model: "grok-3-fast",
});

// Create the built-in live_search tool with optional parameters
const searchTool = tools.xaiLiveSearch({
  maxSearchResults: 5,
  returnCitations: true,
});

// Bind the live_search tool to the model
const modelWithSearch = model.bindTools([searchTool]);

// The model will search the web for real-time information
const result = await modelWithSearch.invoke(
  "What happened in tech news today?"
);
console.log(result.content);
```

### Using searchParameters for more control

```typescript
import { ChatXAI } from "@langchain/xai";

const model = new ChatXAI({
  model: "grok-3-fast",
  searchParameters: {
    mode: "auto", // "auto" | "on" | "off"
    max_search_results: 5,
    from_date: "2024-01-01", // ISO date string
    return_citations: true,
  },
});

const result = await model.invoke("What are the latest AI developments?");
```

### Override search parameters per request

```typescript
const result = await model.invoke("Find recent news about SpaceX", {
  searchParameters: {
    mode: "on",
    max_search_results: 10,
    sources: [
      {
        type: "web",
        allowed_websites: ["spacex.com", "nasa.gov"],
      },
    ],
  },
});
```

### Configuring data sources with `sources`

You can configure which data sources Live Search should use via the `sources` field
in `searchParameters`. Each entry corresponds to one of the sources described in the
official xAI Live Search docs (`web`, `news`, `x`, `rss`).

```typescript
const result = await model.invoke(
  "What are the latest updates from xAI and related news?",
  {
    searchParameters: {
      mode: "on",
      sources: [
        {
          type: "web",
          // Only search on these websites
          allowed_websites: ["x.ai"],
        },
        {
          type: "news",
          // Exclude specific news websites
          excluded_websites: ["bbc.co.uk"],
        },
        {
          type: "x",
          // Focus on specific X handles
          included_x_handles: ["xai"],
        },
      ],
    },
  }
);
```

You can also use RSS feeds as a data source:

```typescript
const result = await model.invoke("Summarize the latest posts from this feed", {
  searchParameters: {
    mode: "on",
    sources: [
      {
        type: "rss",
        links: ["https://example.com/feed.rss"],
      },
    ],
  },
});
```

> Notes:
>
> - The `xaiLiveSearch` tool options use **camelCase** field names in TypeScript
>   (for example `maxSearchResults`, `fromDate`, `returnCitations`,
>   `allowedWebsites`, `excludedWebsites`, `includedXHandles`). These are
>   automatically mapped to the underlying JSON API's `search_parameters`
>   object, which uses `snake_case` field names as documented in the official
>   xAI Live Search docs.
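
The camelCase-to-snake_case mapping described in the note above can be sketched as a simple key transform. This is an illustrative helper only, not the package's actual implementation:

```typescript
// Illustrative sketch: convert a camelCase option name to the snake_case
// field name used by the underlying `search_parameters` JSON object.
// Not the package's real code.
function toSnakeCase(key: string): string {
  return key.replace(/[A-Z]/g, (c) => `_${c.toLowerCase()}`);
}

console.log(toSnakeCase("maxSearchResults")); // "max_search_results"
console.log(toSnakeCase("includedXHandles")); // "included_x_handles"
```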

### Combining live_search with custom tools

```typescript
import { ChatXAI, tools } from "@langchain/xai";

const model = new ChatXAI({ model: "grok-3-fast" });

const modelWithTools = model.bindTools([
  tools.xaiLiveSearch(), // Built-in server tool
  {
    // Custom function tool
    type: "function",
    function: {
      name: "get_stock_price",
      description: "Get the current stock price",
      parameters: {
        type: "object",
        properties: {
          symbol: { type: "string" },
        },
        required: ["symbol"],
      },
    },
  },
]);
```

## Development

To develop the `@langchain/xai` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/xai
```

### Run tests

Test files should live within a `tests/` directory in the `src/` folder. Unit tests should end in `.test.ts`, and integration tests should
end in `.int.test.ts`:

```bash
$ pnpm test
$ pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [Yandex](/javascript/langchain-yandex)
  # @langchain/yandex

This package contains the LangChain.js integrations for YandexGPT through their [Foundation Models REST API](https://cloud.yandex.ru/en/docs/yandexgpt/api-ref/v1/).

## Installation

```bash npm2yarn
npm install @langchain/yandex @langchain/core
```

## Setup your environment

First, you should [create a service account](https://cloud.yandex.com/en/docs/iam/operations/sa/create) with the `ai.languageModels.user` role.

Next, you have two authentication options:

- [IAM token](https://cloud.yandex.com/en/docs/iam/operations/iam-token/create-for-sa).
  You can specify the token via the `iam_token` constructor parameter or the `YC_IAM_TOKEN` environment variable.
- [API key](https://cloud.yandex.com/en/docs/iam/operations/api-key/create).
  You can specify the key via the `api_key` constructor parameter or the `YC_API_KEY` environment variable.
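
For example, using environment variables, authentication looks like one of the following (mirroring the variable names above; fill in your own credential):

```bash
# Choose one of the two authentication options:
export YC_IAM_TOKEN=   # IAM token, or
export YC_API_KEY=     # API key
```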

## Chat models and LLMs

This package contains the `ChatYandexGPT` and `YandexGPT` classes for working with the YandexGPT series of models.

To specify the model, use the `model_uri` parameter; see [the documentation](https://cloud.yandex.com/en/docs/yandexgpt/concepts/models#yandexgpt-generation) for more details.

By default, the latest version of `yandexgpt-lite` is used from the folder specified in the `folder_id` parameter or the `YC_FOLDER_ID` environment variable.

### Examples

```typescript
import { ChatYandexGPT } from "@langchain/yandex";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const chat = new ChatYandexGPT();
const response = await chat.invoke([
  new SystemMessage(
    "You are a helpful assistant that translates English to French."
  ),
  new HumanMessage("I love programming."),
]);
```

```typescript
import { YandexGPT } from "@langchain/yandex";
const model = new YandexGPT();
const res = await model.invoke([`Translate "I love programming" into French.`]);
```

## Embeddings

This package also adds support for YandexGPT embeddings models.

To specify the model, use the `model_uri` parameter; see [the documentation](https://cloud.yandex.com/en/docs/yandexgpt/concepts/models#yandexgpt-embeddings) for more details.

By default, the latest version of the `text-search-query` embeddings model is used from the folder specified in the `folder_id` parameter or the `YC_FOLDER_ID` environment variable.

### Example

```typescript
import { YandexGPTEmbeddings } from "@langchain/yandex";

const model = new YandexGPTEmbeddings({});

/* Embed queries */
const res = await model.embedQuery("This is a test document.");
/* Embed documents */
const documentRes = await model.embedDocuments(["This is a test document."]);
```

## Development

To develop the `@langchain/yandex` package, you'll need to follow these instructions:

### Install dependencies

```bash
pnpm install
```

### Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter @langchain/yandex
```

### Run tests

Test files should live within a `tests/` directory in the `src/` folder. Unit tests should end in `.test.ts`, and integration tests should
end in `.int.test.ts`:

```bash
$ pnpm test:int
```

### Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

### Adding new entrypoints

If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `exports` field in the `package.json` file and run `pnpm build` to generate the new entrypoint.
- [LangSmith](/javascript/langsmith)
  # LangSmith Client SDK

![NPM Version](https://img.shields.io/npm/v/langsmith?logo=npm)
[![JS Downloads](https://img.shields.io/npm/dm/langsmith)](https://www.npmjs.com/package/langsmith)

This package contains the TypeScript client for interacting with the [LangSmith platform](https://smith.langchain.com/).

To install:

```bash
yarn add langsmith
```

LangSmith helps you and your team develop and evaluate language models and intelligent agents. It is compatible with any LLM application and provides seamless integration with [LangChain](https://github.com/hwchase17/langchainjs), a widely used open-source framework for building language model applications.

> **Note**: You can enjoy the benefits of LangSmith without using the LangChain open-source packages! To get started with your own proprietary framework, set up your account and then skip to [Logging Traces Outside LangChain](#logging-traces-outside-langchain).

> **Cookbook:** For tutorials on how to get more value out of LangSmith, check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook/tree/main) repo.

A typical workflow looks like:

1. Set up an account with LangSmith.
2. Log traces.
3. Debug, create datasets, and evaluate runs.

We'll walk through these steps in more detail below.

## 1. Connect to LangSmith

Sign up for [LangSmith](https://smith.langchain.com/) using your GitHub or Discord account, or an email address and password. If you sign up with an email, make sure to verify your email address before logging in.

Then, create a unique API key on the [Settings Page](https://smith.langchain.com/settings).

> [!NOTE]
> Save the API Key in a secure location. It will not be shown again.

## 2. Log Traces

You can log traces natively in your LangChain application or using a LangSmith RunTree.

### Logging Traces with LangChain

LangSmith seamlessly integrates with the JavaScript LangChain library to record traces from your LLM applications.

```bash
yarn add langchain
```

1. **Copy the environment variables from the Settings Page and add them to your application.**

Tracing can be activated by setting the following environment variables or by manually specifying the LangChainTracer.

```typescript
process.env.LANGSMITH_TRACING = "true";
process.env.LANGSMITH_ENDPOINT = "https://api.smith.langchain.com";
// process.env.LANGSMITH_ENDPOINT = "https://eu.api.smith.langchain.com"; // If signed up in the EU region
process.env.LANGSMITH_API_KEY = "<YOUR-LANGSMITH-API-KEY>";
// process.env.LANGSMITH_PROJECT = "My Project Name"; // Optional: "default" is used if not set
// process.env.LANGSMITH_WORKSPACE_ID = "<YOUR-WORKSPACE-ID>"; // Required for org-scoped API keys
```

> **Tip:** Projects are groups of traces. All runs are logged to a project. If not specified, the project is set to `default`.

2. **Run an Agent, Chain, or Language Model in LangChain**

If the environment variables are correctly set, your application will automatically connect to the LangSmith platform.

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

const chat = new ChatOpenAI({ temperature: 0 });
const response = await chat.predict(
  "Translate this sentence from English to French. I love programming."
);
console.log(response);
```

### Logging Traces Outside LangChain

You can still use the LangSmith development platform without depending on any
LangChain code. You can connect either by setting the appropriate environment variables,
or by directly specifying the connection information in the RunTree.

1. **Copy the environment variables from the Settings Page and add them to your application.**

```shell
export LANGSMITH_TRACING="true";
export LANGSMITH_API_KEY=<YOUR-LANGSMITH-API-KEY>
# export LANGSMITH_PROJECT="My Project Name" #  Optional: "default" is used if not set
# export LANGSMITH_ENDPOINT=https://api.smith.langchain.com # or your own server
```

## Integrations

LangSmith's `traceable` wrapper function makes it easy to trace any function or LLM call in your favorite framework. Below are some examples.

### OpenAI SDK

<!-- markdown-link-check-disable -->

The easiest way to trace calls from the [OpenAI SDK](https://platform.openai.com/docs/api-reference) with LangSmith
is using the `wrapOpenAI` wrapper function available in LangSmith 0.1.3 and up.

To use it, you first need to set your LangSmith API key:

```shell
export LANGSMITH_TRACING="true";
export LANGSMITH_API_KEY=<your-api-key>
```

Next, you will need to install the LangSmith SDK and the OpenAI SDK:

```shell
npm install langsmith openai
```

After that, initialize your OpenAI client and wrap it with the `wrapOpenAI` method to enable tracing for the completions and chat completions methods:

```ts
import { OpenAI } from "openai";
import { wrapOpenAI } from "langsmith/wrappers";

const openai = wrapOpenAI(new OpenAI());

await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ content: "Hi there!", role: "user" }],
});
```

Alternatively, you can use the `traceable` function to wrap the client methods you want to use:

```ts
import { traceable } from "langsmith/traceable";

const openai = new OpenAI();

const createCompletion = traceable(
  openai.chat.completions.create.bind(openai.chat.completions),
  { name: "OpenAI Chat Completion", run_type: "llm" }
);

await createCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ content: "Hi there!", role: "user" }],
});
```

Note the use of `.bind` to preserve the function's context. The `run_type` field in the
extra config object marks the function as an LLM call, and enables token usage tracking
for OpenAI.
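
The need for `.bind` here is ordinary JavaScript behavior rather than anything LangSmith-specific: detaching a method from its object loses `this`. A minimal sketch with a hypothetical class:

```typescript
class Counter {
  private count = 0;
  increment(): number {
    return ++this.count; // relies on `this` pointing at the instance
  }
}

const counter = new Counter();
// Calling a bare, detached `counter.increment` would lose `this` and throw;
// binding it preserves the instance context.
const bound = counter.increment.bind(counter);

console.log(bound()); // 1
console.log(bound()); // 2
```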

Oftentimes, you use the OpenAI client inside of other functions or as part of a longer
sequence. You can automatically get nested traces by using this wrapped method
within other functions wrapped with `traceable`.

```ts
const nestedTrace = traceable(async (text: string) => {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ content: text, role: "user" }],
  });
  return completion;
});

await nestedTrace("Why is the sky blue?");
```

```
{
  "id": "chatcmpl-8sPToJQLLVepJvyeTfzZMOMVIKjMo",
  "object": "chat.completion",
  "created": 1707978348,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The sky appears blue because of a phenomenon known as Rayleigh scattering. The Earth's atmosphere is composed of tiny molecules, such as nitrogen and oxygen, which are much smaller than the wavelength of visible light. When sunlight interacts with these molecules, it gets scattered in all directions. However, shorter wavelengths of light (blue and violet) are scattered more compared to longer wavelengths (red, orange, and yellow). \n\nAs a result, when sunlight passes through the Earth's atmosphere, the blue and violet wavelengths are scattered in all directions, making the sky appear blue. This scattering of shorter wavelengths is also responsible for the vibrant colors observed during sunrise and sunset, when the sunlight has to pass through a thicker portion of the atmosphere, causing the longer wavelengths to dominate the scattered light."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 154,
    "total_tokens": 167
  },
  "system_fingerprint": null
}
```

:::tip
[Click here](https://smith.langchain.com/public/4af46ef6-b065-46dc-9cf0-70f1274edb01/r) to see an example LangSmith trace of the above.
:::

## Next.js

You can use the `traceable` wrapper function in Next.js apps to wrap arbitrary functions much like in the example above.

One neat trick you can use for Next.js and other similar server frameworks is to wrap the entire exported handler for a route
to group traces for any sub-runs. Here's an example:

```ts
import { NextRequest, NextResponse } from "next/server";

import { OpenAI } from "openai";
import { traceable } from "langsmith/traceable";
import { wrapOpenAI } from "langsmith/wrappers";

export const runtime = "edge";

const handler = traceable(
  async function () {
    const openai = wrapOpenAI(new OpenAI());

    const completion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [{ content: "Why is the sky blue?", role: "user" }],
    });

    const response1 = completion.choices[0].message.content;

    const completion2 = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [
        { content: "Why is the sky blue?", role: "user" },
        { content: response1, role: "assistant" },
        { content: "Cool thank you!", role: "user" },
      ],
    });

    const response2 = completion2.choices[0].message.content;

    return {
      text: response2,
    };
  },
  {
    name: "Simple Next.js handler",
  }
);

export async function POST(req: NextRequest) {
  const result = await handler();
  return NextResponse.json(result);
}
```

The two OpenAI calls within the handler will be traced with appropriate inputs, outputs,
and token usage information.

:::tip
[Click here](https://smith.langchain.com/public/faaf26ad-8c59-4622-bcfe-b7d896733ca6/r) to see an example LangSmith trace of the above.
:::

## Vercel AI SDK

The [Vercel AI SDK](https://sdk.vercel.ai/docs) contains integrations with a variety of model providers.
Here's an example of how you can trace outputs in a Next.js handler:

```ts
import { traceable } from "langsmith/traceable";
import { OpenAIStream, StreamingTextResponse } from "ai";

// Note: There are no types for the Mistral API client yet.
import MistralClient from "@mistralai/mistralai";

const client = new MistralClient(process.env.MISTRAL_API_KEY || "");

export async function POST(req: Request) {
  // Extract the `messages` from the body of the request
  const { messages } = await req.json();

  const mistralChatStream = traceable(client.chatStream.bind(client), {
    name: "Mistral Stream",
    run_type: "llm",
  });

  const response = await mistralChatStream({
    model: "mistral-tiny",
    maxTokens: 1000,
    messages,
  });

  // Convert the response into a friendly text-stream. The Mistral client responses are
  // compatible with the Vercel AI SDK OpenAIStream adapter.
  const stream = OpenAIStream(response as any);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
```

See the [AI SDK docs](https://sdk.vercel.ai/docs) for more examples.

## Arbitrary SDKs

You can use the generic `wrapSDK` method to add tracing for arbitrary SDKs.

Note that this will trace ALL methods in the SDK, not just chat completion endpoints.
If the SDK you are wrapping has other methods, we recommend using it only for LLM calls.

Here's an example using the Anthropic SDK:

```ts
import { wrapSDK } from "langsmith/wrappers";
import { Anthropic } from "@anthropic-ai/sdk";

const originalSDK = new Anthropic();
const sdkWithTracing = wrapSDK(originalSDK);

const response = await sdkWithTracing.messages.create({
  messages: [
    {
      role: "user",
      content: `What is 1 + 1? Respond only with "2" and nothing else.`,
    },
  ],
  model: "claude-3-sonnet-20240229",
  max_tokens: 1024,
});
```

:::tip
[Click here](https://smith.langchain.com/public/0e7248af-bbed-47cf-be9f-5967fea1dec1/r) to see an example LangSmith trace of the above.
:::

#### Alternatives: **Log traces using a RunTree.**

A RunTree tracks a single run of your application. Each RunTree object is required to have a `name` and a `run_type`. These and other important attributes are as follows:

- `name`: `string` - used to identify the component's purpose
- `run_type`: `string` - currently one of `"llm"`, `"chain"`, or `"tool"`; more options will be added in the future
- `inputs`: `Record<string, any>` - the inputs to the component
- `outputs`: `Optional<Record<string, any>>` - the (optional) returned values from the component
- `error`: `Optional<string>` - any error messages that may have arisen during the call
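
The attribute list above can be summarized as a type. This is an illustrative sketch only; the library's own type definitions may differ:

```typescript
// Illustrative sketch of the run attributes described above;
// not the library's actual type definitions.
interface RunSketch {
  name: string; // identifies the component's purpose
  run_type: "llm" | "chain" | "tool";
  inputs: Record<string, any>;
  outputs?: Record<string, any>; // optional returned values
  error?: string; // any error message raised during the call
}

const exampleRun: RunSketch = {
  name: "My Chat Bot",
  run_type: "chain",
  inputs: { text: "Summarize this morning's meetings." },
};
```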

```typescript
import { RunTree, RunTreeConfig } from "langsmith";

const parentRunConfig: RunTreeConfig = {
  name: "My Chat Bot",
  run_type: "chain",
  inputs: {
    text: "Summarize this morning's meetings.",
  },
  serialized: {}, // Serialized representation of this chain
  // project_name: "Defaults to the LANGSMITH_PROJECT env var"
  // apiUrl: "Defaults to the LANGSMITH_ENDPOINT env var"
  // apiKey: "Defaults to the LANGSMITH_API_KEY env var"
};

const parentRun = new RunTree(parentRunConfig);

await parentRun.postRun();

const childLlmRun = await parentRun.createChild({
  name: "My Proprietary LLM",
  run_type: "llm",
  inputs: {
    prompts: [
      "You are an AI Assistant. The time is XYZ." +
        " Summarize this morning's meetings.",
    ],
  },
});

await childLlmRun.postRun();

await childLlmRun.end({
  outputs: {
    generations: [
      "I should use the transcript_loader tool" +
        " to fetch meeting_transcripts from XYZ",
    ],
  },
});

await childLlmRun.patchRun();

const childToolRun = await parentRun.createChild({
  name: "transcript_loader",
  run_type: "tool",
  inputs: {
    date: "XYZ",
    content_type: "meeting_transcripts",
  },
});
await childToolRun.postRun();

await childToolRun.end({
  outputs: {
    meetings: ["Meeting1 notes.."],
  },
});

await childToolRun.patchRun();

const childChainRun = await parentRun.createChild({
  name: "Unreliable Component",
  run_type: "tool",
  inputs: {
    input: "Summarize these notes...",
  },
});

await childChainRun.postRun();

try {
  // .... the component does work
  throw new Error("Something went wrong");
} catch (e) {
  await childChainRun.end({
    error: `I errored again ${e.message}`,
  });
  await childChainRun.patchRun();
  throw e;
}

await childChainRun.patchRun();

await parentRun.end({
  outputs: {
    output: ["The meeting notes are as follows:..."],
  },
});

// Update the parent run with its final state
await parentRun.patchRun();
```

## Evaluation

#### Create a Dataset from Existing Runs

Once your runs are stored in LangSmith, you can convert them into a dataset.
For this example, we will do so using the Client, but you can also do this using
the web interface, as explained in the [LangSmith docs](https://docs.smith.langchain.com/docs/).

```typescript
import { Client } from "langsmith/client";
const client = new Client({
  // apiUrl: "https://api.langchain.com", // Defaults to the LANGSMITH_ENDPOINT env var
  // apiKey: "my_api_key", // Defaults to the LANGSMITH_API_KEY env var
  // callerOptions: {
  //   maxConcurrency: Infinity, // Maximum number of concurrent requests to make
  //   maxRetries: 6, // Maximum number of retries to make
  // },
});
const datasetName = "Example Dataset";
// We will only use examples from the top level AgentExecutor run here,
// and exclude runs that errored.
const runs = await client.listRuns({
  projectName: "my_project",
  executionOrder: 1,
  error: false,
});

const dataset = await client.createDataset(datasetName, {
  description: "An example dataset",
});

for (const run of runs) {
  await client.createExample(run.inputs, run.outputs ?? {}, {
    datasetId: dataset.id,
  });
}
```

## Additional Documentation

To learn more about the LangSmith platform, check out the [docs](https://docs.smith.langchain.com/docs/).
