LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
Documentation: To learn more about LangChain, check out the docs.
If you're looking for more advanced customization or agent orchestration, check out LangGraph.js, our framework for building agents and controllable workflows.
> [!NOTE]
> Looking for the Python version? Check out LangChain.
To help you ship LangChain apps to production faster, check out LangSmith. LangSmith is a unified developer platform for building, testing, and monitoring LLM applications.
You can use npm, pnpm, or yarn to install LangChain.js:

```bash
npm install -S langchain
# or
pnpm install langchain
# or
yarn add langchain
```
LangChain helps developers build applications powered by LLMs through a standard interface for agents, models, embeddings, vector stores, and more.
Use LangChain for real-time data augmentation (connecting models to your own data sources and tools) and for model interoperability (swapping models and providers without rewriting application code).
LangChain.js is written in TypeScript and runs in Node.js, serverless and edge runtimes (such as Cloudflare Workers and Vercel Edge Functions), Deno, and the browser.
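As a quick illustration of that standard interface, here is a minimal sketch. It assumes the `@langchain/openai` integration package is installed and an `OPENAI_API_KEY` is set in the environment; the model name is a placeholder you would swap for your provider of choice.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

// Every chat model integration exposes the same .invoke()/.stream() interface,
// so swapping providers is a constructor change, not an application rewrite.
const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const response = await model.invoke([
  new SystemMessage("You are a concise assistant."),
  new HumanMessage("In one sentence, what is LangChain?"),
]);

console.log(response.content);
```

Because every chat model implements the same Runnable interface, switching providers only changes the import and the constructor call.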
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see here.
Please report any security issues or concerns following our security guidelines.
The packages expose a set of core building blocks: message types, tool base classes, document and storage interfaces, and typed errors. A short sketch of defining a tool and composing messages follows this list.

Messages:

- Base class for all types of messages in a conversation.
- Represents a human message in a conversation.
- Represents a chunk of a human message, which can be concatenated with other human message chunks.
- Represents a system message in a conversation.
- Represents a chunk of a system message, which can be concatenated with other system message chunks.
- Represents a tool message in a conversation.
- Represents a chunk of a tool message, which can be concatenated with other tool message chunks.
- Represents a chunk of an AI message, which can be concatenated with other AI message chunks.
- Represents a chunk of a message, which can be concatenated with other message chunks.

Tools:

- Base class for tools that accept input of any shape defined by a Zod schema.
- Base class for tools that accept input as a string.
- A tool that can be created dynamically from a function, name, and description.
- Fake chat model for testing tool-calling functionality.

Documents and storage:

- Interface for interacting with a document.
- In-memory implementation of the BaseStore using a dictionary.
- File system implementation of the BaseStore.
- Class that provides a layer of abstraction over the base storage.

Errors and limits:

- Error thrown when a middleware fails.
- Raised when the model returns multiple structured output tool calls when only one is expected.
- Raised when structured output tool call arguments fail to parse according to the schema.
- Error thrown when PII is detected and the strategy is 'block'.
- Exception raised when tool call limits are exceeded.
- Raised when a tool call throws an error.

Structured output and context management:

- Information for tracking structured output tool metadata.
- Strategy for clearing tool outputs when token limits are exceeded.
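To make the message and tool building blocks above concrete, here is a minimal sketch. The `get_weather` tool, its schema, and its implementation are illustrative placeholders rather than part of the library; the example assumes `zod` and `@langchain/core` are installed.

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

// A structured tool: the Zod schema describes the input shape the model must supply.
const getWeather = tool(
  async ({ city }) => `It is sunny in ${city}.`, // placeholder implementation
  {
    name: "get_weather",
    description: "Look up the current weather for a city.",
    schema: z.object({ city: z.string().describe("City name") }),
  }
);

// Messages are plain classes; a conversation is just an array of them.
const conversation = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("What is the weather in Paris?"),
];

// Tools are runnables, so they can be invoked directly, which is handy in tests.
console.log(await getWeather.invoke({ city: "Paris" }));
console.log(conversation.map((m) => m.content));
```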
Helper functions cover model initialization and LangChain Hub access (a hedged sketch follows this list):

- Attempts to infer the model provider based on the given model name.
- Helper function to get a chat model class by its class name or model provider.
- Initialize a ChatModel from the model name and provider.
- Infer modelProvider from the id namespace to avoid className collisions.
- Pull a prompt from the hub.
- Push a prompt to the hub.
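Here is a hedged sketch of the model-initialization and hub helpers listed above. The import paths and option names follow recent releases and should be checked against the version you install; the hub prompt handle is a placeholder, and the example assumes the `@langchain/openai` integration is installed for the "openai" provider.

```typescript
import { initChatModel } from "langchain/chat_models/universal";
import * as hub from "langchain/hub";

// Initialize a chat model from a model name plus provider; the provider can often
// be inferred from the model name alone.
const model = await initChatModel("gpt-4o-mini", {
  modelProvider: "openai",
  temperature: 0,
});

// Pull a prompt from the LangChain Hub (placeholder prompt handle).
const prompt = await hub.pull("my-org/my-prompt");

// Compose the pulled prompt with the initialized model and run it.
const chain = prompt.pipe(model);
console.log(await chain.invoke({ question: "What is LangChain?" }));
```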
The main package also provides an agent factory and a catalog of middleware (a hedged agent sketch follows this list):

- Creates a production-ready ReAct (Reasoning + Acting) agent that combines language models with tools.
- Creates a middleware instance with automatic schema inference.
- Creates a prompt caching middleware for Anthropic models to optimize API usage.
- Middleware that automatically prunes tool results to manage context size.
- Summarization middleware that automatically summarizes conversation history when token limits are approached.
- Creates a Human-in-the-Loop (HITL) middleware for tool approval and oversight.
- Middleware for selecting tools using an LLM-based strategy.
- Creates a middleware to limit the number of model calls at both thread and run levels.
- Middleware that provides automatic model fallback on errors.
- Middleware that automatically retries failed model calls with configurable backoff.
- Middleware that tracks tool call counts and enforces limits.
- Middleware that automatically retries failed tool calls with configurable backoff.
- Middleware that emulates specified tools using an LLM instead of executing them.
- Creates a middleware that provides todo list management capabilities to agents.
- Dynamic system prompt middleware.
- Provider-specific middleware.
- Creates a middleware that detects and handles personally identifiable information (PII).
- Detects credit card numbers in content (validated with the Luhn algorithm).
- Detects email addresses in content.
- Detects IP addresses in content (validated).
- Detects MAC addresses in content.
- Detects URLs in content.
- Applies a strategy to content based on matches.
- Resolves a redaction rule to a concrete detector function.
- Creates a provider strategy for structured output using native JSON schema support.
- Creates a tool strategy for structured output using function calling.
- Default token counter that approximates based on character count.
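Here is a hedged sketch of the agent factory with a single tool. The `createAgent` options shape and the provider-prefixed model id follow recent 1.x releases and are assumptions to verify against your installed version; the `search_docs` tool is a placeholder.

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { createAgent } from "langchain";

// Placeholder tool for the sketch.
const searchDocs = tool(
  async ({ query }) => `No results for "${query}" (stub).`,
  {
    name: "search_docs",
    description: "Search internal documentation.",
    schema: z.object({ query: z.string() }),
  }
);

// createAgent wires a chat model and tools into a ReAct-style loop; middleware
// (summarization, retries, PII handling, ...) can be layered on via its options.
const agent = createAgent({
  model: "openai:gpt-4o-mini", // provider-prefixed model id (assumed convention)
  tools: [searchDocs],
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "How do I rotate an API key?" }],
});

console.log(result.messages.at(-1)?.content);
```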
Serialization helpers and an additional PII middleware round things out:

- Loads a LangChain module from a serialized text representation.
- Gets a unique name for the module, rather than falling back to parent class implementations.
- Creates a middleware that detects and redacts personally identifiable information (PII).