Looking for the JS/TS version? Check out LangChain.js.
To help you ship LangChain apps to production faster, check out LangSmith. LangSmith is a unified developer platform for building, testing, and monitoring LLM applications.
pip install langchain-core
LangChain Core contains the base abstractions that power the LangChain ecosystem.
These abstractions are designed to be as modular and simple as possible.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.
The LangChain ecosystem is built on top of langchain-core.
For full documentation, see the API reference. For conceptual guides, tutorials, and examples on using LangChain, see the LangChain Docs. You can also chat with the docs using Chat LangChain.
See our Releases and Versioning policies.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the Contributing Guide.
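The core idea is that langchain-core defines small abstract interfaces and any provider implements them. A minimal plain-Python sketch of that pattern, modeled loosely on the key-value store abstraction listed below (the class and method names here are illustrative, not the real langchain-core signatures):

```python
from abc import ABC, abstractmethod


class BaseStore(ABC):
    """Abstract key-value store interface a provider would implement."""

    @abstractmethod
    def mget(self, keys):
        """Return the values for the given keys (None for missing keys)."""

    @abstractmethod
    def mset(self, pairs):
        """Store the given (key, value) pairs."""


class InMemoryStore(BaseStore):
    """Dict-backed implementation, analogous to an in-memory provider."""

    def __init__(self):
        self._data = {}

    def mget(self, keys):
        return [self._data.get(k) for k in keys]

    def mset(self, pairs):
        self._data.update(pairs)


store = InMemoryStore()
store.mset([("a", 1), ("b", 2)])
print(store.mget(["a", "missing"]))  # [1, None]
```

Because callers depend only on the abstract interface, swapping the in-memory implementation for any other provider requires no changes to the rest of the code.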
Abstract interface for a key-value store.
In-memory implementation of the BaseStore using a dictionary.
In-memory store for any type of data.
In-memory store for bytes.
Raised when a key is invalid, e.g., contains invalid characters.
LangSmith parameters for tracing.
Abstract base class for a document retrieval system.
Chat Session.
Base abstract class for inputs to any language model.
String prompt value.
Chat prompt value.
Image URL for multimodal model inputs (OpenAI format).
Image prompt value.
Chat prompt value which explicitly lists out the message types it accepts.
Interface for a caching layer for LLMs and Chat models.
Cache that stores results in memory.
Represents a request to execute an action by an agent.
Representation of an action to be executed by an agent.
Result of running an AgentAction.
Final return value of an ActionAgent.
Interface for cross encoder models.
Defines an interface for IR translation using a visitor pattern.
Base class for all expressions.
Enumerator of the operations.
Enumerator of the comparison operators.
Filtering expression.
Comparison to a value.
Logical operation over other directives.
Structured query.
Abstract base class for storing chat message history.
In-memory implementation of chat message history.
Base class for rate limiters.
An in-memory rate limiter based on a token bucket algorithm.
Base class for chat loaders.
General LangChain exception.
Base class for exceptions in tracers module.
Exception that output parsers should raise to signify a parsing error.
Exception raised when input exceeds the model's context limit.
Error codes.
Raised when args_schema is missing or has an incorrect type annotation.
Exception thrown when a tool execution error occurs.
Base class for all LangChain tools.
Annotation for tool arguments that are injected at runtime.
Annotation for injecting the tool call ID.
Base class for toolkits containing related tools.
Tool that can operate on any number of inputs.
Tool that takes in function or coroutine directly.
Input to the retriever.
Abstract base class for parsing the outputs of a model.
Base class to parse the output of an LLM call.
Base class to parse the output of an LLM call.
Extract text content from model outputs as a string.
Parse tools from OpenAI response.
Parse tools from OpenAI response.
Parse tools from OpenAI response.
Parse an output using a Pydantic model.
Parse an output that is one of sets of values.
Parse an output as a JSON object.
Parse an output as an element of a JSON object.
Parse an output as a Pydantic object.
Parse an output as an attribute of a Pydantic object.
Base class for an output parser that can handle streaming input.
Base class for an output parser that can handle streaming input.
Parse an output using XML format.
Parse the output of a model to a list.
Parse the output of a model to a comma-separated list.
Parse a numbered list.
Parse a Markdown list.
Parse the output of an LLM call to a JSON object.
Reviver for JSON objects.
Base class for serialized objects.
Serialized constructor.
Serialized secret.
Serialized not implemented.
Serializable base class.
Dummy lock that provides the proper interface but no protection.
Create n separate asynchronous iterators over iterable.
Representation of a callable function to send to an LLM.
Representation of a callable function to the OpenAI API.
Custom exception for Chevron errors.
A string formatter that enforces keyword-only argument substitution.
Dummy lock that provides the proper interface but no protection.
Create n separate asynchronous iterators over iterable.
Async context manager to wrap an AsyncGenerator that has an aclose() method.
Raised when a request is blocked by SSRF protection policy.
httpx async transport that validates DNS results against an SSRF policy.
httpx sync transport that validates DNS results against an SSRF policy.
Immutable policy controlling which URLs/IPs are considered safe.
Runnable to passthrough inputs unchanged or with additional keys.
Runnable that assigns key-value pairs to dict[str, Any] inputs.
Runnable that picks keys from dict[str, Any] inputs.
Empty dict type.
Configuration for a Runnable.
ThreadPoolExecutor that copies the context to the child thread.
A unit of work that can be invoked, batched, streamed, transformed and composed.
Runnable that can be serialized to JSON.
Sequence of Runnable objects, where the output of one is the input of the next.
Runnable that runs a mapping of Runnables in parallel.
Runnable that runs a generator function.
RunnableLambda converts a python callable into a Runnable.
Base class for RunnableEach.
Runnable that calls another Runnable on each element of the input sequence.
Runnable that delegates calls to another Runnable with a set of **kwargs.
Wrap a Runnable with additional functionality.
Check if a name is a local dict.
Check if the first argument of a function is a dict.
Get nonlocal variables accessed.
Get the nonlocal variables accessed by a function.
Get the source code of a lambda function.
Dictionary that can be added to another dictionary.
Protocol for objects that support addition.
Field that can be configured by the user.
Field that can be configured by the user with a default value.
Field that can be configured by the user with multiple default values.
Field that can be configured by the user. It is a specification of a field.
VertexViewer class.
Class for drawing in ASCII.
Serializable Runnable that can be dynamically configured.
Runnable that can be dynamically configured.
String enum.
Runnable that can be dynamically configured.
Runnable that selects which branch to run based on a condition.
Parameters for tenacity.wait_exponential_jitter.
Retry a Runnable if it fails.
Helper class to draw a state graph into a PNG file.
Runnable that manages chat message history for another Runnable.
Protocol for objects that can be converted to a string.
Dictionary of labels for nodes and edges in a graph.
Edge in a graph.
Node in a graph.
Branch in a graph.
Enum for different curve styles supported by Mermaid.
Schema for Hexadecimal color codes for different node types.
Enum for different draw methods supported by Mermaid.
Graph of nodes and edges.
Router input.
Runnable that routes to a set of Runnables based on Input['key'].
Data associated with a streaming event.
Streaming event.
A standard stream event that follows LangChain convention for event data.
Custom stream event created by the user.
Runnable that can fallback to other Runnable objects if it fails.
Mixin for Retriever callbacks.
Mixin for LLM callbacks.
Mixin for chain callbacks.
Mixin for tool callbacks.
Mixin for callback manager.
Mixin for run manager.
Base callback handler.
Base async callback handler.
Base callback manager.
Callback handler that writes to a file.
Base class for run manager (a bound callback manager).
Synchronous run manager.
Synchronous parent run manager.
Async run manager.
Async parent run manager.
Callback manager for LLM run.
Async callback manager for LLM run.
Callback manager for chain run.
Async callback manager for chain run.
Callback manager for tool run.
Async callback manager for tool run.
Callback manager for retriever run.
Async callback manager for retriever run.
Callback manager for LangChain.
Callback manager for the chain group.
Async callback manager that handles callbacks from LangChain.
Async callback manager for the chain group.
Callback handler that prints to std out.
Callback handler for streaming.
Callback handler that tracks AIMessage.usage_metadata.
Base LLM abstract interface.
Simple interface for implementing a custom LLM.
LangSmith parameters for tracing.
Abstract base class for interfacing with language models.
Fake chat model for testing purposes.
Fake error for testing purposes.
Fake chat model for testing purposes.
Fake Chat Model wrapper for testing purposes.
Generic fake chat model that can be used to test the chat model interface.
Generic fake chat model that can be used to test the chat model interface.
Base class for chat models.
Simplified implementation for a chat model to inherit from.
Model profile.
Fake LLM for testing purposes.
Fake error for testing purposes.
Fake streaming list LLM for testing purposes.
Base interface for tracers.
Async base interface for tracers.
Information about a run.
Tracer that calls listeners on run start, end, and error.
Async tracer that calls listeners on run start, end, and error.
Tracer that runs a run evaluator whenever a run is persisted.
Tracer that calls a function with a single str parameter.
Tracer that prints to the console.
A single entry in the run log.
State of the run.
Patch to the run log.
Run log.
Tracer that streams run logs to a stream.
Tracer that collects all nested runs in a list.
Implementation of the SharedTracer that POSTs to the LangChain endpoint.
Interface for embedding models.
Fake embedding model for unit testing purposes.
Deterministic fake embedding model for unit testing purposes.
Breakdown of input token counts.
Breakdown of output token counts.
Usage metadata for a message, such as token counts.
Message from an AI.
Message chunk from an AI (yielded when streaming).
String-like object that supports both property and method access patterns.
Base abstract message class.
Message chunk, which can be concatenated with other Message chunks.
Annotation for citing data from a document.
Provider-specific annotation format.
Text output from an LLM.
Represents an AI's request to call a tool.
A chunk of a tool call (yielded when streaming).
Allowance for errors made by the LLM.
Tool call that is executed server-side.
A chunk of a server-side tool call (yielded when streaming).
Result of a server-side tool call.
Reasoning output from an LLM.
Image data.
Video data.
Audio data.
Plaintext data (e.g., from a .txt or .md document).
File data that doesn't fit into other multimodal block types.
Provider-specific content data.
Message that can be assigned an arbitrary speaker (i.e. role).
Chat Message chunk.
Message from the user.
Human Message chunk.
Mixin for objects that tools can return directly.
Message for passing the result of executing a tool back to a model.
Tool Message chunk.
Represents an AI's request to call a tool.
A chunk of a tool call (yielded when streaming).
Message for passing the result of executing a tool back to a model.
Function Message chunk.
Message for priming AI behavior.
System Message chunk.
Message responsible for deleting other messages.
Interface for selecting examples to include in prompts.
Select examples based on length.
Select examples based on semantic similarity.
Select examples based on Max Marginal Relevance.
Abstract base class representing the interface for a record manager.
An in-memory record manager for testing purposes.
A generic response for upsert operations.
A generic response for delete operation.
A document retriever that supports indexing operations.
In-memory document index.
Raised when an indexing operation fails.
Return a detailed breakdown of the result of the indexing operation.
A class for issuing deprecation warnings for LangChain users.
A class for issuing deprecation warnings for LangChain users.
A class for issuing beta warnings for LangChain users.
Interface for a vector store.
Base Retriever class for VectorStore.
In-memory vector store implementation.
Interface for a document loader.
Abstract interface for blob parsers.
Abstract interface for blob loader implementations.
Load LangSmith Dataset examples as Document objects.
Base class for message prompt templates.
Base class for all prompt templates, returning a prompt.
Prompt template that assumes variable is already list of messages.
Base class for message prompt templates that use a string prompt template.
Chat message prompt template.
Human message prompt template.
AI message prompt template.
System message prompt template.
Base class for chat prompt templates.
Prompt template for chat models.
Structured prompt template for a language model.
Prompt template that contains few shot examples.
String prompt that exposes the format method, returning a prompt.
Image prompt template for a multimodal model.
Template represented by a dictionary.
Prompt template for a language model.
Prompt template that contains few shot examples.
Chat prompt template that supports few-shot examples.
Base class for content used in retrieval and data processing workflows.
Raw data abstraction for document loading and file processing.
Class for storing a piece of text and associated metadata.
Abstract base class for document transformation.
Base class for document compressors.
A container for results of an LLM call.
A single chat generation output.
ChatGeneration chunk.
Class that contains metadata for a single execution of a chain or model.
Used to represent the result of a chat model call with a single prompt.
A single text generation output.
GenerationChunk, which can be concatenated with other Generation chunks.
Format a standard data content block into the format expected by OpenAI.
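Many of the classes listed above are Runnables, and the central pattern is composing them with the `|` operator into a sequence. A minimal plain-Python sketch of that composition pattern (illustrative only, not the real langchain-core implementation):

```python
class Runnable:
    """Minimal stand-in for the Runnable protocol: invoke plus | composition."""

    def invoke(self, value):
        raise NotImplementedError

    def __or__(self, other):
        # `a | b` builds a sequence that feeds a's output into b.
        return RunnableSequence(self, other)


class RunnableLambda(Runnable):
    """Wraps a plain callable as a Runnable."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)


class RunnableSequence(Runnable):
    """Runs two Runnables in order, piping the first output into the second."""

    def __init__(self, first, second):
        self.first, self.second = first, second

    def invoke(self, value):
        return self.second.invoke(self.first.invoke(value))


chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
print(chain.invoke(3))  # (3 + 1) * 2 = 8
```

Because every composed object is itself a Runnable, sequences nest arbitrarily, which is what lets prompts, models, and output parsers chain together uniformly.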