Looking for the JS/TS version? Check out LangChain.js.
To help you ship LangChain apps to production faster, check out LangSmith, a unified developer platform for building, testing, and monitoring LLM applications.
```bash
pip install langchain-core
```
LangChain Core contains the base abstractions that power the LangChain ecosystem.
These abstractions are designed to be as modular and simple as possible.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.
The LangChain ecosystem is built on top of langchain-core.
For full documentation, see the API reference. For conceptual guides, tutorials, and examples on using LangChain, see the LangChain Docs. You can also chat with the docs using Chat LangChain.
See our Releases and Versioning policies.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the Contributing Guide.
Interface for cross encoder models.
Base class for rate limiters.
An in-memory rate limiter based on a token bucket algorithm.
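The token bucket approach behind the in-memory rate limiter can be sketched in plain Python. This is a simplified illustration with assumed parameter names, not the langchain-core implementation:

```python
import time


class TokenBucketLimiter:
    """Simplified token-bucket rate limiter (illustration only)."""

    def __init__(self, requests_per_second: float, max_bucket_size: float = 1.0) -> None:
        self.rate = requests_per_second          # tokens added per second
        self.max_bucket_size = max_bucket_size   # burst capacity
        self.tokens = 0.0                        # current token count
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        # Add tokens proportional to elapsed time, capped at bucket size.
        now = time.monotonic()
        self.tokens = min(self.max_bucket_size,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now

    def acquire(self, blocking: bool = True) -> bool:
        """Consume one token, optionally blocking until one is available."""
        while True:
            self._refill()
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            if not blocking:
                return False
            time.sleep((1 - self.tokens) / self.rate)
```

Each request costs one token; idle time refills the bucket, so short bursts up to `max_bucket_size` are allowed while the long-run rate stays bounded.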
General LangChain exception.
Base class for exceptions in tracers module.
Exception that output parsers should raise to signify a parsing error.
Exception raised when input exceeds the model's context limit.
Error codes.
LangSmith parameters for tracing.
Abstract base class for a document retrieval system.
Represents a request to execute an action by an agent.
Representation of an action to be executed by an agent.
Result of running an AgentAction.
Final return value of an ActionAgent.
Interface for a caching layer for LLMs and Chat models.
Cache that stores things in memory.
Abstract base class for storing chat message history.
In-memory implementation of chat message history.
Defines interface for IR translation using a visitor pattern.
Base class for all expressions.
Enumerator of the operations.
Enumerator of the comparison operators.
Filtering expression.
Comparison to a value.
Logical operation over other directives.
Structured query.
Base abstract class for inputs to any language model.
String prompt value.
Chat prompt value.
Image URL for multimodal model inputs (OpenAI format).
Image prompt value.
Chat prompt value which explicitly lists out the message types it accepts.
Abstract interface for a key-value store.
In-memory implementation of the BaseStore using a dictionary.
In-memory store for any type of data.
In-memory store for bytes.
Raised when a key is invalid; e.g., uses incorrect characters.
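The key-value store interface above (batched mget/mset/mdelete over string keys, plus key iteration) can be approximated with a plain dict. Names here are illustrative stand-ins, not the actual langchain-core classes:

```python
from collections.abc import Iterator, Sequence
from typing import Generic, Optional, TypeVar

V = TypeVar("V")


class InMemoryKVStore(Generic[V]):
    """Dict-backed key-value store sketch (illustration only)."""

    def __init__(self) -> None:
        self._store: dict[str, V] = {}

    def mget(self, keys: Sequence[str]) -> list[Optional[V]]:
        # Missing keys yield None rather than raising.
        return [self._store.get(key) for key in keys]

    def mset(self, pairs: Sequence[tuple[str, V]]) -> None:
        for key, value in pairs:
            self._store[key] = value

    def mdelete(self, keys: Sequence[str]) -> None:
        for key in keys:
            self._store.pop(key, None)

    def yield_keys(self, prefix: Optional[str] = None) -> Iterator[str]:
        for key in self._store:
            if prefix is None or key.startswith(prefix):
                yield key
```

The batched shape of the interface is what lets real backends (Redis, filesystems, databases) replace the dict without changing calling code.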
Base class for chat loaders.
Chat Session.
Mixin for objects that tools can return directly.
Message for passing the result of executing a tool back to a model.
Tool Message chunk.
Represents an AI's request to call a tool.
A chunk of a tool call (yielded when streaming).
Annotation for citing data from a document.
Provider-specific annotation format.
Text output from an LLM.
Represents an AI's request to call a tool.
A chunk of a tool call (yielded when streaming).
Allowance for errors made by the LLM.
Tool call that is executed server-side.
A chunk of a server-side tool call (yielded when streaming).
Result of a server-side tool call.
Reasoning output from an LLM.
Image data.
Video data.
Audio data.
Plaintext data (e.g., from a .txt or .md document).
File data that doesn't fit into other multimodal block types.
Provider-specific content data.
Breakdown of input token counts.
Breakdown of output token counts.
Usage metadata for a message, such as token counts.
Message from an AI.
Message chunk from an AI (yielded when streaming).
Message from the user.
Human Message chunk.
Message responsible for deleting other messages.
Message for priming AI behavior.
System Message chunk.
String-like object that supports both property and method access patterns.
Base abstract message class.
Message chunk, which can be concatenated with other Message chunks.
Message that can be assigned an arbitrary speaker (i.e. role).
Chat Message chunk.
Message for passing the result of executing a tool back to a model.
Function Message chunk.
Interface for document loader.
Abstract interface for blob parsers.
Abstract interface for blob loaders implementation.
Load LangSmith Dataset examples as Document objects.
Raised when an indexing operation fails.
Return a detailed breakdown of the result of the indexing operation.
Abstract base class representing the interface for a record manager.
An in-memory record manager for testing purposes.
A generic response for upsert operations.
A generic response for delete operation.
A document retriever that supports indexing operations.
In-memory document index.
Reviver for JSON objects.
Base class for serialized objects.
Serialized constructor.
Serialized secret.
Serialized not implemented.
Serializable base class.
Structured prompt template for a language model.
Base class for message prompt templates.
String prompt that exposes the format method, returning a prompt.
Prompt template that contains few-shot examples.
Chat prompt template that supports few-shot examples.
Prompt template for a language model.
Template represented by a dictionary.
Prompt template that contains few-shot examples.
Base class for all prompt templates, returning a prompt.
Image prompt template for a multimodal model.
Prompt template that assumes variable is already list of messages.
Base class for message prompt templates that use a string prompt template.
Chat message prompt template.
Human message prompt template.
AI message prompt template.
System message prompt template.
Base class for chat prompt templates.
Prompt template for chat models.
Runnable to passthrough inputs unchanged or with additional keys.
Runnable that assigns key-value pairs to dict[str, Any] inputs.
Runnable that picks keys from dict[str, Any] inputs.
Protocol for objects that can be converted to a string.
Dictionary of labels for nodes and edges in a graph.
Edge in a graph.
Node in a graph.
Branch in a graph.
Enum for different curve styles supported by Mermaid.
Schema for Hexadecimal color codes for different node types.
Enum for different draw methods supported by Mermaid.
Graph of nodes and edges.
VertexViewer class.
Class for drawing in ASCII.
Check if a name is a local dict.
Check if the first argument of a function is a dict.
Get nonlocal variables accessed.
Get the nonlocal variables accessed of a function.
Get the source code of a lambda function.
Dictionary that can be added to another dictionary.
Protocol for objects that support addition.
Field that can be configured by the user.
Field that can be configured by the user with a default value.
Field that can be configured by the user with multiple default values.
Field that can be configured by the user. It is a specification of a field.
Serializable Runnable that can be dynamically configured.
Runnable that can be dynamically configured.
String enum.
Runnable that can be dynamically configured.
Runnable that can fallback to other Runnable objects if it fails.
Router input.
Runnable that routes to a set of Runnable based on Input['key'].
Helper class to draw a state graph into a PNG file.
Runnable that selects which branch to run based on a condition.
Runnable that manages chat message history for another Runnable.
A unit of work that can be invoked, batched, streamed, transformed and composed.
Runnable that can be serialized to JSON.
Sequence of Runnable objects, where the output of one is the input of the next.
Runnable that runs a mapping of Runnables in parallel.
Runnable that runs a generator function.
RunnableLambda converts a python callable into a Runnable.
RunnableEachBase class.
RunnableEach class.
Runnable that delegates calls to another Runnable with a set of **kwargs.
Wrap a Runnable with additional functionality.
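The core idea these Runnable classes share — each unit exposes invoke, and composition pipes one unit's output into the next unit's input — can be sketched minimally. This is an assumed simplification; the real interface also covers batch, stream, and async variants:

```python
from typing import Any, Callable


class MiniRunnable:
    """Minimal invoke-and-compose unit (illustration only)."""

    def __init__(self, func: Callable[[Any], Any]) -> None:
        self.func = func

    def invoke(self, value: Any) -> Any:
        return self.func(value)

    def __or__(self, other: "MiniRunnable") -> "MiniRunnable":
        # a | b builds a sequence: run a, then feed its output to b.
        return MiniRunnable(lambda value: other.invoke(self.invoke(value)))


chain = MiniRunnable(lambda x: x + 1) | MiniRunnable(lambda x: x * 2)
```

Overloading `|` is what makes pipelines read left to right, in the same spirit as LCEL composition.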
Empty dict type.
Configuration for a Runnable.
ThreadPoolExecutor that copies the context to the child thread.
Parameters for tenacity.wait_exponential_jitter.
Retry a Runnable if it fails.
Data associated with a streaming event.
Streaming event.
A standard stream event that follows LangChain convention for event data.
Custom stream event created by the user.
Fake embedding model for unit testing purposes.
Deterministic fake embedding model for unit testing purposes.
Interface for embedding models.
A container for results of an LLM call.
A single text generation output.
GenerationChunk, which can be concatenated with other Generation chunks.
A single chat generation output.
ChatGeneration chunk.
Class that contains metadata for a single execution of a chain or model.
Use to represent the result of a chat model call with a single prompt.
Interface for vector store.
Base Retriever class for VectorStore.
In-memory vector store implementation.
Tracer that calls a function with a single str parameter.
Tracer that prints to the console.
Tracer that runs a run evaluator whenever a run is persisted.
A single entry in the run log.
State of the run.
Patch to the run log.
Run log.
Tracer that streams run logs to a stream.
Tracer that calls listeners on run start, end, and error.
Async tracer that calls listeners on run start, end, and error.
Tracer that collects all nested runs in a list.
Implementation of the SharedTracer that POSTS to the LangChain endpoint.
Information about a run.
Base interface for tracers.
Async base interface for tracers.
Custom exception for Chevron errors.
Dummy lock that provides the proper interface but no protection.
Create n separate asynchronous iterators over iterable.
Async context manager to wrap an AsyncGenerator that has an aclose() method.
Representation of a callable function to send to an LLM.
Representation of a callable function to the OpenAI API.
Dummy lock that provides the proper interface but no protection.
Create n separate asynchronous iterators over iterable.
A string formatter that enforces keyword-only argument substitution.
Tool that can operate on any number of inputs.
Tool that takes in function or coroutine directly.
Input to the retriever.
Raised when args_schema is missing or has an incorrect type annotation.
Exception thrown when a tool execution error occurs.
Base class for all LangChain tools.
Annotation for tool arguments that are injected at runtime.
Annotation for injecting the tool call ID.
Base class for toolkits containing related tools.
Abstract base class for document transformation.
Base class for content used in retrieval and data processing workflows.
Raw data abstraction for document loading and file processing.
Class for storing a piece of text and associated metadata.
Base class for document compressors.
Callback handler for streaming.
Callback handler that prints to stdout.
Base class for run manager (a bound callback manager).
Synchronous run manager.
Synchronous parent run manager.
Async run manager.
Async parent run manager.
Callback manager for LLM run.
Async callback manager for LLM run.
Callback manager for chain run.
Async callback manager for chain run.
Callback manager for tool run.
Async callback manager for tool run.
Callback manager for retriever run.
Async callback manager for retriever run.
Callback manager for LangChain.
Callback manager for the chain group.
Async callback manager that handles callbacks from LangChain.
Async callback manager for the chain group.
Callback handler that writes to a file.
Mixin for Retriever callbacks.
Mixin for LLM callbacks.
Mixin for chain callbacks.
Mixin for tool callbacks.
Mixin for callback manager.
Mixin for run manager.
Base callback handler.
Base async callback handler.
Base callback manager.
Callback Handler that tracks AIMessage.usage_metadata.
A class for issuing beta warnings for LangChain users.
A class for issuing deprecation warnings for LangChain users.
A class for issuing deprecation warnings for LangChain users.
Select examples based on semantic similarity.
Select examples based on Max Marginal Relevance.
Select examples based on length.
Interface for selecting examples to include in prompts.
Parse an output using a Pydantic model.
Extract text content from model outputs as a string.
Parse the output of an LLM call to a JSON object.
Base class for an output parser that can handle streaming input.
Base class for an output parser that can handle streaming input.
Parse tools from OpenAI response.
Parse tools from OpenAI response.
Parse tools from OpenAI response.
Parse an output using xml format.
Abstract base class for parsing the outputs of a model.
Base class to parse the output of an LLM call.
Base class to parse the output of an LLM call.
Parse the output of a model to a list.
Parse the output of a model to a comma-separated list.
Parse a numbered list.
Parse a Markdown list.
Parse an output that is one of sets of values.
Parse an output as the JSON object.
Parse an output as the element of the JSON object.
Parse an output as a Pydantic object.
Parse an output as an attribute of a Pydantic object.
Model profile.
Base LLM abstract interface.
Simple interface for implementing a custom LLM.
LangSmith parameters for tracing.
Abstract base class for interfacing with language models.
Base class for chat models.
Simplified implementation for a chat model to inherit from.
Fake LLM for testing purposes.
Fake error for testing purposes.
Fake streaming list LLM for testing purposes.
Fake chat model for testing purposes.
Fake error for testing purposes.
Fake chat model for testing purposes.
Fake Chat Model wrapper for testing purposes.
Generic fake chat model that can be used to test the chat model interface.
Generic fake chat model that can be used to test the chat model interface.
Create a message with a link to the LangChain troubleshooting guide.
Get information about the LangChain runtime environment.
Set a new value for the verbose global setting.
Get the value of the verbose global setting.
Set a new value for the debug global setting.
Get the value of the debug global setting.
Set a new LLM cache, overwriting the previous value, if any.
Get the value of the llm_cache global setting.
Print information about the environment for debugging purposes.
Import an attribute from a module located in a package.
Convert a sequence of messages to strings and concatenate them into one string.
Convert a sequence of messages from dicts to Message objects.
Convert a message chunk to a Message.
Convert a sequence of messages to a list of messages.
Filter messages based on name, type or id.
Merge consecutive Messages of the same type.
Trim messages to be below a token count.
Convert LangChain messages into OpenAI message dicts.
Approximate the total number of tokens in messages.
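Trimming a conversation to a token budget, using an approximate character-based token count, can be sketched as follows. This is a rough illustration over (role, text) tuples with an assumed 4-characters-per-token heuristic, not the library's implementation:

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)


def trim_to_budget(messages: list[tuple[str, str]],
                   max_tokens: int) -> list[tuple[str, str]]:
    """Keep the most recent messages whose total stays within the budget."""
    kept: list[tuple[str, str]] = []
    total = 0
    # Walk newest-first so the most recent context survives trimming.
    for role, text in reversed(messages):
        cost = approx_tokens(text)
        if total + cost > max_tokens:
            break
        kept.append((role, text))
        total += cost
    return list(reversed(kept))
```

Real trimming utilities add options such as always keeping the system message or trimming from the start instead; this sketch shows only the keep-the-latest strategy.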
Create a tool call.
Create a tool call chunk.
Create an invalid tool call.
Best-effort parsing of tools.
Best-effort parsing of tool chunks.
Check if the provided content block is a data content block.
Create a TextContentBlock.
Create an ImageContentBlock.
Create a VideoContentBlock.
Create an AudioContentBlock.
Create a FileContentBlock.
Create a PlainTextContentBlock.
Create a ToolCall.
Create a ReasoningContentBlock.
Create a Citation.
Create a NonStandardContentBlock.
Add multiple AIMessageChunks together.
Recursively add two UsageMetadata objects.
Recursively subtract two UsageMetadata objects.
Merge multiple message contents.
Convert a Message to a dictionary.
Convert a sequence of Messages to a list of dictionaries.
Get a title representation for a message.
Register content translators for a provider in PROVIDER_TRANSLATORS.
Get the translator functions for a provider.
Derive standard content blocks from a message with Groq content.
Derive standard content blocks from a message chunk with Groq content.
Convert ImageContentBlock to format expected by OpenAI Chat Completions.
Format standard data content block to format expected by OpenAI.
Derive standard content blocks from a message with OpenAI content.
Derive standard content blocks from a message chunk with OpenAI content.
Derive standard content blocks from a message with Bedrock content.
Derive standard content blocks from a message chunk with Bedrock content.
Derive standard content blocks from a message with Anthropic content.
Derive standard content blocks from a message chunk with Anthropic content.
Derive standard content blocks from a message with Bedrock Converse content.
Derive standard content blocks from a chunk with Bedrock Converse content.
Translate Google AI grounding metadata to LangChain Citations.
Derive standard content blocks from a message with Google (GenAI) content.
Derive standard content blocks from a chunk with Google (GenAI) content.
Index data from the loader into the vector store.
Async index data from the loader into the vector store.
Default init validator that blocks jinja2 templates.
Revive a LangChain class from a JSON string.
Revive a LangChain class from a JSON object.
Try to determine if a value is different from the default.
Serialize a "not implemented" object.
Return a default value for an object.
Return a JSON string representation of an object.
Return a dict representation of an object.
Format a template using jinja2.
Validate that the input variables are valid for the template.
Format a template using mustache.
Get the top-level variables from a mustache template.
Get the variables from a mustache template.
Check that template string is valid.
Get the variables from the template.
Return True if child is a subsequence of parent.
Format a document into a string based on a prompt template.
Async format a document into a string based on a prompt template.
Identity function.
Async identity function.
Check if a string is a valid UUID.
Convert the data of a node to a string.
Convert the data of a node to a JSON-serializable format.
Build a DAG and draw it in ASCII.
Run a coroutine with a semaphore.
Gather coroutines with a limit on the number of concurrent coroutines.
Check if a callable accepts a run_manager argument.
Check if a callable accepts a config argument.
Check if a callable accepts a context argument.
Check if asyncio.create_task accepts a context arg.
Await a coroutine with a context.
Get the keys of the first argument of a function if it is a dict.
Get the source code of a lambda function.
Get the nonlocal variables accessed by a function.
Indent all lines of text after the first line.
Add a sequence of addable objects together.
Asynchronously add a sequence of addable objects together.
Get the unique config specs from a sequence of config specs.
Check if a function is an async generator.
Check if a function is async.
Prefix the id of a ConfigurableFieldSpec.
Make options spec.
Draws a Mermaid graph using the provided graph data.
Draws a Mermaid graph as PNG using provided syntax.
Coerce a Runnable-like object into a Runnable.
Decorate a function to make it a Runnable.
Set the child Runnable config + tracing context.
Ensure that a config is a dict with all keys present.
Get a list of configs from a single config or a list of configs.
Patch a config with new values.
Merge multiple configs into one.
Call function that may optionally accept a run_manager and/or config.
Async call function that may optionally accept a run_manager and/or config.
Get a callback manager for a config.
Get an async callback manager for a config.
Get an executor for a config.
Run a function in an executor.
Merge a list of ChatGenerationChunks into a single ChatGenerationChunk.
Calculate maximal marginal relevance.
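Maximal marginal relevance scores each candidate by its relevance to the query minus its redundancy with already-selected results, then picks greedily. A sketch over plain embedding lists with illustrative function names and defaults:

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def mmr(query: list[float], candidates: list[list[float]], k: int = 2,
        lambda_mult: float = 0.5) -> list[int]:
    """Return indices of k candidates chosen by maximal marginal relevance."""
    selected: list[int] = []
    while len(selected) < min(k, len(candidates)):
        best_idx, best_score = -1, -math.inf
        for i, emb in enumerate(candidates):
            if i in selected:
                continue
            relevance = cosine(query, emb)
            # Redundancy: similarity to the closest already-selected result.
            redundancy = max((cosine(emb, candidates[j]) for j in selected),
                             default=0.0)
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return selected
```

`lambda_mult` near 1 favors pure relevance; near 0 it favors diversity among results.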
Try to stringify an object to JSON.
Get the elapsed time of a run.
Wait for all tracers to finish.
Convert run to dict, compatible with both Pydantic v1 and v2.
Copy run, compatible with both Pydantic v1 and v2.
Construct run without validation, compatible with both Pydantic v1 and v2.
Convert any Pydantic model to dict, compatible with both v1 and v2.
Copy any Pydantic model, compatible with both v1 and v2.
Log an error once.
Wait for all tracers to finish.
Get the client.
Instruct LangChain to log all runs in context to LangSmith.
Collect all run traces in context.
Register a configure hook.
Check if an IP address is in a private range.
Check if hostname or IP is a cloud metadata endpoint.
Check if hostname or IP is localhost.
Validate a URL for SSRF protection.
Check if a URL is safe (non-throwing version of validate_safe_url).
Check if the given class is Pydantic v1-like.
Check if the given class is Pydantic v2-like.
Check if the given class is a subclass of Pydantic BaseModel.
Check if the given class is an instance of Pydantic BaseModel.
Decorator to run a function before model initialization.
Return the field names of a Pydantic model.
Create a Pydantic model with the given field definitions.
Create a Pydantic model with the given field definitions.
Determine if running within IPython or Jupyter.
Stringify a value.
Stringify a dictionary.
Convert an iterable to a comma-separated string.
Sanitize text by removing NUL bytes that are incompatible with PostgreSQL.
Validate specified keyword args are mutually exclusive.
Raise an error with the response text.
Context manager for mocking out datetime.now() in unit tests.
Dynamically import a module.
Check the version of a package.
Get field names, including aliases, for a pydantic class.
Build extra kwargs from values and extra_kwargs.
Convert a string to a SecretStr if needed.
Create a factory method that gets a value from an environment variable.
Secret from env.
Ensure the ID is a valid string, generating a new UUID if not provided.
Check if an environment variable is set.
Get a value from a dictionary or an environment variable.
Get a value from a dictionary or an environment variable.
Parse a literal from the template.
Do a preliminary check to see if a tag could be a standalone.
Do a final check to see if a tag could be a standalone.
Parse a tag from a template.
Tokenize a mustache template.
Render a mustache template.
Resolve and inline JSON Schema $ref references in a schema object.
An individual iterator of a tee.
Utility batching function for async iterables.
Parse a JSON string that may be missing closing braces.
Parse a JSON string from a Markdown string.
Parse and check a JSON string from a Markdown string.
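Repairing JSON that is missing its closing braces works by tracking unclosed brackets and strings, appending the closers, and then parsing normally. A sketch of the idea, not the library function:

```python
import json
from typing import Any, Optional


def parse_partial_json(text: str) -> Optional[Any]:
    """Try to parse JSON that may be missing closing brackets (sketch)."""
    stack: list[str] = []   # closers we still owe, innermost last
    in_string = False
    escaped = False
    for ch in text:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]":
            if stack:
                stack.pop()
    # Close any dangling string, then all open containers innermost-first.
    candidate = text + ('"' if in_string else "") + "".join(reversed(stack))
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None
```

This pattern is what makes streaming JSON parsers useful: each partially received chunk can be completed and parsed to give incremental structured output.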
Extract all links from a raw HTML string.
Extract all links from a raw HTML string and convert into absolute paths.
Convert a raw function/class to an OpenAI function.
Convert a tool-like object to an OpenAI tool schema.
Convert a schema representation to a JSON schema.
Convert an example into a list of messages that can be fed into an LLM.
An individual iterator of a .tee.
Utility batching function.
Get a mapping from items to a supported color.
Get colored text.
Get bolded text.
Print text with highlighting and no end characters.
Generate a UUID from a Unix timestamp in nanoseconds and random bits.
Merge dictionaries.
Add many lists, handling None.
Merge two objects.
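Merging partial payloads — concatenating strings and lists, recursing into nested dicts, and treating None as "no contribution" — can be sketched as follows. These are illustrative helpers, not the library's merge utilities:

```python
from typing import Any, Optional


def merge_dicts(left: dict[str, Any], right: dict[str, Any]) -> dict[str, Any]:
    """Recursively merge two dicts, concatenating strings and lists (sketch)."""
    merged = dict(left)
    for key, value in right.items():
        if key not in merged or merged[key] is None:
            merged[key] = value
        elif value is None:
            continue  # None on the right never overwrites a value
        elif isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_dicts(merged[key], value)
        elif isinstance(merged[key], (str, list)) and type(merged[key]) is type(value):
            merged[key] = merged[key] + value  # concatenate matching types
        else:
            merged[key] = value
    return merged


def add_lists(*lists: Optional[list[Any]]) -> Optional[list[Any]]:
    """Concatenate lists, treating None as 'no contribution'."""
    present = [lst for lst in lists if lst is not None]
    return sum(present, []) if present else None
```

This is the shape of logic used when accumulating streamed chunks, where each chunk carries only a fragment of the final content.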
Render the tool name and description in plain text.
Render the tool name, description, and args in plain text.
Convert Python functions and Runnables to LangChain tools.
Convert a Runnable into a BaseTool.
Create a tool to do retrieval of documents.
Create a Pydantic schema from a function's signature.
Get all annotations from a Pydantic BaseModel and its parents.
Get a callback manager for a chain group in a context manager.
Get an async callback manager for a chain group in a context manager.
Make an awaitable method always shielded from cancellation.
Generic event handler for CallbackManager.
Async generic event handler for AsyncCallbackManager.
Dispatch an adhoc event to the handlers.
Dispatch an adhoc event.
Get usage metadata callback.
Return whether the caller at depth of this function is internal.
Decorator to mark a function, a class, or a property as beta.
Context manager to suppress LangChainDeprecationWarning.
Display a standardized beta annotation.
Unmute LangChain beta warnings.
Get the path of the file as a relative path to the package directory.
Get the path of the file as a LangChain import, excluding the top-level langchain namespace.
Decorator to mark a function, a class, or a property as deprecated.
Context manager to suppress LangChainDeprecationWarning.
Display a standardized deprecation.
Unmute LangChain deprecation warnings.
Decorator indicating that parameter old of func is renamed to new.
Return a list of values in dict sorted by key.
Parse a single tool call.
Create an InvalidToolCall from a raw tool call.
Parse a list of tool calls.
Get nested element from path.
Drop the last n elements of an iterator.
Create a retry decorator for a given LLM and provided a list of error types.
Get prompts that are already cached.
Get prompts that are already cached. Async version.
Update the cache and get the LLM output.
Update the cache and get the LLM output. Async version.
Check whether a block contains multimodal data in OpenAI Chat Completions format.
Get a GPT-2 tokenizer instance.
Generate from a stream.
Async generate from a stream.
Load prompt from config dict.
Unified method for loading a prompt from LangChainHub or local filesystem.
DEPRECATED - Get the major version of Pydantic.
Pure-Python implementation of anext() for testing purposes.
langchain-core defines the base abstractions for the LangChain ecosystem.
Cross Encoder interface.
Interface for a rate limiter and an in-memory rate limiter.
Custom exceptions for LangChain.
Utilities for getting information about the runtime environment.
langchain-core version information and utilities.
Retriever class that returns Document objects given a text query.
Schema definitions for representing agent actions, observations, and return values.
Optional caching layer for language models.
Chat message history stores a history of the message interactions in a chat.
Internal representation of a structured query language.
Global values and configuration that apply to all of LangChain.
Prompt values for language model prompts.
Store implements the key-value stores and storage helpers.
Chat loaders.
Print information about the system and langchain packages for debugging purposes.
Chat Sessions are a collection of messages and function calls.
Messages are objects used in prompts and chat conversations.
Module contains utility functions for working with messages.
Messages for tools.
Standard, multimodal content blocks for Large Language Model I/O.
AI message.
Human message.
Message responsible for deleting other messages.
System message.
Base message.
Chat Message.
Function Message.
Derivations of standard content blocks from provider content.
Derivations of standard content blocks from LangChain v0 multimodal content.
Derivations of standard content blocks from Groq content.
Derivations of standard content blocks from OpenAI content.
Derivations of standard content blocks from Bedrock content.
Derivations of standard content blocks from Anthropic content.
Derivations of standard content blocks from Amazon (Bedrock Converse) content.
Derivations of standard content blocks from Google (VertexAI) content.
Derivations of standard content blocks from Google (GenAI) content.
Document loaders.
Abstract interface for document loader implementations.
Schema for Blobs and Blob Loaders.
LangSmith document loader.
Code to help indexing data into a vectorstore.
Module contains logic for indexing documents into vector stores.
Base classes for indexing.
In-memory document index.
Load module helps with serialization and deserialization.
Load LangChain objects from JSON strings or objects.
Serializable base class.
Init validators for deserialization security.
Serialize LangChain objects to JSON.
Serialization mapping.
A prompt is the input to the model.
Structured prompt template for a language model.
Message prompt templates.
BasePrompt schema definition.
Prompt template that contains few-shot examples.
Prompt schema definition.
Load prompts.
Dictionary prompt template.
Prompt template that contains few-shot examples.
Base class for prompt templates.
Image prompt template for a multimodal model.
Chat prompt template.
LangChain Runnable and the LangChain Expression Language (LCEL).
Implementation of the RunnablePassthrough.
Graph used in Runnable objects.
Draws DAG in ASCII.
Utility code for Runnable objects.
Runnable objects that can be dynamically configured.
Runnable that can fallback to other Runnable objects if it fails.
Runnable that routes to a set of Runnable objects.
Mermaid graph drawing utilities.
Helper class to draw a state graph into a PNG file.
Runnable that selects which branch to run based on a condition.
Runnable that manages chat message history for another Runnable.
Base classes and utilities for Runnable objects.
Configuration utilities for Runnable objects.
Runnable that retries a Runnable if it fails.
Module contains typedefs that are used with Runnable objects.
Embeddings.
Module contains a few fake embedding models for testing purposes.
Embeddings interface.
Output classes.
LLMResult class.
Generation output schema.
Chat generation output classes.
RunInfo class.
Chat result schema.
Vector stores.
Internal utilities for the in-memory implementation of VectorStore.
A vector store stores embedded data and performs vector search.
In-memory vector store.
Tracers are classes for tracing runs.
Tracers that print to the console.
Utilities for the root listener.
A tracer that runs evaluators over completed runs.
Tracer that streams run logs to a stream.
Tracers that call listeners.
A tracer that collects all nested runs in a list.
A tracer implementation that records to LangChain endpoint.
Schemas for tracers.
Internal tracer to power the event stream API.
Module implements a memory stream for communication between two coroutines.
Context management for tracers.
Base interfaces for tracing runs.
Utility functions for LangChain.
Utilities for pydantic.
Utilities for working with interactive environments.
String utilities.
Generic utility functions.
Utilities for environment variables.
Adapted from https://github.com/noahmorrison/chevron.
Utilities for JSON Schema.
Asynchronous iterator utilities.
Utilities for JSON.
Utilities for working with HTML.
Methods for creating function specs in the style of OpenAI Functions.
Utilities for working with iterators.
Handle chained inputs.
UUID utility functions.
Usage utilities.
Utilities for image processing.
Utilities for formatting strings.
Tools are classes that an Agent uses to interact with the world.
Structured tool.
Tool that takes in function or coroutine directly.
Utilities to render tools.
Convert functions and runnables to tools.
Retriever tool.
Base classes and utilities for LangChain tools.
Documents module for data retrieval and processing workflows.
Document transformers.
Base classes for media and documents.
Document compressor.
Callback handlers allow listening to events in LangChain.
Callback handler that streams to stdout on each new LLM token.
Callback handler that prints to stdout.
Run managers.
Callback handler that writes to a file.
Base callback handler for LangChain.
Callback Handler that tracks AIMessage.usage_metadata.
Helper functions for marking parts of the LangChain API as beta.
Helper functions for deprecating parts of the LangChain API.
Example selectors.
Example selector that selects examples based on SemanticSimilarity.
Select examples based on length.
Interface for selecting examples to include in prompts.
OutputParser classes parse the output of an LLM call into structured data.
Output parsers using Pydantic.
String output parser.
Parser for JSON output.
Base classes for output parsers that can handle streaming input.
Parse tools for OpenAI tools output.
Output parser for XML format.
Base parser for language model outputs.
Parsers for list output.
Parsers for OpenAI functions output.
Format instructions.
Core language model abstractions.
Model profile types and utilities.
Base interface for traditional large language models (LLMs).
Base language models class.
Chat models for conversational AI.
Fake LLMs for testing purposes.
Fake chat models for testing purposes.
A type representing the various ways a message can be represented.
A union of all defined Annotation types.
A union of all defined multimodal data ContentBlock types.
A union of all defined ContentBlock types and aliases.
Input to a language model.
Output from a language model.