Looking for the JS/TS version? Check out LangChain.js.
To help you ship LangChain apps to production faster, check out LangSmith, a unified developer platform for building, testing, and monitoring LLM applications.
pip install langchain-core
LangChain Core contains the base abstractions that power the LangChain ecosystem.
These abstractions are designed to be as modular and simple as possible.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.
The LangChain ecosystem is built on top of langchain-core. Some of the benefits:
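The "implement the interface once, use it anywhere in the ecosystem" idea can be illustrated with a minimal embeddings-style sketch. All names below are hypothetical stand-ins, not actual langchain-core classes:

```python
from abc import ABC, abstractmethod


class EmbeddingsInterface(ABC):
    """A tiny stand-in for a base abstraction a provider would implement."""

    @abstractmethod
    def embed_query(self, text: str) -> list[float]:
        """Embed a single query string into a vector."""


class FakeProviderEmbeddings(EmbeddingsInterface):
    """Hypothetical provider: folds character codes into a 4-dim vector."""

    def embed_query(self, text: str) -> list[float]:
        vec = [0.0] * 4
        for i, ch in enumerate(text):
            vec[i % 4] += ord(ch)
        return vec
```

Any component written against `EmbeddingsInterface` then works with this provider unchanged; that substitutability is the payoff of the shared base abstractions.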
For full documentation, see the API reference. For conceptual guides, tutorials, and examples on using LangChain, see the LangChain Docs. You can also chat with the docs using Chat LangChain.
See our Releases and Versioning policies.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the Contributing Guide.
Defines interface for IR translation using a visitor pattern.
Base class for all expressions.
Enumerator of the operations.
Enumerator of the comparison operators.
Filtering expression.
Comparison to a value.
Logical operation over other directives.
Structured query.
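The visitor pattern behind the structured-query IR can be sketched as follows. The class names mirror the concepts above but are simplified illustrations, not the actual langchain-core classes; the SQL-ish translator is hypothetical:

```python
from dataclasses import dataclass


class Visitor:
    """Backends subclass this to translate IR nodes into their query dialect."""

    def visit_comparison(self, node):
        raise NotImplementedError

    def visit_operation(self, node):
        raise NotImplementedError


@dataclass
class Comparison:
    attribute: str
    comparator: str
    value: object

    def accept(self, visitor):
        return visitor.visit_comparison(self)


@dataclass
class Operation:
    operator: str
    arguments: list

    def accept(self, visitor):
        return visitor.visit_operation(self)


class SqlishTranslator(Visitor):
    """Hypothetical backend: renders the IR as a SQL-like filter string."""

    def visit_comparison(self, node):
        return f"{node.attribute} {node.comparator} {node.value!r}"

    def visit_operation(self, node):
        joined = f" {node.operator.upper()} ".join(
            arg.accept(self) for arg in node.arguments
        )
        return f"({joined})"
```

Adding a new vector-store backend then only requires a new `Visitor` subclass; the IR itself never changes.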
Base class for rate limiters.
An in-memory rate limiter based on a token bucket algorithm.
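The token-bucket idea can be sketched as below. This is illustrative only; the class name, constructor arguments, and `try_acquire` method are hypothetical and differ from the real in-memory rate limiter's API and thread-safety handling:

```python
import time


class TokenBucketLimiter:
    """Toy token bucket: tokens refill at a fixed rate up to a capacity."""

    def __init__(self, requests_per_second: float, max_bucket_size: float):
        self.rate = requests_per_second
        self.capacity = max_bucket_size
        self.tokens = 0.0  # bucket starts empty
        self.last = time.monotonic()

    def _refill(self) -> None:
        # Add tokens proportional to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_acquire(self) -> bool:
        # Non-blocking: consume a token if one is available.
        self._refill()
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A blocking variant would simply loop on `try_acquire` with a short sleep, which is roughly what a real rate limiter does before a model request is sent.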
Base abstract class for inputs to any language model.
String prompt value.
Chat prompt value.
Image URL for multimodal model inputs (OpenAI format).
Image prompt value.
Chat prompt value which explicitly lists out the message types it accepts.
Chat Session.
Represents a request to execute an action by an agent.
Representation of an action to be executed by an agent.
Result of running an AgentAction.
Final return value of an ActionAgent.
General LangChain exception.
Base class for exceptions in tracers module.
Exception that output parsers should raise to signify a parsing error.
Exception raised when input exceeds the model's context limit.
Error codes.
Interface for a caching layer for LLMs and Chat models.
Cache that stores things in memory.
Abstract base class for storing chat message history.
In-memory implementation of chat message history.
LangSmith parameters for tracing.
Abstract base class for a document retrieval system.
Base class for chat loaders.
Abstract interface for a key-value store.
In-memory implementation of the BaseStore using a dictionary.
In-memory store for any type of data.
In-memory store for bytes.
Raised when a key is invalid; e.g., uses incorrect characters.
Load LangSmith Dataset examples as Document objects.
Abstract interface for blob loader implementations.
Interface for document loader.
Abstract interface for blob parsers.
A single text generation output.
GenerationChunk, which can be concatenated with other Generation chunks.
A container for results of an LLM call.
Class that contains metadata for a single execution of a chain or model.
A single chat generation output.
ChatGeneration chunk.
Used to represent the result of a chat model call with a single prompt.
A string formatter that enforces keyword-only argument substitution.
Dummy lock that provides the proper interface but no protection.
Create n separate asynchronous iterators over an iterable.
Async context manager to wrap an AsyncGenerator that has an aclose() method.
Representation of a callable function to send to an LLM.
Representation of a callable function to the OpenAI API.
Dummy lock that provides the proper interface but no protection.
Create n separate asynchronous iterators over an iterable.
Custom exception for Chevron errors.
Protocol for objects that can be converted to a string.
Dictionary of labels for nodes and edges in a graph.
Edge in a graph.
Node in a graph.
Branch in a graph.
Enum for different curve styles supported by Mermaid.
Schema for Hexadecimal color codes for different node types.
Enum for different draw methods supported by Mermaid.
Graph of nodes and edges.
Parameters for tenacity.wait_exponential_jitter.
Retry a Runnable if it fails.
Runnable to passthrough inputs unchanged or with additional keys.
Runnable that assigns key-value pairs to dict[str, Any] inputs.
Runnable that picks keys from dict[str, Any] inputs.
Serializable Runnable that can be dynamically configured.
Runnable that can be dynamically configured.
String enum.
Runnable that can be dynamically configured.
Check if a name is a local dict.
Check if the first argument of a function is a dict.
Get nonlocal variables accessed.
Get the nonlocal variables accessed of a function.
Get the source code of a lambda function.
Dictionary that can be added to another dictionary.
Protocol for objects that support addition.
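The addition protocol behind streaming-chunk merging can be sketched as a dict subclass whose `+` concatenates values that themselves support addition. This is a toy illustration of the idea, not the actual langchain-core class:

```python
class AddableDict(dict):
    """Toy sketch: merging two dicts with '+', adding values key-by-key."""

    def __add__(self, other):
        merged = AddableDict(self)
        for key, value in other.items():
            if key not in merged or merged[key] is None:
                merged[key] = value
            elif value is not None:
                # Both sides present: delegate to the values' own '+'.
                merged[key] = merged[key] + value
        return merged
```

This is how partial outputs from a stream can be folded into one object: each chunk is addable, so the running result is just the sum of the chunks seen so far.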
Field that can be configured by the user.
Field that can be configured by the user with a default value.
Field that can be configured by the user with multiple default values.
Field that can be configured by the user. It is a specification of a field.
Runnable that selects which branch to run based on a condition.
Runnable that can fallback to other Runnable objects if it fails.
Runnable that manages chat message history for another Runnable.
VertexViewer class.
Class for drawing in ASCII.
Helper class to draw a state graph into a PNG file.
Router input.
Runnable that routes to a set of Runnables based on Input['key'].
Empty dict type.
Configuration for a Runnable.
ThreadPoolExecutor that copies the context to the child thread.
Data associated with a streaming event.
Streaming event.
A standard stream event that follows LangChain convention for event data.
Custom stream event created by the user.
A unit of work that can be invoked, batched, streamed, transformed and composed.
Runnable that can be serialized to JSON.
Sequence of Runnable objects, where the output of one is the input of the next.
Runnable that runs a mapping of Runnables in parallel.
Runnable that runs a generator function.
RunnableLambda converts a python callable into a Runnable.
RunnableEachBase class.
RunnableEach class.
Runnable that delegates calls to another Runnable with a set of **kwargs.
Wrap a Runnable with additional functionality.
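The core Runnable composition idea (a shared `invoke` method plus `|` piping into a sequence) can be sketched in a few lines. The names mirror real langchain-core classes but the bodies are simplified toys; real Runnables also support batching, streaming, and async variants:

```python
class Runnable:
    """Minimal unit of work: invoke one value, compose with '|'."""

    def invoke(self, value):
        raise NotImplementedError

    def __or__(self, other):
        return RunnableSequence(self, other)


class RunnableLambda(Runnable):
    """Wrap a plain callable as a Runnable."""

    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)


class RunnableSequence(Runnable):
    """Pipe the output of each step into the next."""

    def __init__(self, *steps):
        self.steps = steps

    def invoke(self, value):
        for step in self.steps:
            value = step.invoke(value)
        return value


chain = RunnableLambda(str.upper) | RunnableLambda(lambda s: s + "!")
```

Because every component speaks the same `invoke` interface, prompts, models, and parsers can all be chained with `|` without knowing anything about each other.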
Fake embedding model for unit testing purposes.
Deterministic fake embedding model for unit testing purposes.
Interface for embedding models.
In-memory document index.
Raised when an indexing operation fails.
Return a detailed breakdown of the result of the indexing operation.
Abstract base class representing the interface for a record manager.
An in-memory record manager for testing purposes.
A generic response for upsert operations.
A generic response for delete operation.
A document retriever that supports indexing operations.
Base class for document compressors.
Abstract base class for document transformation.
Base class for content used in retrieval and data processing workflows.
Raw data abstraction for document loading and file processing.
Class for storing a piece of text and associated metadata.
Reviver for JSON objects.
Base class for serialized objects.
Serialized constructor.
Serialized secret.
Serialized not implemented.
Serializable base class.
Breakdown of input token counts.
Breakdown of output token counts.
Usage metadata for a message, such as token counts.
Message from an AI.
Message chunk from an AI (yielded when streaming).
Mixin for objects that tools can return directly.
Message for passing the result of executing a tool back to a model.
Tool Message chunk.
Represents an AI's request to call a tool.
A chunk of a tool call (yielded when streaming).
Message for passing the result of executing a tool back to a model.
Function Message chunk.
Message that can be assigned an arbitrary speaker (i.e. role).
Chat Message chunk.
Message from the user.
Human Message chunk.
Message responsible for deleting other messages.
Annotation for citing data from a document.
Provider-specific annotation format.
Text output from an LLM.
Represents an AI's request to call a tool.
A chunk of a tool call (yielded when streaming).
Allowance for errors made by an LLM.
Tool call that is executed server-side.
A chunk of a server-side tool call (yielded when streaming).
Result of a server-side tool call.
Reasoning output from an LLM.
Image data.
Video data.
Audio data.
Plaintext data (e.g., from a .txt or .md document).
File data that doesn't fit into other multimodal block types.
Provider-specific content data.
Message for priming AI behavior.
System Message chunk.
String-like object that supports both property and method access patterns.
Base abstract message class.
Message chunk, which can be concatenated with other Message chunks.
Parse the output of a model to a list.
Parse the output of a model to a comma-separated list.
Parse a numbered list.
Parse a Markdown list.
Base class for an output parser that can handle streaming input.
Base class for an output parser that can handle streaming input.
Extract text content from model outputs as a string.
Parse an output that is one of sets of values.
Parse an output as the JSON object.
Parse an output as the element of the JSON object.
Parse an output as a Pydantic object.
Parse an output as an attribute of a Pydantic object.
Parse an output using a Pydantic model.
Parse tools from OpenAI response.
Parse tools from OpenAI response.
Parse tools from OpenAI response.
Parse the output of an LLM call to a JSON object.
Parse an output using XML format.
Abstract base class for parsing the outputs of a model.
Base class to parse the output of an LLM call.
Base class to parse the output of an LLM call.
Tool that takes in a function or coroutine directly.
Input to the retriever.
Tool that can operate on any number of inputs.
Raised when args_schema is missing or has an incorrect type annotation.
Exception thrown when a tool execution error occurs.
Base class for all LangChain tools.
Annotation for tool arguments that are injected at runtime.
Annotation for injecting the tool call ID.
Base class for toolkits containing related tools.
Tracer that calls listeners on run start, end, and error.
Async tracer that calls listeners on run start, end, and error.
Information about a run.
Tracer that collects all nested runs in a list.
Tracer that runs a run evaluator whenever a run is persisted.
Tracer that calls a function with a single str parameter.
Tracer that prints to the console.
Implementation of the SharedTracer that POSTS to the LangChain endpoint.
A single entry in the run log.
State of the run.
Patch to the run log.
Run log.
Tracer that streams run logs to a stream.
Base interface for tracers.
Async base interface for tracers.
Select examples based on length.
Select examples based on semantic similarity.
Select examples based on Max Marginal Relevance.
Interface for selecting examples to include in prompts.
Template represented by a dictionary.
Prompt template that assumes the variable is already a list of messages.
Base class for message prompt templates that use a string prompt template.
Chat message prompt template.
Human message prompt template.
AI message prompt template.
System message prompt template.
Base class for chat prompt templates.
Prompt template for chat models.
Structured prompt template for a language model.
Prompt template for a language model.
String prompt that exposes the format method, returning a prompt.
Prompt template that contains few shot examples.
Base class for message prompt templates.
Image prompt template for a multimodal model.
Prompt template that contains few shot examples.
Chat prompt template that supports few-shot examples.
Base class for all prompt templates, returning a prompt.
Callback handler for streaming.
Callback handler that writes to a file.
Callback handler that prints to stdout.
Base class for run manager (a bound callback manager).
Synchronous run manager.
Synchronous parent run manager.
Async run manager.
Async parent run manager.
Callback manager for LLM run.
Async callback manager for LLM run.
Callback manager for chain run.
Async callback manager for chain run.
Callback manager for tool run.
Async callback manager for tool run.
Callback manager for retriever run.
Async callback manager for retriever run.
Callback manager for LangChain.
Callback manager for the chain group.
Async callback manager that handles callbacks from LangChain.
Async callback manager for the chain group.
Callback Handler that tracks AIMessage.usage_metadata.
Mixin for Retriever callbacks.
Mixin for LLM callbacks.
Mixin for chain callbacks.
Mixin for tool callbacks.
Mixin for callback manager.
Mixin for run manager.
Base callback handler.
Base async callback handler.
Base callback manager.
In-memory vector store implementation.
Interface for vector store.
Base Retriever class for VectorStore.
A class for issuing deprecation warnings for LangChain users.
A class for issuing deprecation warnings for LangChain users.
A class for issuing beta warnings for LangChain users.
Fake chat model for testing purposes.
Fake error for testing purposes.
Fake chat model for testing purposes.
Fake Chat Model wrapper for testing purposes.
Generic fake chat model that can be used to test the chat model interface.
Generic fake chat model that can be used to test the chat model interface.
Base LLM abstract interface.
Simple interface for implementing a custom LLM.
Fake LLM for testing purposes.
Fake error for testing purposes.
Fake streaming list LLM for testing purposes.
Base class for chat models.
Simplified implementation for a chat model to inherit from.
Model profile.
LangSmith parameters for tracing.
Abstract base class for interfacing with language models.
Print information about the environment for debugging purposes.
Get information about the LangChain runtime environment.
Create a message with a link to the LangChain troubleshooting guide.
Set a new value for the verbose global setting.
Get the value of the verbose global setting.
Set a new value for the debug global setting.
Get the value of the debug global setting.
Set a new LLM cache, overwriting the previous value, if any.
Get the value of the llm_cache global setting.
Import an attribute from a module located in a package.
Merge a list of ChatGenerationChunks into a single ChatGenerationChunk.
Validate specified keyword args are mutually exclusive.
Raise an error with the response text.
Context manager for mocking out datetime.now() in unit tests.
Dynamically import a module.
Check the version of a package.
Get field names, including aliases, for a pydantic class.
Build extra kwargs from values and extra_kwargs.
Convert a string to a SecretStr if needed.
Create a factory method that gets a value from an environment variable.
Secret from env.
Ensure the ID is a valid string, generating a new UUID if not provided.
Check if an environment variable is set.
Get a value from a dictionary or an environment variable.
Get a value from a dictionary or an environment variable.
Merge dictionaries.
Add many lists, handling None.
Merge two objects.
Resolve and inline JSON Schema $ref references in a schema object.
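Inlining local JSON Schema `$ref` references can be sketched as a small recursive walk. This toy resolver handles only local `#/...` pointers and does no cycle detection, unlike a production implementation:

```python
def resolve_refs(schema, root=None):
    """Toy resolver: inline local '#/...' references by JSON-Pointer lookup."""
    root = root if root is not None else schema
    if isinstance(schema, dict):
        ref = schema.get("$ref")
        if isinstance(ref, str) and ref.startswith("#/"):
            # Follow each pointer segment down from the document root.
            target = root
            for part in ref[2:].split("/"):
                target = target[part]
            return resolve_refs(target, root)
        return {k: resolve_refs(v, root) for k, v in schema.items()}
    if isinstance(schema, list):
        return [resolve_refs(v, root) for v in schema]
    return schema
```

Inlining like this is useful when a downstream consumer (e.g. a function-calling API) does not understand `$ref` indirection.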
An individual iterator of a tee.
Utility batching function for async iterables.
Convert a raw function/class to an OpenAI function.
Convert a tool-like object to an OpenAI tool schema.
Convert a schema representation to a JSON schema.
Convert an example into a list of messages that can be fed into an LLM.
Determine if running within IPython or Jupyter.
Stringify a value.
Stringify a dictionary.
Convert an iterable to a comma-separated string.
Sanitize text by removing NUL bytes that are incompatible with PostgreSQL.
Check if the given class is Pydantic v1-like.
Check if the given class is Pydantic v2-like.
Check if the given class is a subclass of Pydantic BaseModel.
Check if the given class is an instance of Pydantic BaseModel.
Decorator to run a function before model initialization.
Return the field names of a Pydantic model.
Create a Pydantic model with the given field definitions.
Create a Pydantic model with the given field definitions.
Parse a JSON string that may be missing closing braces.
Parse a JSON string from a Markdown string.
Parse and check a JSON string from a Markdown string.
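Recovering a truncated JSON string, as happens mid-stream, can be sketched by tracking unclosed braces and brackets and appending the missing closers before parsing. This is an illustrative simplification, not the real parser, and it ignores edge cases such as trailing commas:

```python
import json


def parse_partial_json(s: str):
    """Toy sketch: close unbalanced strings/braces/brackets, then parse."""
    stack = []       # closers still owed, innermost last
    in_string = False
    escape = False
    for ch in s:
        if in_string:
            if escape:
                escape = False
            elif ch == "\\":
                escape = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]" and stack:
            stack.pop()
    if in_string:
        s += '"'
    return json.loads(s + "".join(reversed(stack)))
```

This is the trick that lets a streaming JSON parser emit a best-effort object after every token rather than waiting for the closing brace.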
Get mapping for items to a supported color.
Get colored text.
Get bolded text.
Print text with highlighting and no end characters.
An individual iterator of a tee.
Utility batching function.
Extract all links from a raw HTML string.
Extract all links from a raw HTML string and convert into absolute paths.
Parse a literal from the template.
Do a preliminary check to see if a tag could be a standalone.
Do a final check to see if a tag could be a standalone.
Parse a tag from a template.
Tokenize a mustache template.
Render a mustache template.
Generate a UUID from a Unix timestamp in nanoseconds and random bits.
Check if a string is a valid UUID.
Convert the data of a node to a string.
Convert the data of a node to a JSON-serializable format.
Identity function.
Async identity function.
Prefix the id of a ConfigurableFieldSpec.
Make options spec.
Run a coroutine with a semaphore.
Gather coroutines with a limit on the number of concurrent coroutines.
Check if a callable accepts a run_manager argument.
Check if a callable accepts a config argument.
Check if a callable accepts a context argument.
Check if asyncio.create_task accepts a context arg.
Await a coroutine with a context.
Get the keys of the first argument of a function if it is a dict.
Get the source code of a lambda function.
Get the nonlocal variables accessed by a function.
Indent all lines of text after the first line.
Add a sequence of addable objects together.
Asynchronously add a sequence of addable objects together.
Get the unique config specs from a sequence of config specs.
Check if a function is an async generator.
Check if a function is async.
Draws a Mermaid graph using the provided graph data.
Draws a Mermaid graph as PNG using provided syntax.
Build a DAG and draw it in ASCII.
Set the child Runnable config + tracing context.
Ensure that a config is a dict with all keys present.
Get a list of configs from a single config or a list of configs.
Patch a config with new values.
Merge multiple configs into one.
Call function that may optionally accept a run_manager and/or config.
Async call function that may optionally accept a run_manager and/or config.
Get a callback manager for a config.
Get an async callback manager for a config.
Get an executor for a config.
Run a function in an executor.
Coerce a Runnable-like object into a Runnable.
Decorate a function to make it a Runnable.
Index data from the loader into the vector store.
Async index data from the loader into the vector store.
Default init validator that blocks jinja2 templates.
Revive a LangChain class from a JSON string.
Revive a LangChain class from a JSON object.
Return a default value for an object.
Return a JSON string representation of an object.
Return a dict representation of an object.
Try to determine if a value is different from the default.
Serialize a "not implemented" object.
Add multiple AIMessageChunks together.
Recursively add two UsageMetadata objects.
Recursively subtract two UsageMetadata objects.
Convert a sequence of messages to strings and concatenate them into one string.
Convert a sequence of messages from dicts to Message objects.
Convert a message chunk to a Message.
Convert a sequence of messages to a list of messages.
Filter messages based on name, type or id.
Merge consecutive Messages of the same type.
Trim messages to be below a token count.
Convert LangChain messages into OpenAI message dicts.
Approximate the total number of tokens in messages.
Create a tool call.
Create a tool call chunk.
Create an invalid tool call.
Best-effort parsing of tools.
Best-effort parsing of tool chunks.
Check if the provided content block is a data content block.
Create a TextContentBlock.
Create an ImageContentBlock.
Create a VideoContentBlock.
Create an AudioContentBlock.
Create a FileContentBlock.
Create a PlainTextContentBlock.
Create a ToolCall.
Create a ReasoningContentBlock.
Create a Citation.
Create a NonStandardContentBlock.
Merge multiple message contents.
Convert a Message to a dictionary.
Convert a sequence of Messages to a list of dictionaries.
Get a title representation for a message.
Register content translators for a provider in PROVIDER_TRANSLATORS.
Get the translator functions for a provider.
Convert ImageContentBlock to format expected by OpenAI Chat Completions.
Format standard data content block to format expected by OpenAI.
Derive standard content blocks from a message with OpenAI content.
Derive standard content blocks from a message chunk with OpenAI content.
Translate Google AI grounding metadata to LangChain Citations.
Derive standard content blocks from a message with Google (GenAI) content.
Derive standard content blocks from a chunk with Google (GenAI) content.
Derive standard content blocks from a message with Bedrock Converse content.
Derive standard content blocks from a chunk with Bedrock Converse content.
Derive standard content blocks from a message with Bedrock content.
Derive standard content blocks from a message chunk with Bedrock content.
Derive standard content blocks from a message with Groq content.
Derive standard content blocks from a message chunk with Groq content.
Derive standard content blocks from a message with Anthropic content.
Derive standard content blocks from a message chunk with Anthropic content.
Drop the last n elements of an iterator.
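Dropping the last n elements of an iterator without materializing it can be sketched with a small buffer: each item is only yielded once n newer items have arrived. The function name here is illustrative:

```python
from collections import deque


def drop_last_n(iterable, n):
    """Yield all but the final n items, using O(n) memory."""
    if n <= 0:
        yield from iterable
        return
    buf = deque()
    for item in iterable:
        buf.append(item)
        if len(buf) > n:
            # The oldest buffered item is now guaranteed not to be
            # among the last n, so it is safe to emit.
            yield buf.popleft()
```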
Parse a single tool call.
Create an InvalidToolCall from a raw tool call.
Parse a list of tool calls.
Get nested element from path.
Convert Python functions and Runnables to LangChain tools.
Convert a Runnable into a BaseTool.
Create a tool to do retrieval of documents.
Render the tool name and description in plain text.
Render the tool name, description, and args in plain text.
Create a Pydantic schema from a function's signature.
Get all annotations from a Pydantic BaseModel and its parents.
Check if an IP address is in a private range.
Check if hostname or IP is a cloud metadata endpoint.
Check if hostname or IP is localhost.
Validate a URL for SSRF protection.
Check if a URL is safe (non-throwing version of validate_safe_url).
Instruct LangChain to log all runs in context to LangSmith.
Collect all run traces in context.
Register a configure hook.
Wait for all tracers to finish.
Try to stringify an object to JSON.
Get the elapsed time of a run.
Log an error once.
Wait for all tracers to finish.
Get the client.
Convert run to dict, compatible with both Pydantic v1 and v2.
Copy run, compatible with both Pydantic v1 and v2.
Construct run without validation, compatible with both Pydantic v1 and v2.
Convert any Pydantic model to dict, compatible with both v1 and v2.
Copy any Pydantic model, compatible with both v1 and v2.
Return a list of values in dict sorted by key.
Load prompt from config dict.
Unified method for loading a prompt from LangChainHub or the local filesystem.
Format a template using jinja2.
Validate that the input variables are valid for the template.
Format a template using mustache.
Get the top-level variables from a mustache template.
Get the variables from a mustache template.
Check that template string is valid.
Get the variables from the template.
Return True if child is a subsequence of parent.
Format a document into a string based on a prompt template.
Async format a document into a string based on a prompt template.
Get a callback manager for a chain group in a context manager.
Get an async callback manager for a chain group in a context manager.
Ensure an awaitable method is always shielded from cancellation.
Generic event handler for CallbackManager.
Async generic event handler for AsyncCallbackManager.
Dispatch an adhoc event to the handlers.
Dispatch an adhoc event.
Get usage metadata callback.
Calculate maximal marginal relevance.
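Maximal marginal relevance greedily picks the candidate maximizing λ·sim(query, doc) − (1 − λ)·max sim(doc, already selected), trading relevance against redundancy. A pure-Python sketch (the real implementation operates on embedding matrices; the function names here are illustrative):

```python
from math import sqrt


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))


def mmr(query, candidates, k=2, lambda_mult=0.5):
    """Greedily select k candidate indices by maximal marginal relevance."""
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in remaining:
            relevance = cosine(query, candidates[i])
            # Penalty: similarity to the closest already-selected item.
            redundancy = max(
                (cosine(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

With λ near 1 the selection approaches plain nearest-neighbor search; lowering λ increasingly favors diversity among the results.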
Decorator to mark a function, a class, or a property as deprecated.
Context manager to suppress LangChainDeprecationWarning.
Display a standardized deprecation warning.
Unmute LangChain deprecation warnings.
Decorator indicating that parameter old of func is renamed to new.
Decorator to mark a function, a class, or a property as beta.
Context manager to suppress LangChainBetaWarning.
Display a standardized beta annotation.
Unmute LangChain beta warnings.
Return whether the caller at depth of this function is internal.
Get the path of the file as a relative path to the package directory.
Get the path of the file as a LangChain import path, excluding the top-level langchain namespace.
Create a retry decorator for a given LLM and a provided list of error types.
Get prompts that are already cached.
Get prompts that are already cached. Async version.
Update the cache and get the LLM output.
Update the cache and get the LLM output. Async version.
Check whether a block contains multimodal data in OpenAI Chat Completions format.
Generate from a stream.
Async generate from a stream.
Get a GPT-2 tokenizer instance.
Pure-Python implementation of anext() for testing purposes.
DEPRECATED - Get the major version of Pydantic.
langchain-core defines the base abstractions for the LangChain ecosystem.
Print information about the system and langchain packages for debugging purposes.
Internal representation of a structured query language.
Interface for a rate limiter and an in-memory rate limiter.
Prompt values for language model prompts.
langchain-core version information and utilities.
Utilities for getting information about the runtime environment.
Chat Sessions are a collection of messages and function calls.
Schema definitions for representing agent actions, observations, and return values.
Custom exceptions for LangChain.
Optional caching layer for language models.
Chat message history stores a history of the message interactions in a chat.
Retriever class returns Document objects given a text query.
Global values and configuration that apply to all of LangChain.
Chat loaders.
Store implements the key-value stores and storage helpers.
Document loaders.
LangSmith document loader.
Schema for Blobs and Blob Loaders.
Abstract interface for document loader implementations.
Output classes.
Generation output schema.
LLMResult class.
RunInfo class.
Chat generation output classes.
Chat result schema.
Utility functions for LangChain.
Utilities for formatting strings.
Generic utility functions.
Utilities for environment variables.
Utilities for JSON Schema.
Asynchronous iterator utilities.
Methods for creating function specs in the style of OpenAI Functions.
Utilities for working with interactive environments.
String utilities.
Utilities for pydantic.
Utilities for JSON.
Utilities for image processing.
Handle chained inputs.
Usage utilities.
Utilities for working with iterators.
Utilities for working with HTML.
Adapted from https://github.com/noahmorrison/chevron.
UUID utility functions.
LangChain Runnable and the LangChain Expression Language (LCEL).
Graph used in Runnable objects.
Runnable that retries a Runnable if it fails.
Implementation of the RunnablePassthrough.
Runnable objects that can be dynamically configured.
Utility code for Runnable objects.
Runnable that selects which branch to run based on a condition.
Runnable that can fallback to other Runnable objects if it fails.
Runnable that manages chat message history for another Runnable.
Mermaid graph drawing utilities.
Draws DAG in ASCII.
Helper class to draw a state graph into a PNG file.
Runnable that routes to a set of Runnable objects.
Configuration utilities for Runnable objects.
Module contains typedefs that are used with Runnable objects.
Base classes and utilities for Runnable objects.
Embeddings.
Module contains a few fake embedding models for testing purposes.
Embeddings interface.
Code to help indexing data into a vectorstore.
In-memory document index.
Module contains logic for indexing documents into vector stores.
Base classes for indexing.
Documents module for data retrieval and processing workflows.
Document compressor.
Document transformers.
Base classes for media and documents.
Load module helps with serialization and deserialization.
Load LangChain objects from JSON strings or objects.
Serialization mapping.
Serialize LangChain objects to JSON.
Serializable base class.
Messages are objects used in prompts and chat conversations.
AI message.
Module contains utility functions for working with messages.
Messages for tools.
Function Message.
Chat Message.
Human message.
Message responsible for deleting other messages.
Standard, multimodal content blocks for Large Language Model I/O.
System message.
Base message.
Derivations of standard content blocks from provider content.
Derivations of standard content blocks from Google (VertexAI) content.
Derivations of standard content blocks from OpenAI content.
Derivations of standard content blocks from Google (GenAI) content.
Derivations of standard content blocks from Amazon (Bedrock Converse) content.
Derivations of standard content blocks from Bedrock content.
Derivations of standard content blocks from Groq content.
Derivations of standard content blocks from Anthropic content.
Derivations of standard content blocks from LangChain v0 multimodal content.
OutputParser classes parse the output of an LLM call into structured data.
Parsers for list output.
Base classes for output parsers that can handle streaming input.
Format instructions.
String output parser.
Parsers for OpenAI functions output.
Output parsers using Pydantic.
Parse tools for OpenAI tools output.
Parser for JSON output.
Output parser for XML format.
Base parser for language model outputs.
Tools are classes that an Agent uses to interact with the world.
Tool that takes in a function or coroutine directly.
Convert functions and runnables to tools.
Retriever tool.
Structured tool.
Utilities to render tools.
Base classes and utilities for LangChain tools.
Tracers are classes for tracing runs.
Context management for tracers.
Module implements a memory stream for communication between two coroutines.
Tracers that call listeners.
Internal tracer to power the event stream API.
A tracer that collects all nested runs in a list.
A tracer that runs evaluators over completed runs.
Tracers that print to the console.
A tracer implementation that records to LangChain endpoint.
Schemas for tracers.
Utilities for the root listener.
Tracer that streams run logs to a stream.
Base interfaces for tracing runs.
Example selectors.
Select examples based on length.
Example selector that selects examples based on SemanticSimilarity.
Interface for selecting examples to include in prompts.
A prompt is the input to the model.
Dictionary prompt template.
Load prompts.
Chat prompt template.
Structured prompt template for a language model.
Prompt schema definition.
BasePrompt schema definition.
Prompt template that contains few shot examples.
Message prompt templates.
Image prompt template for a multimodal model.
Prompt template that contains few shot examples.
Base class for prompt templates.
Callback handlers allow listening to events in LangChain.
Callback handler that streams to stdout on new LLM tokens.
Callback handler that writes to a file.
Callback handler that prints to stdout.
Run managers.
Callback Handler that tracks AIMessage.usage_metadata.
Base callback handler for LangChain.
Vector stores.
In-memory vector store.
Internal utilities for the in-memory implementation of VectorStore.
A vector store stores embedded data and performs vector search.
Helper functions for deprecating parts of the LangChain API.
Helper functions for marking parts of the LangChain API as beta.
Core language model abstractions.
Fake chat models for testing purposes.
Base interface for traditional large language models (LLMs).
Fake LLMs for testing purposes.
Chat models for conversational AI.
Model profile types and utilities.
Base language models class.
A type representing the various ways a message can be represented.
A union of all defined Annotation types.
A union of all defined multimodal data ContentBlock types.
A union of all defined ContentBlock types and aliases.
Input to a language model.
Output from a language model.