langchain-core defines the base abstractions for the LangChain ecosystem.
The interfaces for core components like chat models, LLMs, vector stores, retrievers, and more are defined here. The universal invocation protocol (Runnables), along with a syntax for combining components, is also defined here.
No third-party integrations are defined here. The dependencies are kept purposefully very lightweight.
Print information about the system and langchain packages for debugging purposes.
Internal representation of a structured query language.
Interface for a rate limiter and an in-memory rate limiter.
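A minimal sketch of the in-memory implementation (the timing parameters here are arbitrary example values):

```python
from langchain_core.rate_limiters import InMemoryRateLimiter

# Allow roughly one request every two seconds, polling for an
# available token every 100 ms, with no bursting (bucket size 1).
rate_limiter = InMemoryRateLimiter(
    requests_per_second=0.5,
    check_every_n_seconds=0.1,
    max_bucket_size=1,
)

# Blocks until a token is available. Chat models can also accept
# this object directly via their `rate_limiter` parameter.
rate_limiter.acquire()
```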
Prompt values for language model prompts.
Prompt values are used to represent different pieces of prompts. They can be used to represent text, images, or chat message pieces.
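For illustration, a minimal sketch with StringPromptValue, which can be rendered either as plain text or as a list of messages:

```python
from langchain_core.prompt_values import StringPromptValue

value = StringPromptValue(text="Tell me a joke")
value.to_string()    # "Tell me a joke"
value.to_messages()  # [HumanMessage(content="Tell me a joke")]
```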
langchain-core version information and utilities.
Utilities for getting information about the runtime environment.
Chat Sessions are a collection of messages and function calls.
Schema definitions for representing agent actions, observations, and return values.
The schema definitions are provided for backwards compatibility.
New agents should be built using the langchain library, which provides a simpler and more flexible way to define agents.
See docs on building agents.
Agents use language models to choose a sequence of actions to take.
A basic agent works in the following manner:

1. Given a prompt, an agent uses an LLM to request an action to take (e.g. a tool to run).
2. The agent executes the action (e.g. runs the tool) and receives an observation.
3. The agent returns the observation to the LLM, which can then be used to generate the next action.
4. When the agent reaches a stopping condition, it returns a final return value.
The schemas for the agents themselves are defined in langchain.agents.agent.
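For illustration, a minimal sketch of the two central schema objects (the tool name and inputs are made up):

```python
from langchain_core.agents import AgentAction, AgentFinish

# An action chosen by the agent: which tool to run, and with what input.
action = AgentAction(
    tool="search",
    tool_input="weather in SF",
    log="I should look up the current weather.",
)

# The final return value once the agent reaches a stopping condition.
finish = AgentFinish(
    return_values={"output": "It is sunny in SF."},
    log="I now know the final answer.",
)
```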
Custom exceptions for LangChain.
Optional caching layer for language models.
Distinct from provider-based prompt caching.
This is a beta feature. Please be wary of deploying experimental code to production unless you've taken appropriate precautions.
A cache is useful for two reasons:

- It can save you money by reducing the number of API calls you make to the LLM provider if you're often requesting the same completion multiple times.
- It can speed up your application by reducing the number of API calls you make to the LLM provider.
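A minimal sketch of enabling the in-memory cache globally (set_llm_cache lives in langchain_core.globals):

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Cache model responses in process memory; a repeated identical
# prompt is served from the cache instead of hitting the provider.
set_llm_cache(InMemoryCache())
```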
Chat message history stores a history of the message interactions in a chat.
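A minimal sketch using the built-in in-memory implementation:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory

history = InMemoryChatMessageHistory()
history.add_user_message("Hi!")
history.add_ai_message("Hello! How can I help?")
history.messages  # [HumanMessage("Hi!"), AIMessage("Hello! How can I help?")]
```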
Retriever class returns Document objects given a text query.
It is more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well.
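A minimal sketch of a custom retriever (ToyRetriever is a made-up example that matches on a substring):

```python
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToyRetriever(BaseRetriever):
    """Returns documents whose content contains the query string."""

    documents: list[Document]

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> list[Document]:
        return [d for d in self.documents if query.lower() in d.page_content.lower()]


retriever = ToyRetriever(documents=[Document(page_content="Cats purr.")])
retriever.invoke("cats")  # [Document(page_content="Cats purr.")]
```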
Global values and configuration that apply to all of LangChain.
Chat loaders.
The stores module implements key-value stores and storage helpers. It provides implementations of various key-value stores that conform to a simple key-value interface. The primary goal of these stores is to support the implementation of caching.
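A minimal sketch of the key-value interface using the in-memory store:

```python
from langchain_core.stores import InMemoryStore

store = InMemoryStore()
store.mset([("k1", "v1"), ("k2", "v2")])   # set several pairs at once
store.mget(["k1", "k2"])                   # ['v1', 'v2']
list(store.yield_keys(prefix="k"))         # ['k1', 'k2']
store.mdelete(["k1"])                      # remove keys
```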
Document loaders.
Output classes.
Used to represent the output of a language model call and the output of a chat.
The top container for information is the LLMResult object. LLMResult is used by both
chat models and LLMs. This object contains the output of the language model and any
additional information that the model provider wants to return.
When invoking models via the standard runnable methods (e.g. invoke, batch, etc.):

- Chat models will return AIMessage objects.
- LLMs will return regular text strings.

In addition, users can access the raw output of either LLMs or chat models via
callbacks. The on_chat_model_end and on_llm_end callbacks will return an LLMResult
object containing the generated outputs and any additional information returned by the
model provider.
In general, if information is already available in the AIMessage object, it is
recommended to access it from there rather than from the LLMResult object.
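A minimal sketch of reading the raw LLMResult from a callback handler (UsageLogger is a made-up name):

```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class UsageLogger(BaseCallbackHandler):
    """Prints the raw model output when a call finishes."""

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        for generations in response.generations:
            for generation in generations:
                print(generation.text)
        # Provider-specific extras, e.g. token usage, if supplied.
        print(response.llm_output)


# Usage: model.invoke("Hi", config={"callbacks": [UsageLogger()]})
```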
Utility functions for LangChain.
These functions do not depend on any other LangChain module.
LangChain Runnable and the LangChain Expression Language (LCEL).
The LangChain Expression Language (LCEL) offers a declarative method to build production-grade programs that harness the power of LLMs.
Programs created using LCEL and LangChain Runnable objects inherently support synchronous, asynchronous, batch, and streaming operations.
Support for async allows servers hosting LCEL-based programs to scale better under higher concurrent loads.
Batch operations allow for processing multiple inputs in parallel.
Streaming of intermediate outputs, as they're being generated, allows for creating more responsive UX.
This module contains the schema and implementation of LangChain Runnable primitives.
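A minimal, self-contained sketch of LCEL composition (no model required):

```python
from langchain_core.runnables import RunnableLambda

# Composing two Runnables with `|` yields another Runnable that
# automatically supports invoke, batch, stream, and async variants.
chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)

chain.invoke(1)      # 4
chain.batch([1, 2])  # [4, 6]
```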
Embeddings.
Code to help index data into a vector store.
This package contains helper logic for indexing data into a VectorStore while avoiding duplicate content and avoiding re-writing content that hasn't changed.
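A rough sketch of the indexing API, assuming the in-memory record manager, vector store, and fake embeddings that ship with langchain-core (the document contents are made up):

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, index
from langchain_core.vectorstores import InMemoryVectorStore

vector_store = InMemoryVectorStore(DeterministicFakeEmbedding(size=16))
record_manager = InMemoryRecordManager(namespace="demo")
record_manager.create_schema()

docs = [Document(page_content="hello", metadata={"source": "a.txt"})]

# The first run indexes the document; an identical second run
# detects the unchanged content and skips it.
index(docs, record_manager, vector_store, cleanup=None, source_id_key="source")
```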
Documents module for data retrieval and processing workflows.
This module provides core abstractions for handling data in retrieval-augmented generation (RAG) pipelines, vector stores, and document processing workflows.
This module is distinct from langchain_core.messages.content, which provides
multimodal content blocks for LLM chat I/O (text, images, audio, etc. within
messages).
Key distinction:

- Documents (this module): for data retrieval and processing workflows.
- Content blocks (messages.content, e.g. TextContentBlock, ImageContentBlock): for LLM conversational I/O.

While both can represent similar data types (text, files), they serve different architectural purposes in LangChain applications.
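A minimal sketch of the core Document abstraction:

```python
from langchain_core.documents import Document

doc = Document(
    page_content="LangChain is a framework for building LLM applications.",
    metadata={"source": "intro.txt", "page": 1},
)
```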
Load module helps with serialization and deserialization.
Messages are objects used in prompts and chat conversations.
OutputParser classes parse the output of an LLM call into structured data.
Output parsers emerged as an early solution to the challenge of obtaining structured output from LLMs.
Today, most LLMs support structured output natively. In such cases, using output parsers may be unnecessary, and you should leverage the model's built-in capabilities for structured output. Refer to the documentation of your chosen model for guidance on how to achieve structured output directly.
Output parsers remain valuable when working with models that do not support structured output natively, or when you require additional processing or validation of the model's output beyond its inherent capabilities.
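For illustration, a minimal sketch using one of the simple built-in parsers:

```python
from langchain_core.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
parser.parse("red, green, blue")  # ['red', 'green', 'blue']

# Parsers are Runnables, so they compose in chains, e.g.:
# chain = prompt | model | parser
```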
Tools are classes that an Agent uses to interact with the world.
Each tool has a description. The agent uses the description to choose the right tool for the job.
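A minimal sketch of defining a tool with the @tool decorator (the multiply function is a made-up example):

```python
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


multiply.name                      # 'multiply'
multiply.description               # 'Multiply two integers.'
multiply.invoke({"a": 3, "b": 4})  # 12
```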
Tracers are classes for tracing runs.
Example selectors.
Example selectors implement logic for selecting examples to include in prompts. This allows us to select the examples that are most relevant to the input.
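A minimal sketch using the length-based selector (the example data and budget are made up):

```python
from langchain_core.example_selectors import LengthBasedExampleSelector
from langchain_core.prompts import PromptTemplate

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
example_prompt = PromptTemplate.from_template("Input: {input}\nOutput: {output}")

# Selects as many examples as fit under the word-length budget.
selector = LengthBasedExampleSelector(
    examples=examples, example_prompt=example_prompt, max_length=10
)
selector.select_examples({"input": "energetic"})
```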
A prompt is the input to the model.
A prompt is often constructed from multiple components and prompt values. Prompt classes and functions make constructing and working with prompts easy.
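A minimal sketch of building and formatting a chat prompt:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that answers in {language}."),
    ("human", "{question}"),
])

# Produces a ChatPromptValue ready to pass to any chat model.
prompt.invoke({"language": "French", "question": "What is LangChain?"})
```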
Callback handlers allow listening to events in LangChain.
Vector stores.
Core language model abstractions.
LangChain has two main classes to work with language models: chat models and "old-fashioned" LLMs (string-in, string-out).
Chat models
Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text).
Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.
The key abstraction for chat models is BaseChatModel. Implementations should inherit from this class.
See existing chat model integrations.
LLMs (legacy)
Language models that take a string as input and return a string.
These are traditionally older models (newer models generally are chat models).
Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input. This gives them the same interface as chat models. When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.
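A minimal sketch of this shared interface, using the fake models that ship with langchain-core for illustration:

```python
from langchain_core.language_models import FakeListChatModel, FakeListLLM
from langchain_core.messages import HumanMessage

messages = [HumanMessage(content="Hi!")]

# A chat model returns an AIMessage...
FakeListChatModel(responses=["Hello!"]).invoke(messages)

# ...while an LLM formats the messages into a string under the hood
# and returns a plain string.
FakeListLLM(responses=["Hello!"]).invoke(messages)
```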