LangChain Reference
    langgraph.store.postgres.base

    Module · Since v1.0

    Attributes

    • logger
    • MIGRATIONS: Sequence[str]
    • VECTOR_MIGRATIONS: Sequence[Migration]
    • C
    • PLACEHOLDER

    Functions

    get_distance_operator

    Get the distance operator and score expression based on the index configuration.
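The mapping such a helper produces follows pgvector's standard distance operators. A minimal sketch (the distance-type names used as keys here are illustrative, not necessarily the exact strings the module accepts):

```python
# pgvector's distance operators, keyed by an illustrative distance-type name.
PGVECTOR_OPS = {
    "l2": "<->",             # Euclidean (L2) distance
    "cosine": "<=>",         # cosine distance
    "inner_product": "<#>",  # negative inner product
}

def distance_operator(distance_type: str) -> str:
    """Return the pgvector SQL operator for a given distance type."""
    return PGVECTOR_OPS[distance_type]
```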

    Classes

    Migration

    A database migration with optional conditions and parameters.

    PoolConfig

    Connection pool settings for PostgreSQL connections.

    Controls connection lifecycle and resource utilization:

    • Small pools (1-5) suit low-concurrency workloads
    • Larger pools handle concurrent requests but consume more resources
    • Setting max_size prevents resource exhaustion under load
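Since PoolConfig is a TypedDict, a plain dict with the same keys works. A minimal sketch of the sizing guidance above (whether `from_conn_string` takes this under a `pool_config` keyword is an assumption here):

```python
# Pool sizing sketch following the guidance above: keep a small floor,
# cap the ceiling to avoid resource exhaustion under load.
pool_config = {
    "min_size": 1,   # one warm connection for low-traffic periods
    "max_size": 10,  # upper bound on concurrent connections
}

# Hypothetical usage; the keyword name is an assumption:
# store = PostgresStore.from_conn_string(conn_string, pool_config=pool_config)
```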

    ANNIndexConfig

    Configuration for vector index in PostgreSQL store.

    HNSWConfig

    Configuration for HNSW (Hierarchical Navigable Small World) index.
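For orientation, pgvector's HNSW index has two build-time parameters: `m` (graph connectivity) and `ef_construction` (build-time candidate list size). A sketch using pgvector's defaults; whether HNSWConfig uses these exact key names is an assumption:

```python
# Illustrative HNSW settings using pgvector's parameter names and defaults.
hnsw_config = {
    "kind": "hnsw",
    "m": 16,                # higher -> better recall, more memory
    "ef_construction": 64,  # higher -> better index quality, slower build
}
```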

    IVFFlatConfig

    IVFFlat index divides vectors into lists, and then searches a subset of those lists that are closest to the query vector. It has faster build times and uses less memory than HNSW, but has lower query performance (in terms of speed-recall tradeoff).

    Three keys to achieving good recall are:

    1. Create the index after the table has some data
    2. Choose an appropriate number of lists - a good place to start is rows / 1000 for up to 1M rows and sqrt(rows) for over 1M rows
    3. When querying, specify an appropriate number of probes (higher is better for recall, lower is better for speed) - a good place to start is sqrt(lists)
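The sizing heuristics in steps 2 and 3 can be sketched as plain functions (the function names are illustrative, not part of this module):

```python
from math import isqrt

def recommended_lists(rows: int) -> int:
    """Starting point for `lists`: rows / 1000 up to 1M rows, sqrt(rows) beyond."""
    return max(1, rows // 1000) if rows <= 1_000_000 else isqrt(rows)

def recommended_probes(lists: int) -> int:
    """Starting point for query-time `probes`: sqrt(lists)."""
    return max(1, isqrt(lists))
```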

    PostgresIndexConfig

    Configuration for vector embeddings in PostgreSQL store with pgvector-specific options.

    Extends EmbeddingConfig with additional configuration for pgvector index and vector types.
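A sketch of a full index configuration: the `dims`, `embed`, and `fields` keys appear in the examples below; the `distance_type` and `ann_index_config` keys are assumptions about this class's pgvector-specific extensions, and the embedding function here is a stand-in for a real one:

```python
def fake_embed(texts: list[str]) -> list[list[float]]:
    # Placeholder for a real embedding function, for illustration only.
    return [[0.0] * 1536 for _ in texts]

index_config = {
    "dims": 1536,           # embedding dimensionality
    "embed": fake_embed,    # embedding function (or a LangChain Embeddings object)
    "fields": ["text"],     # which fields of the stored value to embed
    # Assumed pgvector-specific keys:
    "distance_type": "cosine",
    "ann_index_config": {"kind": "hnsw"},
}
```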

    BasePostgresStore

    PostgresStore

    Postgres-backed store with optional vector search using pgvector.

    Examples

    Basic setup and usage:

    from langgraph.store.postgres import PostgresStore
    from psycopg import Connection
    
    conn_string = "postgresql://user:pass@localhost:5432/dbname"
    
    # Using direct connection
    with Connection.connect(conn_string) as conn:
        store = PostgresStore(conn)
        store.setup() # Run migrations. Done once
    
        # Store and retrieve data
        store.put(("users", "123"), "prefs", {"theme": "dark"})
        item = store.get(("users", "123"), "prefs")

    Or using the convenient from_conn_string helper:

    from langgraph.store.postgres import PostgresStore
    
    conn_string = "postgresql://user:pass@localhost:5432/dbname"
    
    with PostgresStore.from_conn_string(conn_string) as store:
        store.setup()
    
        # Store and retrieve data
        store.put(("users", "123"), "prefs", {"theme": "dark"})
        item = store.get(("users", "123"), "prefs")

    Vector search using LangChain embeddings:

    from langchain.embeddings import init_embeddings
    from langgraph.store.postgres import PostgresStore
    
    conn_string = "postgresql://user:pass@localhost:5432/dbname"
    
    with PostgresStore.from_conn_string(
        conn_string,
        index={
            "dims": 1536,
            "embed": init_embeddings("openai:text-embedding-3-small"),
            "fields": ["text"]  # specify which fields to embed. Default is the whole serialized value
        }
    ) as store:
        store.setup() # Do this once to run migrations
    
        # Store documents
        store.put(("docs",), "doc1", {"text": "Python tutorial"})
        store.put(("docs",), "doc2", {"text": "TypeScript guide"})
        store.put(("docs",), "doc3", {"text": "Other guide"}, index=False) # don't index
    
        # Search by similarity
        results = store.search(("docs",), query="programming guides", limit=2)
    Row