LangChain Reference
    Python › langchain-core › document_loaders › langsmith
    Module · Since v0.2

    langsmith

    LangSmith document loader.

    Functions

    function
    pydantic_to_dict

    Convert any Pydantic model to a dict, compatible with both Pydantic v1 and v2.
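
    The v1/v2 compatibility can be handled by checking which export method the model exposes. A minimal sketch of that pattern follows; the FakeV1Model stand-in is hypothetical and only included so the sketch runs without Pydantic installed.

    ```python
    from typing import Any


    def pydantic_to_dict(model: Any) -> dict:
        """Return a plain dict from a Pydantic model, whichever major version it uses."""
        # Pydantic v2 models expose model_dump(); v1 models expose dict().
        if hasattr(model, "model_dump"):
            return model.model_dump()
        return model.dict()


    # Hypothetical stand-in mimicking a Pydantic v1 model (illustration only).
    class FakeV1Model:
        def dict(self) -> dict:
            return {"name": "example", "score": 1}


    print(pydantic_to_dict(FakeV1Model()))  # {'name': 'example', 'score': 1}
    ```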

    Classes

    class
    BaseLoader

    Interface for document loaders.

    Implementations should implement the lazy-loading method using generators to avoid loading all documents into memory at once.

    load is provided just for user convenience and should not be overridden.
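
    A minimal illustration of that contract, assuming a subclass that reads from an in-memory list. The Document and LineLoader classes below are stand-ins written for this sketch (not the library's own), so it runs without langchain-core installed.

    ```python
    from typing import Iterator


    class Document:
        """Stand-in for langchain_core.documents.Document (illustration only)."""

        def __init__(self, page_content: str, metadata: dict | None = None):
            self.page_content = page_content
            self.metadata = metadata or {}


    class LineLoader:
        """Example loader following the BaseLoader contract: implement
        lazy_load() as a generator; load() simply materializes it."""

        def __init__(self, lines: list[str]):
            self.lines = lines

        def lazy_load(self) -> Iterator[Document]:
            # Yield one Document at a time rather than building the full list,
            # so callers can stop early without loading everything into memory.
            for i, line in enumerate(self.lines):
                yield Document(line, metadata={"line": i})

        def load(self) -> list[Document]:
            # In the real BaseLoader this convenience method is inherited,
            # not overridden; it just exhausts lazy_load().
            return list(self.lazy_load())
    ```

    Calling LineLoader(["a", "b"]).load() returns two Document objects whose metadata records each line number, while lazy_load() yields them one at a time.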

    class
    Document

    Class for storing a piece of text and associated metadata.

    Note

    Document is for retrieval workflows, not chat I/O. For sending text to an LLM in a conversation, use message types from langchain.messages.

    class
    LangSmithLoader

    Load LangSmith Dataset examples as Document objects.

    Loads the example inputs as the Document page content and places the entire example into the Document metadata. This allows you to easily create few-shot example retrievers from the loaded documents.

    Lazy loading
    from langchain_core.document_loaders import LangSmithLoader
    
    loader = LangSmithLoader(dataset_id="...", limit=100)
    docs = []
    for doc in loader.lazy_load():
        docs.append(doc)
    # -> [Document("...", metadata={"inputs": {...}, "outputs": {...}, ...}), ...]