langchain.js

    Interface OllamaEmbeddingsParams

    Interface for OllamaEmbeddings parameters. Extends EmbeddingsParams and defines additional parameters specific to the OllamaEmbeddings class.

    interface OllamaEmbeddingsParams {
        baseUrl?: string;
        fetch?: {
            (input: RequestInfo | URL, init?: RequestInit): Promise<Response>;
            (input: string | Request | URL, init?: RequestInit): Promise<Response>;
        };
        headers?: Headers | Record<string, string>;
        keepAlive?: string | number;
        model?: string;
        requestOptions?: OllamaCamelCaseOptions & Partial<Options>;
        truncate?: boolean;
    }
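
    A minimal sketch of a parameter object satisfying this interface, using the documented default values (the exact constructor usage depends on the OllamaEmbeddings class, not shown here):

    ```typescript
    // Sketch: a plain object matching OllamaEmbeddingsParams, suitable for
    // passing to the OllamaEmbeddings constructor.
    const embeddingsParams = {
      model: "mxbai-embed-large",        // documented default model
      baseUrl: "http://localhost:11434", // documented default server URL
      keepAlive: "5m",                   // documented default keep-alive
      truncate: true,                    // opt in to truncating long inputs
    };
    ```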


    Properties

    baseUrl?: string

    Base URL of the Ollama server

    "http://localhost:11434"
    
    fetch?: {
        (input: RequestInfo | URL, init?: RequestInit): Promise<Response>;
        (input: string | Request | URL, init?: RequestInit): Promise<Response>;
    }

    The fetch function to use.

    Type Declaration

      • (input: RequestInfo | URL, init?: RequestInit): Promise<Response>
      • Parameters

        • input: RequestInfo | URL
        • Optional init: RequestInit

        Returns Promise<Response>

      • (input: string | Request | URL, init?: RequestInit): Promise<Response>
      • Parameters

        • input: string | Request | URL
        • Optional init: RequestInit

        Returns Promise<Response>

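
    For illustration, a custom fetch can wrap the platform fetch, for example to inject a header before delegating. This is a sketch, not part of the library: the wrapper name and the injectable inner fetch are assumptions made so the example is testable without a network.

    ```typescript
    // Sketch (assumption): wrap a fetch-compatible function to add an
    // Authorization header, then delegate to the inner fetch. The result
    // has the same call signature as the `fetch` parameter above.
    type FetchLike = (
      input: RequestInfo | URL,
      init?: RequestInit,
    ) => Promise<Response>;

    function withAuthHeader(inner: FetchLike, token: string): FetchLike {
      return (input, init = {}) => {
        const headers = new Headers(init.headers);
        headers.set("Authorization", `Bearer ${token}`);
        return inner(input, { ...init, headers });
      };
    }
    ```

    Such a wrapper could then be supplied as the fetch parameter, e.g. withAuthHeader(globalThis.fetch, token).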
    
    headers?: Headers | Record<string, string>

    Optional HTTP Headers to include in the request.

    keepAlive?: string | number

    How long the model stays loaded in memory after the request. Defaults to "5m"

    model?: string

    The Ollama model to use for embeddings.

    "mxbai-embed-large"
    
    requestOptions?: OllamaCamelCaseOptions & Partial<Options>

    Advanced Ollama API request parameters in camelCase. See https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values for details of the available parameters.
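
    For illustration, a sketch of such a camelCase options object. The field names numCtx and numThread are assumptions, mirroring Ollama's snake_case num_ctx and num_thread parameters; verify them against OllamaCamelCaseOptions before relying on them.

    ```typescript
    // Sketch (assumption): camelCase request options mirroring Ollama's
    // snake_case modelfile parameters (numCtx <-> num_ctx, etc.).
    const requestOptions = {
      numCtx: 2048,  // context window size (num_ctx)
      numThread: 4,  // CPU threads to use (num_thread)
    };
    ```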

    truncate?: boolean

    Whether to truncate the input text so that it fits inside the model's context window.

    Default: false