langchain.js
    interface WatsonxInputEmbeddings {
        authenticator?: string;
        maxConcurrency?: number;
        maxRetries?: number;
        model: string;
        promptIndex?: number;
        returnOptions?: EmbeddingReturnOptions;
        serviceUrl: string;
        streaming?: boolean;
        truncateInputTokens?: number;
        version: string;
        watsonxCallbacks?: RequestCallbacks<any>;
    }
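    The interface above is the configuration object passed when constructing a watsonx embeddings instance. Below is a minimal sketch, assuming the WatsonxEmbeddings class exported from "@langchain/community/embeddings/ibm", a projectId field supplied by a parent interface in the hierarchy, and API credentials picked up from environment variables by the underlying IBM watsonx SDK; the model id is only an example.

    import { WatsonxEmbeddings } from "@langchain/community/embeddings/ibm";

    // Required fields from WatsonxInputEmbeddings: model, serviceUrl, version.
    const embeddings = new WatsonxEmbeddings({
        version: "2024-05-31",                           // watsonx.ai API version date
        serviceUrl: "https://us-south.ml.cloud.ibm.com", // regional service endpoint
        projectId: "<PROJECT_ID>",                       // assumed field from the parent interface
        model: "ibm/slate-125m-english-rtrvr",           // example embedding model id
        maxRetries: 2,                                   // retry a failed request up to twice
    });

    // embedQuery embeds a single piece of text and returns its vector.
    const vector = await embeddings.embedQuery("What is LangChain?");
    console.log(vector.length);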

    Properties

    authenticator?: string
    maxConcurrency?: number
    maxRetries?: number
    model: string

    The ID of the model to use for this request. Refer to the documentation for the list of available models.

    promptIndex?: number
    returnOptions?: EmbeddingReturnOptions

    The return options for text embeddings.

    serviceUrl: string
    streaming?: boolean
    truncateInputTokens?: number

    The maximum number of input tokens accepted. Use this to avoid requests failing because the input is longer than the configured limit. When text is truncated, tokens are removed from the end of the input (the right), so the start of the input is preserved. If this value exceeds the model's maximum sequence length (refer to the model documentation for that value), the call will still fail whenever the total number of tokens exceeds the maximum sequence length. See the sketch after this property list for a usage example.

    version: string
    watsonxCallbacks?: RequestCallbacks<any>
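
    For long inputs, truncateInputTokens keeps each request under the configured limit by cutting tokens from the right of each text instead of rejecting the request. The sketch below reuses the assumptions from the earlier example and additionally caps parallel requests with maxConcurrency; the limit of 512 tokens is arbitrary.

    import { WatsonxEmbeddings } from "@langchain/community/embeddings/ibm";

    const embeddings = new WatsonxEmbeddings({
        version: "2024-05-31",
        serviceUrl: "https://us-south.ml.cloud.ibm.com",
        projectId: "<PROJECT_ID>",             // assumed field, as in the earlier sketch
        model: "ibm/slate-125m-english-rtrvr", // example embedding model id
        truncateInputTokens: 512,              // drop tokens beyond 512 from the end of each input
        maxConcurrency: 2,                     // at most two embedding requests in flight at once
    });

    // embedDocuments embeds a batch of texts; entries longer than 512 tokens are
    // truncated rather than causing the whole request to fail.
    const docs = [
        "A short document.",
        "A very long document. ".repeat(500),
    ];
    const vectors = await embeddings.embedDocuments(docs);
    console.log(vectors.length, vectors[0].length);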