langchain.js

    Interface ChatMistralAIInput

    Input to the ChatMistralAI chat model class.

    interface ChatMistralAIInput {
        apiKey?: string;
        beforeRequestHooks?: BeforeRequestHook[];
        endpoint?: string;
        frequencyPenalty?: number;
        httpClient?: HTTPClient;
        maxTokens?: number;
        model?: string;
        modelName?: string;
        numCompletions?: number;
        presencePenalty?: number;
        randomSeed?: number;
        requestErrorHooks?: RequestErrorHook[];
        responseHooks?: ResponseHook[];
        safeMode?: boolean;
        safePrompt?: boolean;
        seed?: number;
        serverURL?: string;
        streaming?: boolean;
        streamUsage?: boolean;
        temperature?: number;
        topP?: number;
    }
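
    A minimal sketch of constructing the chat model with this input, assuming the @langchain/mistralai package is installed and MISTRAL_API_KEY is set (the option values shown are illustrative):

    import { ChatMistralAI } from "@langchain/mistralai";

    // apiKey falls back to process.env.MISTRAL_API_KEY when omitted.
    const model = new ChatMistralAI({
        model: "mistral-small-latest",
        temperature: 0.7,
        maxTokens: 1024,
    });

    const response = await model.invoke("Write a haiku about the sea.");
    console.log(response.content);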


    Implemented by: ChatMistralAI


    Properties

    apiKey?: string

    The API key to use.

    Default: process.env.MISTRAL_API_KEY
    
    beforeRequestHooks?: BeforeRequestHook[]

    A list of custom hooks that must follow the signature (req: Request) => Awaitable<Request | void>. They are automatically added when a ChatMistralAI instance is created.
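
    For example, a hook matching that signature could log or rewrite the outgoing request (logRequest below is a hypothetical name, not part of the API):

    import { ChatMistralAI } from "@langchain/mistralai";

    // Hypothetical hook following (req: Request) => Awaitable<Request | void>.
    // Returning nothing keeps the request as-is; returning a Request replaces it.
    const logRequest = (req: Request): void => {
        console.log(`Mistral request: ${req.method} ${req.url}`);
    };

    const model = new ChatMistralAI({
        model: "mistral-small-latest",
        beforeRequestHooks: [logRequest],
    });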

    endpoint?: string

    Override the default server URL used by the Mistral SDK.

    Deprecated: use serverURL instead.

    frequencyPenalty?: number

    Penalizes the repetition of words based on their frequency in the generated text. A higher frequency penalty discourages the model from repeating words that have already appeared frequently in the output, promoting diversity and reducing repetition.

    httpClient?: HTTPClient

    Custom HTTP client to manage API requests. Allows users to add custom fetch implementations, hooks, as well as error and response processing.
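
    A sketch of wiring in a custom client. The HTTPClient import path and fetcher option below follow recent versions of the @mistralai/mistralai SDK and may differ in yours:

    import { ChatMistralAI } from "@langchain/mistralai";
    import { HTTPClient } from "@mistralai/mistralai/lib/http";

    // Assumption: HTTPClient accepts a custom fetcher. Here it simply
    // delegates to global fetch, but could add retries or instrumentation.
    const httpClient = new HTTPClient({
        fetcher: (request) => fetch(request),
    });

    const model = new ChatMistralAI({
        model: "mistral-small-latest",
        httpClient,
    });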

    maxTokens?: number

    The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.

    model?: string

    The name of the model to use.

    {"mistral-small-latest"}
    
    modelName?: string

    The name of the model to use. Alias for model.

    Deprecated: use model instead.

    Default: "mistral-small-latest"
    
    numCompletions?: number

    Number of completions to return for each request; input tokens are only billed once.

    presencePenalty?: number

    Determines how much the model penalizes the repetition of words or phrases. A higher presence penalty encourages the model to use a wider variety of words and phrases, making the output more diverse and creative.

    randomSeed?: number

    The seed to use for random sampling. If set, different calls will generate deterministic results. Alias for seed.

    requestErrorHooks?: RequestErrorHook[]

    A list of custom hooks that must follow the signature (err: unknown, req: Request) => Awaitable<void>. They are automatically added when a ChatMistralAI instance is created.

    responseHooks?: ResponseHook[]

    A list of custom hooks that must follow the signature (res: Response, req: Request) => Awaitable<void>. They are automatically added when a ChatMistralAI instance is created.
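
    Both hook types can be set together; onError and onResponse below are hypothetical handlers following the signatures above:

    import { ChatMistralAI } from "@langchain/mistralai";

    const onError = (err: unknown, req: Request): void => {
        console.error(`Request to ${req.url} failed:`, err);
    };

    const onResponse = (res: Response, req: Request): void => {
        console.log(`${req.url} -> ${res.status}`);
    };

    const model = new ChatMistralAI({
        model: "mistral-small-latest",
        requestErrorHooks: [onError],
        responseHooks: [onResponse],
    });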

    safeMode?: boolean

    Whether to inject a safety prompt before all conversations.

    Deprecated: use safePrompt instead.

    Default: false

    safePrompt?: boolean

    Whether to inject a safety prompt before all conversations.

    Default: false
    
    seed?: number

    The seed to use for random sampling. If set, different calls will generate deterministic results.
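
    For instance, fixing the seed should make repeated identical calls reproducible (a sketch; exact determinism still depends on the backing model):

    import { ChatMistralAI } from "@langchain/mistralai";

    const model = new ChatMistralAI({ model: "mistral-small-latest", seed: 42 });

    // With the same seed and the same input, both calls should return the
    // same completion.
    const a = await model.invoke("Pick a random fruit.");
    const b = await model.invoke("Pick a random fruit.");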

    serverURL?: string

    Override the default server URL used by the Mistral SDK.

    streaming?: boolean

    Whether or not to stream the response.

    Default: false
    
    streamUsage?: boolean

    Whether or not to include token usage in the stream.

    Default: true
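
    A streaming sketch. With streamUsage left at its default of true, token usage arrives on the final chunks; usage_metadata is how LangChain surfaces it, though field availability can vary by version:

    import { ChatMistralAI } from "@langchain/mistralai";

    const model = new ChatMistralAI({
        model: "mistral-small-latest",
        streaming: true,
    });

    const stream = await model.stream("Summarize nucleus sampling in one line.");
    for await (const chunk of stream) {
        process.stdout.write(String(chunk.content));
        // Populated on the final chunks when streamUsage is true.
        if (chunk.usage_metadata) console.log("\nusage:", chunk.usage_metadata);
    }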
    
    temperature?: number

    What sampling temperature to use, between 0.0 and 2.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

    Default: 0.7
    
    topP?: number

    Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Should be between 0 and 1.

    Default: 1
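
    Taken together, the sampling knobs above might be combined like this (illustrative values, not recommendations):

    import { ChatMistralAI } from "@langchain/mistralai";

    // Lower temperature plus tighter topP gives focused, less repetitive output.
    const focused = new ChatMistralAI({
        model: "mistral-small-latest",
        temperature: 0.2,      // more deterministic sampling
        topP: 0.9,             // keep only the top 90% probability mass
        maxTokens: 256,        // cap the completion length
        frequencyPenalty: 0.5, // discourage frequent repeats
        presencePenalty: 0.5,  // encourage new words and phrases
    });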