langchain.js

    Interface AnthropicInput

    Input to the AnthropicChat class.

    interface AnthropicInput {
        anthropicApiKey?: string;
        anthropicApiUrl?: string;
        apiKey?: string;
        clientOptions?: ClientOptions;
        createClient?: (options: ClientOptions) => any;
        invocationKwargs?: Kwargs;
        maxTokens?: number;
        maxTokensToSample?: number;
        model?: AnthropicMessagesModelId;
        modelName?: AnthropicMessagesModelId;
        stopSequences?: string[];
        streaming?: boolean;
        streamUsage?: boolean;
        temperature?: null | number;
        thinking?: ThinkingConfigParam;
        topK?: number;
        topP?: null | number;
    }
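
    A minimal usage sketch: these fields are passed to the constructor of the ChatAnthropic chat model exported from @langchain/anthropic. The model id and prompt below are placeholders.

        import { ChatAnthropic } from "@langchain/anthropic";

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-latest", // example model id; substitute your own
          apiKey: process.env.ANTHROPIC_API_KEY, // also read from the environment if omitted
          temperature: 0,
          maxTokens: 1024,
        });

        const response = await model.invoke("What is the capital of France?");
        console.log(response.content);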


    Properties

    anthropicApiKey?: string

    Anthropic API key

    anthropicApiUrl?: string

    Anthropic API URL

    apiKey?: string

    Anthropic API key

    clientOptions?: ClientOptions

    Overridable Anthropic ClientOptions
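
    A sketch of overriding the client configuration. The field names come from the Anthropic SDK's ClientOptions; the beta header value is only illustrative.

        import { ChatAnthropic } from "@langchain/anthropic";

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-latest",
          clientOptions: {
            // Forwarded to the underlying Anthropic SDK client.
            maxRetries: 3,
            defaultHeaders: { "anthropic-beta": "prompt-caching-2024-07-31" },
          },
        });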

    createClient?: (options: ClientOptions) => any

    Optional method that returns an initialized underlying Anthropic client. Useful for accessing Anthropic models hosted on other cloud services such as Google Vertex.
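
    A sketch of swapping in a Vertex-hosted client, assuming the AnthropicVertex client from the @anthropic-ai/vertex-sdk package. The model id, region, and project id are placeholders.

        import { ChatAnthropic } from "@langchain/anthropic";
        import { AnthropicVertex } from "@anthropic-ai/vertex-sdk";

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-v2@20241022", // placeholder Vertex model id
          createClient: (_options) =>
            // Return a Vertex-hosted client instead of the default Anthropic client.
            new AnthropicVertex({ region: "us-east5", projectId: "my-gcp-project" }),
        });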

    invocationKwargs?: Kwargs

    Holds any additional parameters that are valid to pass to anthropic.messages and are not explicitly specified on this class.
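
    For example, a Messages API parameter that is not surfaced as a field here can be forwarded through invocationKwargs (a sketch; metadata.user_id is used only as an illustration):

        import { ChatAnthropic } from "@langchain/anthropic";

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-latest",
          invocationKwargs: {
            // Forwarded verbatim to the underlying anthropic.messages call.
            metadata: { user_id: "user-1234" },
          },
        });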

    maxTokens?: number

    A maximum number of tokens to generate before stopping.

    maxTokensToSample?: number

    A maximum number of tokens to generate before stopping.

    Use "maxTokens" instead.

    Model name to use

    Use "model" instead

    stopSequences?: string[]

    A list of strings upon which to stop generating. You probably want ["\n\nHuman:"], as that's the cue for the next turn in the dialog agent.
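
    A brief sketch of setting a stop sequence (the model id is a placeholder):

        import { ChatAnthropic } from "@langchain/anthropic";

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-latest",
          // Generation halts as soon as one of these strings would be produced.
          stopSequences: ["\n\nHuman:"],
        });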

    streaming?: boolean

    Whether to stream the results or not

    streamUsage?: boolean

    Whether or not to include token usage data in streamed chunks.

    Defaults to true.
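
    A sketch of streaming with usage data attached to the chunks (model id and prompt are placeholders):

        import { ChatAnthropic } from "@langchain/anthropic";

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-latest",
          streaming: true,
          streamUsage: true, // attach token usage data to streamed chunks
        });

        const stream = await model.stream("Write a haiku about the sea.");
        for await (const chunk of stream) {
          // usage_metadata may only be populated on some chunks
          console.log(chunk.content, chunk.usage_metadata);
        }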
    
    temperature?: null | number

    Amount of randomness injected into the response. Ranges from 0 to 1. Use temp closer to 0 for analytical / multiple choice, and temp closer to 1 for creative and generative tasks. To not set this field, pass null. If undefined is passed, the default (1) will be used.

    thinking?: ThinkingConfigParam

    Options for extended thinking.
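
    A sketch enabling extended thinking, assuming the { type: "enabled", budget_tokens } shape of ThinkingConfigParam from the Anthropic SDK and a model that supports it:

        import { ChatAnthropic } from "@langchain/anthropic";

        const model = new ChatAnthropic({
          model: "claude-3-7-sonnet-latest", // placeholder; use a model with extended thinking support
          maxTokens: 4096, // must be larger than the thinking budget
          thinking: { type: "enabled", budget_tokens: 2048 },
        });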

    topK?: number

    Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Defaults to -1, which disables it.

    topP?: null | number

    Does nucleus sampling, in which we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. Defaults to -1, which disables it. Note that you should either alter temperature or top_p, but not both.

    To not set this field, pass null. If undefined is passed, the default (-1) will be used.

    For Opus 4.1, this defaults to null.
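
    For instance, to rely on nucleus sampling alone, set topP and pass null for temperature so the default temperature is not applied alongside it (a sketch; values are illustrative):

        import { ChatAnthropic } from "@langchain/anthropic";

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-latest",
          topP: 0.9,
          // null leaves temperature unset; undefined would fall back to the default of 1.
          temperature: null,
        });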