
    Interface AnthropicInput

    Input to the ChatAnthropic class.

    interface AnthropicInput {
        anthropicApiKey?: string;
        anthropicApiUrl?: string;
        apiKey?: string;
        betas?: AnthropicBeta[];
        clientOptions?: ClientOptions;
        contextManagement?: BetaContextManagementConfig;
        createClient?: (options: ClientOptions) => any;
        inferenceGeo?: string;
        invocationKwargs?: Kwargs;
        maxTokens?: number;
        model?: AnthropicMessagesModelId;
        modelName?: AnthropicMessagesModelId;
        outputConfig?: OutputConfig;
        stopSequences?: string[];
        streaming?: boolean;
        streamUsage?: boolean;
        temperature?: number;
        thinking?: ThinkingConfigParam;
        topK?: number;
        topP?: null | number;
    }


    Properties

    anthropicApiKey?: string

    Anthropic API key

    anthropicApiUrl?: string

    Anthropic API URL

    apiKey?: string

    Anthropic API key
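
    For reference, a minimal construction might look like the following sketch. The model ID is illustrative; when apiKey is omitted, the ANTHROPIC_API_KEY environment variable is used instead:

        import { ChatAnthropic } from "@langchain/anthropic";

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-20240620",
          // Placeholder key; falls back to process.env.ANTHROPIC_API_KEY if omitted.
          apiKey: "sk-ant-...",
        });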

    betas?: AnthropicBeta[]

    Optional array of beta features to enable for the Anthropic API. Beta features are experimental capabilities that may change or be removed. See https://docs.claude.com/en/api/beta-headers for available beta features.
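
    As a sketch, beta features are passed as plain header identifiers. The value below is illustrative; consult the linked beta-headers page for currently supported strings:

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-20240620",
          // Illustrative beta identifier; see the beta-headers docs for valid values.
          betas: ["prompt-caching-2024-07-31"],
        });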

    clientOptions?: ClientOptions

    Overridable Anthropic ClientOptions
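
    These are the same options accepted by the underlying @anthropic-ai/sdk client. A sketch passing a custom timeout and default headers (the header name is a placeholder):

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-20240620",
          clientOptions: {
            // Forwarded to the underlying Anthropic SDK client.
            timeout: 60_000,
            defaultHeaders: { "X-My-Header": "value" },
          },
        });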

    contextManagement?: BetaContextManagementConfig
    createClient?: (options: ClientOptions) => any

    Optional method that returns an initialized underlying Anthropic client. Useful for accessing Anthropic models hosted on other cloud services such as Google Vertex.
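
    For example, a sketch that swaps in a Vertex-hosted client, assuming the @anthropic-ai/vertex-sdk package; the region and project ID are placeholders:

        import { AnthropicVertex } from "@anthropic-ai/vertex-sdk";

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-v2@20241022",
          // Return any client exposing the Anthropic Messages API surface.
          createClient: (_options) =>
            new AnthropicVertex({
              region: "us-east5",          // placeholder region
              projectId: "my-gcp-project", // placeholder project
            }),
        });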

    inferenceGeo?: string

    Specifies the geographic region for inference processing. US-only inference is available at 1.1x pricing for models released after February 1, 2026.

    invocationKwargs?: Kwargs

    Holds any additional parameters that are valid to pass to anthropic.messages that are not explicitly specified on this class.
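
    A sketch, assuming you want to forward the Anthropic metadata parameter, which this interface does not expose as a first-class field:

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-20240620",
          invocationKwargs: {
            // Passed through verbatim to anthropic.messages.create.
            metadata: { user_id: "user-1234" },
          },
        });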

    maxTokens?: number

    A maximum number of tokens to generate before stopping.

    model?: AnthropicMessagesModelId

    Model name to use

    modelName?: AnthropicMessagesModelId

    Model name to use (deprecated: use "model" instead)

    outputConfig?: OutputConfig

    Configuration options for the model's output, such as effort level and output format. The effort parameter controls how many tokens Claude uses when responding, trading off between response thoroughness and token efficiency.

    Effort levels: "low", "medium", "high" (default), "max" (Opus 4.6 only).

        const model = new ChatAnthropic({
          model: "claude-opus-4-6",
          thinking: { type: "adaptive" },
          outputConfig: { effort: "medium" },
        });

    stopSequences?: string[]

    A list of strings upon which to stop generating. You probably want ["\n\nHuman:"], as that's the cue for the next turn in the dialogue.
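
    For instance:

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-20240620",
          // Generation halts as soon as any of these strings is produced.
          stopSequences: ["\n\nHuman:"],
        });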

    streaming?: boolean

    Whether to stream the results or not

    streamUsage?: boolean

    Whether or not to include token usage data in streamed chunks.

    Default: true
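
    A sketch of consuming a stream; with streamUsage left at its default of true, usage metadata arrives on the streamed chunks:

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-20240620",
          streaming: true,
        });

        const stream = await model.stream("Tell me a joke.");
        for await (const chunk of stream) {
          // content may be a string or an array of content blocks;
          // usage metadata appears on chunks when streamUsage is enabled.
          console.log(chunk.content);
        }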
    
    temperature?: number

    Amount of randomness injected into the response. Ranges from 0 to 1. Use temperature closer to 0 for analytical / multiple choice, and temperature closer to 1 for creative and generative tasks.

    thinking?: ThinkingConfigParam

    Options for extended thinking.
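
    A sketch enabling a fixed thinking budget on a model that supports extended thinking; per the Anthropic API, budget_tokens must be smaller than maxTokens:

        const model = new ChatAnthropic({
          model: "claude-3-7-sonnet-20250219",
          maxTokens: 5000,
          thinking: {
            type: "enabled",
            // Upper bound on tokens spent on internal reasoning;
            // must be less than maxTokens.
            budget_tokens: 2000,
          },
        });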

    topK?: number

    Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses.

    topP?: null | number

    Does nucleus sampling, in which we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. Note that you should either alter temperature or top_p, but not both.
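
    As the note above suggests, pick one sampling knob. A sketch using nucleus sampling alone, with an illustrative cutoff:

        const model = new ChatAnthropic({
          model: "claude-3-5-sonnet-20240620",
          // Sample only from tokens within the top 90% of cumulative
          // probability mass; leave temperature unset.
          topP: 0.9,
        });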