langchain.js

    Interface ChatBedrockConverseInput

    Inputs for ChatBedrockConverse.

    interface ChatBedrockConverseInput {
        additionalModelRequestFields?: DocumentType;
        client?: BedrockRuntimeClient;
        clientOptions?: BedrockRuntimeClientConfig;
        credentials?: CredentialType;
        durationSeconds?: number;
        endpointHost?: string;
        guardrailConfig?: GuardrailConfiguration;
        maxTokens?: number;
        model?: string;
        performanceConfig?: PerformanceConfiguration;
        policy?: string;
        policyArns?: {}[];
        providerId?: string;
        region?: string;
        roleArn?: string;
        streaming?: boolean;
        streamUsage?: boolean;
        supportsToolChoiceValues?: ("auto" | "any" | "tool")[];
        temperature?: number;
        topP?: number;
    }


    Properties

    additionalModelRequestFields?: DocumentType

    Additional inference parameters that the model supports, beyond the base set of inference parameters that the Converse API supports in the inferenceConfig field. For more information, see the inference parameters documentation for your model in the Amazon Bedrock User Guide.
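
    For example, a minimal sketch of forwarding a provider-specific parameter (Anthropic's top_k, used here as an illustrative assumption) that the base inferenceConfig does not cover:

    import { ChatBedrockConverse } from "@langchain/aws";

    // Sketch: pass a provider-specific parameter through
    // additionalModelRequestFields (top_k is Anthropic-specific).
    const model = new ChatBedrockConverse({
      model: "anthropic.claude-3-haiku-20240307-v1:0",
      additionalModelRequestFields: { top_k: 250 },
    });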

    client?: BedrockRuntimeClient

    The BedrockRuntimeClient to use. Allows overriding the default client with a custom one, for example to supply your own requestHandler (such as a NodeHttpHandler).
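
    A minimal sketch of supplying a pre-configured client; the timeout value is illustrative:

    import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";
    import { NodeHttpHandler } from "@smithy/node-http-handler";
    import { ChatBedrockConverse } from "@langchain/aws";

    // Sketch: a client with a custom requestHandler.
    const client = new BedrockRuntimeClient({
      region: "us-west-2",
      requestHandler: new NodeHttpHandler({ connectionTimeout: 5000 }),
    });

    const model = new ChatBedrockConverse({
      model: "anthropic.claude-3-haiku-20240307-v1:0",
      client,
    });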

    clientOptions?: BedrockRuntimeClientConfig

    Configuration options for the BedrockRuntimeClient, allowing customization of settings such as the requestHandler. Ignored if 'client' is provided.
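
    A minimal sketch, assuming standard AWS SDK client options such as maxAttempts:

    import { ChatBedrockConverse } from "@langchain/aws";

    // Sketch: customize the internally created client instead of
    // passing one in; ignored if `client` is also provided.
    const model = new ChatBedrockConverse({
      model: "anthropic.claude-3-haiku-20240307-v1:0",
      clientOptions: {
        maxAttempts: 5, // standard AWS SDK retry setting
      },
    });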

    credentials?: CredentialType

    AWS Credentials. If no credentials are provided, the default credentials from @aws-sdk/credential-provider-node will be used.
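
    A minimal sketch, assuming CredentialType accepts a static AwsCredentialIdentity object:

    import { ChatBedrockConverse } from "@langchain/aws";

    // Sketch: static credentials; omit to fall back to the default
    // Node credential provider chain.
    const model = new ChatBedrockConverse({
      model: "anthropic.claude-3-haiku-20240307-v1:0",
      credentials: {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
      },
    });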

    durationSeconds?: number
    endpointHost?: string

    Override the default endpoint hostname.

    guardrailConfig?: GuardrailConfiguration

    Configuration information for a guardrail that you want to use in the request.
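
    A minimal sketch; the guardrail identifier and version are placeholders:

    import { ChatBedrockConverse } from "@langchain/aws";

    // Sketch: attach an existing Bedrock guardrail to every request.
    const model = new ChatBedrockConverse({
      model: "anthropic.claude-3-haiku-20240307-v1:0",
      guardrailConfig: {
        guardrailIdentifier: "your-guardrail-id", // placeholder
        guardrailVersion: "1",                    // placeholder
      },
    });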

    maxTokens?: number

    The maximum number of tokens to generate in the response.

    model?: string

    Model to use, e.g. "anthropic.claude-3-haiku-20240307-v1:0". This is equivalent to the modelId property in the list-foundation-models API. See the Amazon Bedrock documentation for the full list of supported models.

    Default: "anthropic.claude-3-haiku-20240307-v1:0"
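
    A minimal sketch of instantiating the model; region and credentials resolve from the environment when omitted:

    import { ChatBedrockConverse } from "@langchain/aws";

    const model = new ChatBedrockConverse({
      model: "anthropic.claude-3-haiku-20240307-v1:0",
      region: "us-west-2",
    });

    const response = await model.invoke("Hello!");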
    
    performanceConfig?: PerformanceConfiguration
    policy?: string
    policyArns?: {}[]
    providerId?: string
    region?: string

    The AWS region, e.g. us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if not provided here.

    roleArn?: string
    streaming?: boolean

    Whether or not to stream responses.

    streamUsage?: boolean

    Whether or not to include usage data, like token counts, in the streamed response chunks. Passing this as a call option takes precedence over the class-level setting.

    Default: true
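
    A minimal sketch of streaming with usage data included:

    import { ChatBedrockConverse } from "@langchain/aws";

    const model = new ChatBedrockConverse({
      model: "anthropic.claude-3-haiku-20240307-v1:0",
      streaming: true,
      streamUsage: true,
    });

    // Token counts surface on the usage_metadata of streamed chunks.
    const stream = await model.stream("Tell me a joke.");
    for await (const chunk of stream) {
      console.log(chunk.content, chunk.usage_metadata);
    }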
    
    supportsToolChoiceValues?: ("auto" | "any" | "tool")[]

    Which types of tool_choice values the model supports.

    Inferred if not specified: as ['auto', 'any', 'tool'] if a 'claude-3' model is used, as ['auto', 'any'] if a 'mistral-large' model is used, and as an empty array otherwise.
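
    A sketch of forcing a specific tool, which requires 'tool' in supportsToolChoiceValues (inferred for claude-3 models); the weather tool and the tool_choice option passed to bindTools are illustrative assumptions:

    import { ChatBedrockConverse } from "@langchain/aws";
    import { tool } from "@langchain/core/tools";
    import { z } from "zod";

    // Hypothetical tool for illustration.
    const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
      name: "get_weather",
      description: "Get the weather for a city",
      schema: z.object({ city: z.string() }),
    });

    // Forcing this specific tool maps to the "tool" choice type.
    const model = new ChatBedrockConverse({
      model: "anthropic.claude-3-haiku-20240307-v1:0",
    }).bindTools([getWeather], { tool_choice: "get_weather" });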

    temperature?: number

    The sampling temperature to use; higher values produce more random output, lower values more deterministic output.

    topP?: number

    The percentage of most-likely candidates that the model considers for the next token. For example, if you choose a value of 0.8 for topP, the model selects from the top 80% of the probability distribution of tokens that could be next in the sequence. The default value is the default value for the model that you are using. For more information, see the inference parameters documentation for foundation models in the Amazon Bedrock User Guide.
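
    A minimal sketch combining the sampling parameters; valid ranges depend on the model:

    import { ChatBedrockConverse } from "@langchain/aws";

    // Sketch: lower temperature plus a tighter nucleus (topP) for
    // more focused output.
    const model = new ChatBedrockConverse({
      model: "anthropic.claude-3-haiku-20240307-v1:0",
      temperature: 0.2,
      topP: 0.9,
      maxTokens: 512,
    });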