langchain.js
    interface BedrockChatFields {
        applicationInferenceProfile?: string;
        awsAccessKeyId?: string;
        awsSecretAccessKey?: string;
        awsSessionToken?: string;
        credentials?: CredentialType;
        endpointHost?: string;
        endpointUrl?: string;
        fetchFn?: {
            (input: URL | RequestInfo, init?: RequestInit): Promise<Response>;
            (input: string | URL | Request, init?: RequestInit): Promise<Response>;
        };
        guardrailConfig?: {
            streamProcessingMode: "SYNCHRONOUS"
            | "ASYNCHRONOUS";
            tagSuffix: string;
        };
        guardrailIdentifier?: string;
        guardrailVersion?: string;
        maxTokens?: number;
        model?: string;
        modelKwargs?: Record<string, unknown>;
        region?: string;
        stopSequences?: string[];
        streaming?: boolean;
        temperature?: number;
        trace?: "ENABLED" | "DISABLED";
    }


    Properties

    applicationInferenceProfile?: string

    Optional URL-encoded override for the model parameter of the fetch URL. Necessary for invoking an Application Inference Profile. For example, "arn%3Aaws%3Abedrock%3Aus-east-1%3A1234567890%3Aapplication-inference-profile%2Fabcdefghi" will override this.model in the final /invoke URL call. You must still provide model as the normal modelId to benefit from all the metadata.
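
    As an illustrative sketch only (not the library's actual request-building code), the override described above amounts to substituting the URL-encoded profile ARN for the model ID in the final /invoke path:

```typescript
// Sketch of the documented behavior: when applicationInferenceProfile is
// set, it takes the model's place in the /invoke URL; otherwise the plain
// modelId is used. The real construction happens inside the library.
function invokePath(model: string, applicationInferenceProfile?: string): string {
  return `/model/${applicationInferenceProfile ?? model}/invoke`;
}
```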

    awsAccessKeyId?: string
    awsSecretAccessKey?: string
    awsSessionToken?: string
    credentials?: CredentialType

    AWS Credentials. If no credentials are provided, the default credentials from @aws-sdk/credential-provider-node will be used.

    endpointHost?: string

    Override the default endpoint hostname.

    endpointUrl?: string

    Override the default endpoint URL. Deprecated: use endpointHost instead.

    fetchFn?: {
        (input: URL | RequestInfo, init?: RequestInit): Promise<Response>;
        (input: string | URL | Request, init?: RequestInit): Promise<Response>;
    }

    A custom fetch function for low-level access to the AWS API. Defaults to fetch().

    Type Declaration

      • (input: URL | RequestInfo, init?: RequestInit): Promise<Response>
      • Parameters

        • input: URL | RequestInfo
        • Optional init: RequestInit

        Returns Promise<Response>

      • (input: string | URL | Request, init?: RequestInit): Promise<Response>
      • Parameters

        • input: string | URL | Request
        • Optional init: RequestInit

        Returns Promise<Response>
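
    A minimal example of a function matching this signature, assuming a runtime with a global fetch (Node 18+ or a browser); it logs each request URL before delegating:

```typescript
// A custom fetchFn matching the documented signature: logs the request
// URL, then delegates to the global fetch unchanged.
const loggingFetch = (input: string | URL | Request, init?: RequestInit): Promise<Response> => {
  const url =
    typeof input === "string" ? input
    : input instanceof Request ? input.url
    : input.toString();
  console.log("Bedrock request:", url);
  return fetch(input, init);
};
```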

    guardrailConfig?: {
        streamProcessingMode: "SYNCHRONOUS" | "ASYNCHRONOUS";
        tagSuffix: string;
    }

    Required when a guardrail is in use.

    guardrailIdentifier?: string

    Identifier for the guardrail configuration.

    guardrailVersion?: string

    Version for the guardrail configuration.
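
    Taken together, the guardrail-related fields might be supplied like the sketch below. The identifier, version, and tag suffix values are placeholders, not real resources:

```typescript
// Placeholder values throughout; shapes follow the interface above.
const guardrailFields = {
  guardrailIdentifier: "my-guardrail-id", // placeholder identifier
  guardrailVersion: "1",                  // placeholder version
  guardrailConfig: {
    streamProcessingMode: "SYNCHRONOUS" as const,
    tagSuffix: "example-suffix",          // placeholder suffix
  },
  trace: "ENABLED" as const,              // enable guardrail tracing
};
```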

    maxTokens?: number

    Maximum number of tokens to generate in the response.

    model?: string

    Model to use. For example, "amazon.titan-tg1-large". This is equivalent to the modelId property in the list-foundation-models API.

    modelKwargs?: Record<string, unknown>

    Additional kwargs to pass to the model.

    region?: string

    The AWS region, e.g. us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if not provided here.
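
    The documented fallback order can be sketched as a simple resolution chain. This is an assumption about precedence (explicit value, then environment variable, then ~/.aws/config); the SDK's real resolver is richer:

```typescript
// Sketch of the documented precedence only; not the SDK's implementation.
function resolveRegion(
  explicit?: string,     // region passed in BedrockChatFields
  envRegion?: string,    // process.env.AWS_DEFAULT_REGION
  configRegion?: string, // region parsed from ~/.aws/config
): string | undefined {
  return explicit ?? envRegion ?? configRegion;
}
```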

    stopSequences?: string[]

    Optional additional stop sequences to pass to the model. Currently only supported for Anthropic and AI21.

    Deprecated: use .withConfig({ "stop": [...] }) instead.

    streaming?: boolean

    Whether or not to stream responses.

    temperature?: number

    Sampling temperature.

    trace?: "ENABLED" | "DISABLED"

    Trace settings for the Bedrock Guardrails.
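
    Putting the fields together, a configuration object for this interface might look like the sketch below. The local interface declaration mirrors a subset of the documented fields so the example is self-contained; in real code you would import BedrockChatFields from the library instead of redeclaring it.

```typescript
// Self-contained mirror of a subset of the documented fields; in real
// code, import BedrockChatFields from the library instead.
interface BedrockChatFieldsSketch {
  model?: string;
  region?: string;
  maxTokens?: number;
  temperature?: number;
  streaming?: boolean;
  stopSequences?: string[];
  modelKwargs?: Record<string, unknown>;
}

const fields: BedrockChatFieldsSketch = {
  model: "amazon.titan-tg1-large", // modelId from list-foundation-models
  region: "us-west-2",
  maxTokens: 1024,                 // arbitrary example value
  temperature: 0.7,                // arbitrary example value
  streaming: false,
};
```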