class Bedrock
Internal method that handles batching and configuration for a runnable.
Create a unique cache key for a specific call to a specific language model.
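A minimal sketch of how such a cache key can be derived: the prompt is combined with the model's identifying parameters, so the same prompt sent to differently configured models never shares a cache entry. The helper name `getCacheKey` and its shape are illustrative assumptions, not the library's actual API.

```typescript
// Hypothetical helper: derive a stable cache key from a prompt plus the
// model's identifying parameters.
function getCacheKey(prompt: string, params: Record<string, unknown>): string {
  // Sort keys so serialization is stable regardless of insertion order.
  const sortedParams = Object.fromEntries(
    Object.entries(params).sort(([a], [b]) => a.localeCompare(b))
  );
  return `${prompt}::${JSON.stringify(sortedParams)}`;
}
```

Because the parameters are sorted before serialization, two calls with the same settings produce the same key even if the options were specified in a different order.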
Get the identifying parameters of the LLM.
Default streaming implementation.
Assigns new fields to the dict output of this runnable. Returns a new runnable.
Convert a runnable to a tool. Returns a new instance of RunnableToolLike.
Default implementation of batch, which calls invoke N times.
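The default batch behavior described above can be sketched as follows: with no batch-specific optimization available, `batch` simply maps `invoke` over the inputs and awaits all results. The `invoke` parameter here stands in for the runnable's own invoke method; the real implementation also threads per-call configuration and a concurrency limit through.

```typescript
// Illustrative sketch of the default batch: one invoke call per input,
// results returned in input order.
async function defaultBatch<In, Out>(
  invoke: (input: In) => Promise<Out>,
  inputs: In[]
): Promise<Out[]> {
  return Promise.all(inputs.map((input) => invoke(input)));
}
```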
Generates a chat response based on the input messages.
Generates a prompt based on the input prompt values.
Get the number of tokens in the content.
Get the parameters used to invoke the model.
Invokes the tool with the provided input and configuration.
Pick keys from the dict output of this runnable. Returns a new runnable.
Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into the next.
Return a json-like object representing this chain.
Stream output in chunks.
Generate a stream of events emitted by the internal steps of the runnable.
Stream all output from a runnable, as reported to the callback system.
Default implementation of transform, which buffers input and then calls stream.
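A sketch of the default transform described above: the input stream is buffered to completion, then handed to `stream`, whose chunks are re-emitted. The names and signatures here are illustrative assumptions, not the library's exact API.

```typescript
// Illustrative sketch: buffer the whole input stream, then delegate to the
// streaming implementation on the concatenated input.
async function* defaultTransform(
  inputStream: AsyncIterable<string>,
  stream: (input: string) => AsyncIterable<string>
): AsyncIterable<string> {
  let buffered = "";
  // Drain the incoming stream into a single buffered input.
  for await (const chunk of inputStream) {
    buffered += chunk;
  }
  // Re-emit the chunks produced by the streaming implementation.
  yield* stream(buffered);
}
```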
Bind config to a Runnable, returning a new Runnable.
Create a new runnable from the current one that will try invoking fallback runnables if the initial invocation fails.
Bind lifecycle listeners to a Runnable, returning a new Runnable.
Add retry logic to an existing runnable.
Load a chain from a json-like object describing it.
The name of the serializable. Override to provide an alias or to preserve the serialized module name in minified environments.
The async caller should be used by subclasses to make any async calls, so that those calls benefit from the built-in concurrency and retry logic.
AWS Credentials.
If no credentials are provided, the default credentials from @aws-sdk/credential-provider-node will be used.
Override the default endpoint hostname.
Model name to use. Available options are qwen-turbo, qwen-plus, qwen-max, or other compatible models.
Region for the Alibaba Tongyi API endpoint.
Available regions:
Whether to stream the results or not. Defaults to false.
Amount of randomness injected into the response. Ranges from 0 (exclusive) to 1 (inclusive). Use a temperature closer to 0 for analytical or multiple-choice tasks, and closer to 1 for creative and generative tasks. Defaults to 0.95.
Whether to print out response text.
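The Tongyi fields documented above can be gathered into an options object along these lines. The interface is a sketch based only on the fields listed here; the exact option names in the actual client may differ.

```typescript
// Illustrative options shape for the Alibaba Tongyi fields described above.
interface TongyiOptionsSketch {
  model: string;         // e.g. "qwen-turbo", "qwen-plus", "qwen-max"
  streaming: boolean;    // whether to stream results; defaults to false
  temperature: number;   // randomness in (0, 1]; defaults to 0.95
  region?: string;       // region for the Tongyi API endpoint
  endpointHost?: string; // override the default endpoint hostname
}

const tongyiOptions: TongyiOptionsSketch = {
  model: "qwen-turbo",
  streaming: false,
  temperature: 0.95,
};
```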
A type of Large Language Model (LLM) that interacts with the Bedrock
service. It extends the base LLM class and implements the
BaseBedrockInput interface. The class is designed to authenticate and
interact with the Bedrock service, which is a part of Amazon Web
Services (AWS). It uses AWS credentials for authentication and can be
configured with various parameters such as the model to use, the AWS
region, and the maximum number of tokens to generate.
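The configuration surface the description above implies can be sketched as a plain interface. The shape mirrors the documented inputs (model, region, credentials, token limit); it is illustrative only, not the exact BaseBedrockInput definition, and the model identifier below is just an example value.

```typescript
// Illustrative configuration shape for the Bedrock LLM described above.
interface BedrockConfigSketch {
  model: string;      // Bedrock model identifier
  region: string;     // AWS region hosting the Bedrock endpoint
  maxTokens?: number; // maximum number of tokens to generate
  credentials?: {
    // If omitted, the default credentials from
    // @aws-sdk/credential-provider-node are used.
    accessKeyId: string;
    secretAccessKey: string;
    sessionToken?: string;
  };
}

const bedrockConfig: BedrockConfigSketch = {
  model: "anthropic.claude-v2",
  region: "us-east-1",
  maxTokens: 256,
};
```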