# ChatBedrockConverse

> **Class** in `langchain_aws`

📖 [View in docs](https://reference.langchain.com/python/langchain-aws/chat_models/bedrock_converse/ChatBedrockConverse)

Bedrock chat model integration built on the Bedrock Converse API.

This implementation will eventually replace the existing ChatBedrock implementation
once the Bedrock Converse API reaches feature parity with the older Bedrock API.
Specifically, the Converse API does not yet support custom Bedrock models.

## Signature

```python
ChatBedrockConverse()
```

## Description

**Setup:**

To use Amazon Bedrock make sure you've gone through all the steps described
here: https://docs.aws.amazon.com/bedrock/latest/userguide/setting-up.html

Once that's completed, install the LangChain integration:

```bash
pip install -U langchain-aws
```

**Key init args — completion params:**

- `model` (`str`): Name of the Bedrock Converse model to use.
- `temperature` (`float`): Sampling temperature.
- `max_tokens` (`Optional[int]`): Max number of tokens to generate.

**Key init args — client params:**

- `region_name` (`Optional[str]`): AWS region to use, e.g. `'us-west-2'`.
- `base_url` (`Optional[str]`): Bedrock endpoint to use. Needed if you don't want
  to default to the us-east-1 endpoint.
- `credentials_profile_name` (`Optional[str]`): The name of the profile in the
  `~/.aws/credentials` or `~/.aws/config` files.

See full list of supported init args and their descriptions in the params section.

**Instantiate:**

```python
from langchain_aws import ChatBedrockConverse

model = ChatBedrockConverse(
    model="anthropic.claude-3-sonnet-20240229-v1:0",
    temperature=0,
    max_tokens=None,
    # other params...
)
```

**Invoke:**

```python
messages = [
    ("system", "You are a helpful translator. Translate the user sentence to French."),
    ("human", "I love programming."),
]
model.invoke(messages)
```

```python
AIMessage(content=[{'type': 'text', 'text': "J'aime la programmation."}], response_metadata={'ResponseMetadata': {'RequestId': '9ef1e313-a4c1-4f79-b631-171f658d3c0e', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Sat, 15 Jun 2024 01:19:24 GMT', 'content-type': 'application/json', 'content-length': '205', 'connection': 'keep-alive', 'x-amzn-requestid': '9ef1e313-a4c1-4f79-b631-171f658d3c0e'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': 609}}, id='run-754e152b-2b41-4784-9538-d40d71a5c3bc-0', usage_metadata={'input_tokens': 25, 'output_tokens': 11, 'total_tokens': 36})
```
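Because Converse responses return `content` as a list of typed blocks rather than a plain string, it can be handy to flatten the text blocks. A small helper sketch (illustrative only, not part of `langchain_aws`), using the block shape shown above:

```python
def content_text(content):
    """Join the text blocks of a Converse-style content list into one string."""
    if isinstance(content, str):  # some code paths may return a plain string
        return content
    return "".join(
        block.get("text", "") for block in content if block.get("type") == "text"
    )

content_text([{"type": "text", "text": "J'aime la programmation."}])
# "J'aime la programmation."
```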

**Stream:**

```python
for chunk in model.stream(messages):
    print(chunk)
```

```python
AIMessageChunk(content=[], id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[{'type': 'text', 'text': 'J', 'index': 0}], id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[{'text': "'", 'index': 0}], id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[{'text': 'a', 'index': 0}], id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[{'text': 'ime', 'index': 0}], id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[{'text': ' la', 'index': 0}], id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[{'text': ' programm', 'index': 0}], id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[{'text': 'ation', 'index': 0}], id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[{'text': '.', 'index': 0}], id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[{'index': 0}], id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[], response_metadata={'stopReason': 'end_turn'}, id='run-da3c2606-4792-440a-ac66-72e0d1f6d117')
AIMessageChunk(content=[], response_metadata={'metrics': {'latencyMs': 581}}, id='run-da3c2606-4792-440a-ac66-72e0d1f6d117', usage_metadata={'input_tokens': 25, 'output_tokens': 11, 'total_tokens': 36})
```

```python
stream = model.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
full
```

```python
AIMessageChunk(content=[{'type': 'text', 'text': "J'aime la programmation.", 'index': 0}], response_metadata={'stopReason': 'end_turn', 'metrics': {'latencyMs': 554}}, id='run-56a5a5e0-de86-412b-9835-624652dc3539', usage_metadata={'input_tokens': 25, 'output_tokens': 11, 'total_tokens': 36})
```

**Tool calling:**

```python
from pydantic import BaseModel, Field

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

model_with_tools = model.bind_tools([GetWeather, GetPopulation])
ai_msg = model_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
ai_msg.tool_calls
```

```python
[{'name': 'GetWeather',
  'args': {'location': 'Los Angeles, CA'},
  'id': 'tooluse_Mspi2igUTQygp-xbX6XGVw'},
 {'name': 'GetWeather',
  'args': {'location': 'New York, NY'},
  'id': 'tooluse_tOPHiDhvR2m0xF5_5tyqWg'},
 {'name': 'GetPopulation',
  'args': {'location': 'Los Angeles, CA'},
  'id': 'tooluse__gcY_klbSC-GqB-bF_pxNg'},
 {'name': 'GetPopulation',
  'args': {'location': 'New York, NY'},
  'id': 'tooluse_-1HSoGX0TQCSaIg7cdFy8Q'}]
```

See the `ChatBedrockConverse.bind_tools()` method for more.
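To complete the loop, each tool call must be executed locally and its result sent back to the model. A minimal dispatch sketch — the local functions here are hypothetical stand-ins for real tool implementations, not anything provided by `langchain_aws`:

```python
# Hypothetical local implementations standing in for real tools.
def get_weather(location: str) -> str:
    return f"weather report for {location}"

def get_population(location: str) -> str:
    return f"population figure for {location}"

TOOL_REGISTRY = {"GetWeather": get_weather, "GetPopulation": get_population}

def run_tool_calls(tool_calls):
    """Execute each entry of `ai_msg.tool_calls` and pair the output with
    its call id, ready to wrap in ToolMessage objects."""
    return [
        {"tool_call_id": call["id"], "output": TOOL_REGISTRY[call["name"]](**call["args"])}
        for call in tool_calls
    ]

run_tool_calls(
    [{"name": "GetWeather", "args": {"location": "Los Angeles, CA"}, "id": "tooluse_1"}]
)
# [{'tool_call_id': 'tooluse_1', 'output': 'weather report for Los Angeles, CA'}]
```

Each result would then typically be passed back to the model as a `ToolMessage(content=..., tool_call_id=...)` in the follow-up `invoke` call.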

**Structured output:**

```python
from typing import Optional

from pydantic import BaseModel, Field

class Joke(BaseModel):
    '''Joke to tell user.'''

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")

structured_model = model.with_structured_output(Joke)
structured_model.invoke("Tell me a joke about cats")
```

```python
Joke(setup='What do you call a cat that gets all dressed up?', punchline='A purrfessional!', rating=7)
```

Native JSON schema output (requires supported models, e.g. Claude 4.5+):
```python
structured_model = model.with_structured_output(Joke, method="json_schema")
structured_model.invoke("Tell me a joke about cats")
```

```python
Joke(setup='Why was the cat sitting on the computer?', punchline='To keep an eye on the mouse!', rating=6)
```

See `ChatBedrockConverse.with_structured_output()` for more.

**Extended thinking:**

Some models, such as Claude 3.7 Sonnet, support an extended thinking
feature that outputs the step-by-step reasoning process that led to an
answer.

To use it, pass a `thinking` configuration through the
`additional_model_request_fields` parameter when initializing
`ChatBedrockConverse`, as shown below. A token budget (`budget_tokens`)
is required to enable the feature:

```python
from langchain_aws import ChatBedrockConverse

thinking_params = {
    "thinking": {
        "type": "enabled",
        "budget_tokens": 2000
    }
}

model = ChatBedrockConverse(
    model="us.anthropic.claude-sonnet-4-5-20250929-v1:0",
    max_tokens=5000,
    region_name="us-west-2",
    additional_model_request_fields=thinking_params,
)

response = model.invoke("What is the cube root of 50.653?")
print(response.content)
```

```python
[
    {'type': 'reasoning_content', 'reasoning_content': {'type': 'text', 'text': 'I need to calculate the cube root of... ', 'signature': '...'}},
    {'type': 'text', 'text': 'The cube root of 50.653 is...'}
]
```
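Given the block shapes above, one way to separate the reasoning trace from the final answer is a small helper (an illustration, not a library API):

```python
def split_reasoning(content):
    """Split a Converse content list into (reasoning text, answer text)."""
    reasoning, answer = [], []
    for block in content:
        if block.get("type") == "reasoning_content":
            reasoning.append(block["reasoning_content"].get("text", ""))
        elif block.get("type") == "text":
            answer.append(block.get("text", ""))
    return "".join(reasoning), "".join(answer)

sample = [
    {"type": "reasoning_content",
     "reasoning_content": {"type": "text", "text": "I need to calculate...", "signature": "..."}},
    {"type": "text", "text": "The cube root of 50.653 is..."},
]
split_reasoning(sample)
# ("I need to calculate...", "The cube root of 50.653 is...")
```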

**Image input:**

```python
import base64
import httpx
from langchain_core.messages import HumanMessage

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
message = HumanMessage(
    content=[
        {"type": "text", "text": "describe the weather in this image"},
        {
            "type": "image",
            "source": {"type": "base64", "media_type": "image/jpeg", "data": image_data},
        },
    ],
)
ai_msg = model.invoke([message])
ai_msg.content
```

```python
[{'type': 'text',
  'text': 'The image depicts a sunny day with a partly cloudy sky. The sky is a brilliant blue color with scattered white clouds drifting across. The lighting and cloud patterns suggest pleasant, mild weather conditions. The scene shows an open grassy field or meadow, indicating warm temperatures conducive for vegetation growth. Overall, the weather portrayed in this scenic outdoor image appears to be sunny with some clouds, likely representing a nice, comfortable day.'}]
```
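For local files rather than URLs, the same base64 block shape applies. A sketch of a helper that builds the image content block (the function name is illustrative, not part of the library):

```python
import base64

def image_block(image_bytes: bytes, media_type: str = "image/jpeg") -> dict:
    """Build a base64 image content block in the shape shown above."""
    data = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "type": "image",
        "source": {"type": "base64", "media_type": media_type, "data": data},
    }

# e.g. with open("photo.jpg", "rb") as f: image_block(f.read())
image_block(b"abc")
# {'type': 'image', 'source': {'type': 'base64', 'media_type': 'image/jpeg', 'data': 'YWJj'}}
```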

**Token usage:**

```python
ai_msg = model.invoke(messages)
ai_msg.usage_metadata
```

```python
{'input_tokens': 25, 'output_tokens': 11, 'total_tokens': 36}
```
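To track consumption across several calls, the per-call `usage_metadata` dicts can simply be summed — a plain-Python sketch, not a library feature:

```python
from collections import Counter

def total_usage(usage_dicts):
    """Sum token-usage dicts from multiple AIMessage.usage_metadata values."""
    totals = Counter()
    for usage in usage_dicts:
        totals.update(usage)
    return dict(totals)

total_usage([
    {"input_tokens": 25, "output_tokens": 11, "total_tokens": 36},
    {"input_tokens": 40, "output_tokens": 20, "total_tokens": 60},
])
# {'input_tokens': 65, 'output_tokens': 31, 'total_tokens': 96}
```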

**Response metadata:**

```python
ai_msg = model.invoke(messages)
ai_msg.response_metadata
```

```python
{'ResponseMetadata': {'RequestId': '776a2a26-5946-45ae-859e-82dc5f12017c',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'date': 'Mon, 17 Jun 2024 01:37:05 GMT',
   'content-type': 'application/json',
   'content-length': '206',
   'connection': 'keep-alive',
   'x-amzn-requestid': '776a2a26-5946-45ae-859e-82dc5f12017c'},
  'RetryAttempts': 0},
 'stopReason': 'end_turn',
 'metrics': {'latencyMs': 1290}}
```

## Extends

- `BaseChatModel`

## Properties

- `client`
- `bedrock_client`
- `model_id`
- `base_model_id`
- `system`
- `max_tokens`
- `stop_sequences`
- `temperature`
- `top_p`
- `region_name`
- `credentials_profile_name`
- `aws_access_key_id`
- `aws_secret_access_key`
- `aws_session_token`
- `bedrock_api_key`
- `provider`
- `endpoint_url`
- `default_headers`
- `config`
- `timeout`
- `max_retries`
- `guardrail_config`
- `additional_model_request_fields`
- `additional_model_response_field_paths`
- `supports_tool_choice_values`
- `performance_config`
- `service_tier`
- `output_config`
- `request_metadata`
- `guard_last_turn_only`
- `raw_blocks`
- `model_config`
- `lc_secrets`

## Methods

- [`create_cache_point()`](https://reference.langchain.com/python/langchain-aws/chat_models/bedrock_converse/ChatBedrockConverse/create_cache_point)
- [`build_extra()`](https://reference.langchain.com/python/langchain-aws/chat_models/bedrock_converse/ChatBedrockConverse/build_extra)
- [`set_disable_streaming()`](https://reference.langchain.com/python/langchain-aws/chat_models/bedrock_converse/ChatBedrockConverse/set_disable_streaming)
- [`validate_environment()`](https://reference.langchain.com/python/langchain-aws/chat_models/bedrock_converse/ChatBedrockConverse/validate_environment)
- [`bind_tools()`](https://reference.langchain.com/python/langchain-aws/chat_models/bedrock_converse/ChatBedrockConverse/bind_tools)
- [`with_structured_output()`](https://reference.langchain.com/python/langchain-aws/chat_models/bedrock_converse/ChatBedrockConverse/with_structured_output)
- [`is_lc_serializable()`](https://reference.langchain.com/python/langchain-aws/chat_models/bedrock_converse/ChatBedrockConverse/is_lc_serializable)
- [`get_lc_namespace()`](https://reference.langchain.com/python/langchain-aws/chat_models/bedrock_converse/ChatBedrockConverse/get_lc_namespace)
- [`get_num_tokens_from_messages()`](https://reference.langchain.com/python/langchain-aws/chat_models/bedrock_converse/ChatBedrockConverse/get_num_tokens_from_messages)

---

[View source on GitHub](https://github.com/langchain-ai/langchain-aws/blob/10d18256d46953e5fc8dca313a2c41eee29c2a80/libs/aws/langchain_aws/chat_models/bedrock_converse.py#L139)