# ChatModelIntegrationTests

> **Class** in `langchain_tests`

📖 [View in docs](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests)

Base class for chat model integration tests.

Test subclasses must implement the `chat_model_class` and
`chat_model_params` properties to specify what model to test and its
initialization parameters.

```python
from typing import Type

from langchain_tests.integration_tests import ChatModelIntegrationTests
from my_package.chat_models import MyChatModel

class TestMyChatModelIntegration(ChatModelIntegrationTests):
    @property
    def chat_model_class(self) -> Type[MyChatModel]:
        # Return the chat model class to test here
        return MyChatModel

    @property
    def chat_model_params(self) -> dict:
        # Return initialization parameters for the model.
        return {"model": "model-001", "temperature": 0}
```

!!! note
    API references for individual test methods include troubleshooting tips.

Test subclasses **must** implement the following two properties:

`chat_model_class`: The chat model class to test, e.g., `ChatParrotLink`.

```python
@property
def chat_model_class(self) -> Type[ChatParrotLink]:
    return ChatParrotLink
```

`chat_model_params`: Initialization parameters for the chat model.

```python
@property
def chat_model_params(self) -> dict:
    return {"model": "bird-brain-001", "temperature": 0}
```

In addition, test subclasses can control what features are tested (such as tool
calling or multi-modality) by selectively overriding the following properties.

Expand to see details:

???+ info "`has_tool_calling`"

    Boolean property indicating whether the chat model supports tool calling.

    By default, this is determined by whether the chat model's `bind_tools` method
    is overridden. It typically does not need to be overridden on the test class.

    ```python
    @property
    def has_tool_calling(self) -> bool:
        return True
    ```
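
    The default detection follows a common Python pattern: a method counts as
    overridden if the subclass attribute is a different object than the base
    class's. A stdlib-only sketch of that pattern, using hypothetical classes
    rather than the real LangChain base class:

    ```python
    class BaseModel:
        """Hypothetical stand-in for a chat model base class."""

        def bind_tools(self, tools):
            raise NotImplementedError

    class MyModel(BaseModel):
        """Overrides `bind_tools`, so tool calling is considered supported."""

        def bind_tools(self, tools):
            return self

    def has_tool_calling(cls) -> bool:
        # Overridden if the subclass attribute differs from the base class's.
        return cls.bind_tools is not BaseModel.bind_tools

    print(has_tool_calling(MyModel))    # True
    print(has_tool_calling(BaseModel))  # False
    ```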

??? info "`has_tool_choice`"

    Boolean property indicating whether the chat model supports forcing tool
    calling via a `tool_choice` parameter.

    By default, this is determined by whether the parameter is included in the
    signature for the corresponding `bind_tools` method.

    If `True`, the minimum requirement for this feature is that
    `tool_choice='any'` will force a tool call, and `tool_choice=<tool name>`
    will force a call to a specific tool.

    ```python
    @property
    def has_tool_choice(self) -> bool:
        return False
    ```
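
    Signature-based detection can be sketched with `inspect.signature`; the
    `bind_tools` function below is a hypothetical stand-in, not the real
    method:

    ```python
    import inspect

    def bind_tools(self, tools, *, tool_choice=None):
        """Hypothetical `bind_tools` that accepts `tool_choice`."""
        return self

    def has_tool_choice(fn) -> bool:
        # True if the callable's signature declares a `tool_choice` parameter.
        return "tool_choice" in inspect.signature(fn).parameters

    print(has_tool_choice(bind_tools))  # True
    ```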

??? info "`has_structured_output`"

    Boolean property indicating whether the chat model supports structured
    output.

    By default, this is determined by whether the chat model's
    `with_structured_output` method is overridden. If the base implementation of
    `with_structured_output` is intended to be used, override this property to
    return `True`.

    See docs for [Structured output](https://docs.langchain.com/oss/python/langchain/structured-output).

    ```python
    @property
    def has_structured_output(self) -> bool:
        return True
    ```

??? info "`structured_output_kwargs`"

    Dict property specifying additional kwargs to pass to
    `with_structured_output()` when running structured output tests.

    Override this to customize how your model generates structured output.

    The most common use case is specifying the `method` parameter:

    - `'function_calling'`: Uses tool/function calling to enforce the schema.
    - `'json_mode'`: Uses the model's JSON mode.
    - `'json_schema'`: Uses native JSON schema support (e.g., OpenAI's structured
        outputs).

    ```python
    @property
    def structured_output_kwargs(self) -> dict:
        return {"method": "json_schema"}
    ```

??? info "`supports_json_mode`"

    Boolean property indicating whether the chat model supports
    `method='json_mode'` in `with_structured_output`.

    Defaults to `False`.

    JSON mode constrains the model to output valid JSON without enforcing
    a specific schema (unlike `'function_calling'` or `'json_schema'` methods).

    When using JSON mode, you must prompt the model to output JSON in your
    message.

    !!! example

        ```python
        structured_llm = llm.with_structured_output(MySchema, method="json_mode")
        structured_llm.invoke("... Return the result as JSON.")
        ```

    See docs for [Structured output](https://docs.langchain.com/oss/python/langchain/structured-output).

    ```python
    @property
    def supports_json_mode(self) -> bool:
        return True
    ```

??? info "`supports_image_inputs`"

    Boolean property indicating whether the chat model supports image inputs.

    Defaults to `False`.

    If set to `True`, the chat model will be tested by inputting an
    `ImageContentBlock` with the shape:

    ```python
    {
        "type": "image",
        "base64": "<base64 image data>",
        "mime_type": "image/jpeg",  # or appropriate MIME type
    }
    ```

    The model will also be tested with OpenAI-style content blocks:

    ```python
    {
        "type": "image_url",
        "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
    }
    ```

    See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).

    ```python
    @property
    def supports_image_inputs(self) -> bool:
        return True
    ```
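
    Both shapes can be built from the same raw bytes with only the standard
    library; the bytes below are placeholder data, not a real JPEG:

    ```python
    import base64

    raw = b"\xff\xd8\xff\xe0 placeholder jpeg bytes"
    image_data = base64.b64encode(raw).decode("ascii")

    # LangChain ImageContentBlock shape
    image_block = {
        "type": "image",
        "base64": image_data,
        "mime_type": "image/jpeg",
    }

    # OpenAI-style content block shape
    openai_block = {
        "type": "image_url",
        "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
    }
    ```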

??? info "`supports_image_urls`"

    Boolean property indicating whether the chat model supports image inputs from
    URLs.

    Defaults to `False`.

    If set to `True`, the chat model will be tested using content blocks of the
    form:

    ```python
    {
        "type": "image",
        "url": "https://...",
    }
    ```

    See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).

    ```python
    @property
    def supports_image_urls(self) -> bool:
        return True
    ```

??? info "`supports_image_tool_message`"

    Boolean property indicating whether the chat model supports a `ToolMessage`
    that includes image content, e.g. in the OpenAI Chat Completions format.

    Defaults to `False`.

    ```python
    ToolMessage(
        content=[
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
            },
        ],
        tool_call_id="1",
        name="random_image",
    )
    ```

    ...as well as the LangChain `ImageContentBlock` format:

    ```python
    ToolMessage(
        content=[
            {
                "type": "image",
                "base64": image_data,
                "mime_type": "image/jpeg",
            },
        ],
        tool_call_id="1",
        name="random_image",
    )
    ```

    If set to `True`, the chat model will be tested with message sequences that
    include `ToolMessage` objects of this form.

    ```python
    @property
    def supports_image_tool_message(self) -> bool:
        return True
    ```

??? info "`supports_pdf_inputs`"

    Boolean property indicating whether the chat model supports PDF inputs.

    Defaults to `False`.

    If set to `True`, the chat model will be tested by inputting a
    `FileContentBlock` with the shape:

    ```python
    {
        "type": "file",
        "base64": "<base64 file data>",
        "mime_type": "application/pdf",
    }
    ```

    See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).

    ```python
    @property
    def supports_pdf_inputs(self) -> bool:
        return True
    ```
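
    As a stdlib-only sketch, this block can be built from raw bytes
    (placeholder data here, not a real PDF):

    ```python
    import base64

    raw = b"%PDF-1.4 placeholder pdf bytes"
    file_block = {
        "type": "file",
        "base64": base64.b64encode(raw).decode("ascii"),
        "mime_type": "application/pdf",
    }
    ```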

??? info "`supports_pdf_tool_message`"

    Boolean property indicating whether the chat model supports a `ToolMessage`
    that includes PDF content using the LangChain `FileContentBlock` format.

    Defaults to `False`.

    ```python
    ToolMessage(
        content=[
            {
                "type": "file",
                "base64": pdf_data,
                "mime_type": "application/pdf",
            },
        ],
        tool_call_id="1",
        name="random_pdf",
    )
    ```

    If set to `True`, the chat model will be tested with message sequences that
    include `ToolMessage` objects of this form.

    ```python
    @property
    def supports_pdf_tool_message(self) -> bool:
        return True
    ```

??? info "`supports_audio_inputs`"

    Boolean property indicating whether the chat model supports audio inputs.

    Defaults to `False`.

    If set to `True`, the chat model will be tested by inputting an
    `AudioContentBlock` with the shape:

    ```python
    {
        "type": "audio",
        "base64": "<base64 audio data>",
        "mime_type": "audio/wav",  # or appropriate MIME type
    }
    ```

    See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).

    ```python
    @property
    def supports_audio_inputs(self) -> bool:
        return True
    ```

    !!! warning
        This test downloads audio data from wikimedia.org. You may need to set the
        `LANGCHAIN_TESTS_USER_AGENT` environment variable to identify these tests,
        e.g.,

        ```bash
        export LANGCHAIN_TESTS_USER_AGENT="CoolBot/0.0 (https://example.org/coolbot/; coolbot@example.org) generic-library/0.0"
        ```

        Refer to the [Wikimedia Foundation User-Agent Policy](https://foundation.wikimedia.org/wiki/Policy:Wikimedia_Foundation_User-Agent_Policy).

??? info "`supports_video_inputs`"

    Boolean property indicating whether the chat model supports video inputs.

    Defaults to `False`.

    No tests are currently implemented for this feature.

??? info "`returns_usage_metadata`"

    Boolean property indicating whether the chat model returns usage metadata
    on invoke and streaming responses.

    Defaults to `True`.

    `usage_metadata` is an optional dict attribute on `AIMessage` objects that
    tracks input and output tokens.

    [See more](https://reference.langchain.com/python/langchain_core/language_models/#langchain_core.messages.ai.UsageMetadata).

    ```python
    @property
    def returns_usage_metadata(self) -> bool:
        return False
    ```

    Models supporting `usage_metadata` should also return the name of the underlying
    model in the `response_metadata` of the `AIMessage`.
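
    As a sketch, a minimal `usage_metadata` dict has this shape (field names
    follow `UsageMetadata`; the counts are made up):

    ```python
    usage_metadata = {
        "input_tokens": 350,
        "output_tokens": 240,
        "total_tokens": 590,
    }

    # Tests generally expect the total to equal input plus output.
    assert usage_metadata["total_tokens"] == (
        usage_metadata["input_tokens"] + usage_metadata["output_tokens"]
    )
    ```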

??? info "`supports_anthropic_inputs`"

    Boolean property indicating whether the chat model supports Anthropic-style
    inputs.

    Defaults to `False`.

    These inputs might feature "tool use" and "tool result" content blocks, e.g.,

    ```python
    [
        {"type": "text", "text": "Hmm let me think about that"},
        {
            "type": "tool_use",
            "input": {"fav_color": "green"},
            "id": "foo",
            "name": "color_picker",
        },
    ]
    ```

    If set to `True`, the chat model will be tested using content blocks of this
    form.

    ```python
    @property
    def supports_anthropic_inputs(self) -> bool:
        return True
    ```

??? info "`supported_usage_metadata_details`"

    Property controlling what usage metadata details are emitted in both invoke
    and stream.

    Defaults to `{"invoke": [], "stream": []}`.

    `usage_metadata` is an optional dict attribute on `AIMessage` objects that
    tracks input and output tokens.

    [See more](https://reference.langchain.com/python/langchain_core/language_models/#langchain_core.messages.ai.UsageMetadata).

    It includes optional keys `input_token_details` and `output_token_details`
    that can track usage details associated with special types of tokens, such as
    cached, audio, or reasoning.

    Only needs to be overridden if these details are supplied.
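
    An illustrative override is sketched below; the detail keys mirror the
    `invoke_with_*` helper names listed under Methods and are assumptions, not
    an exhaustive list:

    ```python
    class TestMyChatModelIntegration:
        """Sketch of a test subclass; only this property is shown."""

        @property
        def supported_usage_metadata_details(self) -> dict:
            # Declare which token details the model reports, per mode.
            return {
                "invoke": ["cache_read_input", "reasoning_output"],
                "stream": [],
            }

    details = TestMyChatModelIntegration().supported_usage_metadata_details
    print(details["invoke"])  # ['cache_read_input', 'reasoning_output']
    ```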

??? info "`enable_vcr_tests`"

    Property controlling whether to enable select tests that rely on
    [VCR](https://vcrpy.readthedocs.io/en/latest/) caching of HTTP calls, such
    as benchmarking tests.

    Defaults to `False`.

    To enable these tests, follow these steps:

    1. Override the `enable_vcr_tests` property to return `True`:

        ```python
        @property
        def enable_vcr_tests(self) -> bool:
            return True
        ```

    2. Configure VCR to exclude sensitive headers and other information from
        cassettes.

        !!! warning
            VCR will by default record authentication headers and other sensitive
            information in cassettes. Read below for how to configure what
            information is recorded in cassettes.

        To add configuration to VCR, add a `conftest.py` file to the `tests/`
        directory and implement the `vcr_config` fixture there.

        `langchain-tests` excludes the headers `'authorization'`,
        `'x-api-key'`, and `'api-key'` from VCR cassettes. To pick up this
        configuration, you will need to add `conftest.py` as shown below. You can
        also exclude additional headers, override the default exclusions, or apply
        other customizations to the VCR configuration. See example below:

        ```python title="tests/conftest.py"
        import pytest
        from langchain_tests.conftest import base_vcr_config

        _EXTRA_HEADERS = [
            # Specify additional headers to redact
            ("user-agent", "PLACEHOLDER"),
        ]

        def remove_response_headers(response: dict) -> dict:
            # If desired, remove or modify headers in the response.
            response["headers"] = {}
            return response

        @pytest.fixture(scope="session")
        def vcr_config() -> dict:
            """Extend the default configuration from langchain_tests."""
            config = base_vcr_config()
            config.setdefault("filter_headers", []).extend(_EXTRA_HEADERS)
            config["before_record_response"] = remove_response_headers

            return config
        ```

        ??? note "Compressing cassettes"

            `langchain-tests` includes a custom VCR serializer that compresses
            cassettes using gzip. To use it, register the `yaml.gz` serializer
            to your VCR fixture and enable this serializer in the config. See
            example below:

            ```python title="tests/conftest.py"
            import pytest
            from langchain_tests.conftest import (
                CustomPersister,
                CustomSerializer,
            )
            from langchain_tests.conftest import base_vcr_config
            from vcr import VCR

            _EXTRA_HEADERS = [
                # Specify additional headers to redact
                ("user-agent", "PLACEHOLDER"),
            ]

            def remove_response_headers(response: dict) -> dict:
                # If desired, remove or modify headers in the response.
                response["headers"] = {}
                return response

            @pytest.fixture(scope="session")
            def vcr_config() -> dict:
                """Extend the default configuration from langchain_tests."""
                config = base_vcr_config()
                config.setdefault("filter_headers", []).extend(_EXTRA_HEADERS)
                config["before_record_response"] = remove_response_headers
                # New: enable serializer and set file extension
                config["serializer"] = "yaml.gz"
                config["path_transformer"] = VCR.ensure_suffix(".yaml.gz")

                return config

            def pytest_recording_configure(config: dict, vcr: VCR) -> None:
                vcr.register_persister(CustomPersister())
                vcr.register_serializer("yaml.gz", CustomSerializer())
            ```

            You can inspect the contents of the compressed cassettes (e.g., to
            ensure no sensitive information is recorded) using

            ```bash
            gunzip -k /path/to/tests/cassettes/TestClass_test.yaml.gz
            ```

            ...or by using the serializer:

            ```python
            from langchain_tests.conftest import (
                CustomPersister,
                CustomSerializer,
            )

            cassette_path = "/path/to/tests/cassettes/TestClass_test.yaml.gz"
            requests, responses = CustomPersister().load_cassette(
                cassette_path, CustomSerializer()
            )
            ```

    3. Run tests to generate VCR cassettes.

        ```bash title="Example"
        uv run python -m pytest tests/integration_tests/test_chat_models.py::TestMyModel::test_stream_time
        ```

        This will generate a VCR cassette for the test in
        `tests/integration_tests/cassettes/`.

        !!! warning
            You should inspect the generated cassette to ensure that it does not
            contain sensitive information. If it does, you can modify the
            `vcr_config` fixture to exclude headers or modify the response
            before it is recorded.

        You can then commit the cassette to your repository. Subsequent test runs
        will use the cassette instead of making HTTP calls.

## Signature

```python
ChatModelIntegrationTests()
```

## Extends

- `ChatModelTests`

## Properties

- `standard_chat_model_params`

## Methods

- [`test_invoke()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_invoke)
- [`test_ainvoke()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_ainvoke)
- [`test_stream()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_stream)
- [`test_astream()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_astream)
- [`test_invoke_with_model_override()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_invoke_with_model_override)
- [`test_ainvoke_with_model_override()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_ainvoke_with_model_override)
- [`test_stream_with_model_override()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_stream_with_model_override)
- [`test_astream_with_model_override()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_astream_with_model_override)
- [`test_batch()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_batch)
- [`test_abatch()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_abatch)
- [`test_conversation()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_conversation)
- [`test_double_messages_conversation()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_double_messages_conversation)
- [`test_usage_metadata()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_usage_metadata)
- [`test_usage_metadata_streaming()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_usage_metadata_streaming)
- [`test_stop_sequence()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_stop_sequence)
- [`test_tool_calling()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_tool_calling)
- [`test_tool_calling_async()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_tool_calling_async)
- [`test_bind_runnables_as_tools()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_bind_runnables_as_tools)
- [`test_tool_message_histories_string_content()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_tool_message_histories_string_content)
- [`test_tool_message_histories_list_content()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_tool_message_histories_list_content)
- [`test_tool_choice()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_tool_choice)
- [`test_tool_calling_with_no_arguments()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_tool_calling_with_no_arguments)
- [`test_tool_message_error_status()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_tool_message_error_status)
- [`test_structured_few_shot_examples()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_structured_few_shot_examples)
- [`test_structured_output()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_structured_output)
- [`test_structured_output_async()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_structured_output_async)
- [`test_structured_output_pydantic_2_v1()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_structured_output_pydantic_2_v1)
- [`test_structured_output_optional_param()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_structured_output_optional_param)
- [`test_json_mode()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_json_mode)
- [`test_pdf_inputs()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_pdf_inputs)
- [`test_audio_inputs()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_audio_inputs)
- [`test_image_inputs()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_image_inputs)
- [`test_image_tool_message()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_image_tool_message)
- [`test_pdf_tool_message()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_pdf_tool_message)
- [`test_anthropic_inputs()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_anthropic_inputs)
- [`test_message_with_name()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_message_with_name)
- [`test_agent_loop()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_agent_loop)
- [`test_stream_time()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_stream_time)
- [`invoke_with_audio_input()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/invoke_with_audio_input)
- [`invoke_with_audio_output()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/invoke_with_audio_output)
- [`invoke_with_reasoning_output()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/invoke_with_reasoning_output)
- [`invoke_with_cache_read_input()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/invoke_with_cache_read_input)
- [`invoke_with_cache_creation_input()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/invoke_with_cache_creation_input)
- [`test_unicode_tool_call_integration()`](https://reference.langchain.com/python/langchain-tests/integration_tests/chat_models/ChatModelIntegrationTests/test_unicode_tool_call_integration)

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/f0c5a28fa05adcda89aebcb449d897245ab21fa4/libs/standard-tests/langchain_tests/integration_tests/chat_models.py#L173)