Test that the model does not fail when invoked with the stop parameter.

The stop parameter is a standard parameter for stopping generation at a
certain token. This test should pass for all integrations.
If this test fails, check that the function signature for _generate
(as well as _stream and the async variants) accepts the stop parameter:
    def _generate(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> ChatResult:
    def test_stop_sequence(
        self,
        model: BaseChatModel,
    ) -> None: