A convenience method for creating a conversational retrieval agent.
create_conversational_retrieval_agent(
llm: BaseLanguageModel,
tools: list[BaseTool],
remember_intermediate_steps: bool = True,
memory_key: str = 'chat_history',
system_message: SystemMessage | None = None,
verbose: bool = False,
max_token_limit: int = 2000,
**kwargs: Any
) -> AgentExecutor

| Name | Type | Default | Description |
|---|---|---|---|
| llm* | BaseLanguageModel | — | The language model to use. |
| tools* | list[BaseTool] | — | A list of tools the agent has access to. |
| remember_intermediate_steps | bool | True | Whether the agent should remember intermediate steps. Intermediate steps refer to prior action/observation pairs from previous questions. The benefit of remembering these is that if they contain relevant information, the agent can use it to answer follow-up questions. The downside is that they take up more tokens. |
| memory_key | str | 'chat_history' | The name of the memory key in the prompt. |
| system_message | SystemMessage \| None | None | The system message to use. By default, a basic one will be used. |
| verbose | bool | False | Whether the final AgentExecutor should be verbose. |
| max_token_limit | int | 2000 | The maximum number of tokens to keep in memory. |
| **kwargs | Any | — | Additional keyword arguments to pass to the underlying AgentExecutor. |
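A minimal usage sketch, assuming the LangChain imports below, an OpenAI API key in the environment, and an existing vector store; the tool name, description, and question are hypothetical and not part of this reference:

```python
# Illustrative sketch — requires `langchain` and an OpenAI API key.
from langchain.agents.agent_toolkits import (
    create_conversational_retrieval_agent,
    create_retriever_tool,
)
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)

# `vectorstore` is assumed to exist already (e.g. a FAISS index of your documents).
tool = create_retriever_tool(
    vectorstore.as_retriever(),
    name="search_docs",                                    # hypothetical tool name
    description="Searches and returns project documents.",  # hypothetical description
)

agent_executor = create_conversational_retrieval_agent(
    llm,
    [tool],
    remember_intermediate_steps=True,
    max_token_limit=2000,
    verbose=True,
)

# Conversation state is kept under the 'chat_history' memory key, so a
# follow-up question can refer back to earlier answers.
result = agent_executor({"input": "What do the docs say about configuration?"})
print(result["output"])
```

Because `remember_intermediate_steps=True`, prior action/observation pairs are also kept in memory, at the cost of extra tokens counted against `max_token_limit`.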