Create an agent that uses OpenAI tools.
```python
create_openai_tools_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: ChatPromptTemplate,
    strict: bool | None = None,
) -> Runnable
```

Example:
```python
from langchain_classic import hub
from langchain_classic.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI

prompt = hub.pull("hwchase17/openai-tools-agent")
model = ChatOpenAI()
tools = ...

agent = create_openai_tools_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": "hi"})
```
Using with chat history:

```python
from langchain_core.messages import AIMessage, HumanMessage

agent_executor.invoke(
    {
        "input": "what's my name?",
        "chat_history": [
            HumanMessage(content="hi! my name is bob"),
            AIMessage(content="Hello Bob! How can I assist you today?"),
        ],
    }
)
```
Prompt:
The agent prompt must have an `agent_scratchpad` key that is a
`MessagesPlaceholder`. Intermediate agent actions and tool output
messages will be passed in here.
Here's an example:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
```

Parameters:

| Name | Type | Description |
|---|---|---|
| `llm`* | `BaseLanguageModel` | LLM to use as the agent. |
| `tools`* | `Sequence[BaseTool]` | Tools this agent has access to. |
| `prompt`* | `ChatPromptTemplate` | The prompt to use. See the Prompt section above for more on the expected input variables. |
| `strict` | `bool \| None` | Whether strict mode should be used for OpenAI tools. Default: `None`. |
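To make the `agent_scratchpad` mechanics concrete, here is a stdlib-only sketch of how intermediate (tool call, observation) pairs could be rendered into OpenAI-style messages. The helper name `format_scratchpad` and the dict shapes are illustrative assumptions, not the library's actual API; the real conversion is handled internally by the agent.

```python
# Illustrative sketch only: shows the shape of the messages that fill the
# agent_scratchpad placeholder, not the library's actual implementation.
def format_scratchpad(intermediate_steps):
    """Render (tool_call, observation) pairs as OpenAI-style messages."""
    messages = []
    for call, observation in intermediate_steps:
        # The assistant message that requested the tool call.
        messages.append(
            {
                "role": "assistant",
                "tool_calls": [
                    {
                        "id": call["id"],
                        "type": "function",
                        "function": {
                            "name": call["name"],
                            "arguments": call["arguments"],
                        },
                    }
                ],
            }
        )
        # The tool's output, linked back to the request by tool_call_id.
        messages.append(
            {
                "role": "tool",
                "tool_call_id": call["id"],
                "content": str(observation),
            }
        )
    return messages


steps = [
    (
        {"id": "call_1", "name": "search", "arguments": '{"query": "weather"}'},
        "Sunny, 22C",
    )
]
print(format_scratchpad(steps))
```

Each completed tool call thus contributes two messages to the scratchpad, which is why the placeholder must accept a variable-length list of messages rather than a single string.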