```python
create_openai_functions_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: ChatPromptTemplate
)
```

Create an agent that uses OpenAI function calling.

| Name | Type | Description |
|---|---|---|
| `llm`* | `BaseLanguageModel` | LLM to use as the agent. Should work with OpenAI function calling, so it should either be an OpenAI model that supports it or a wrapper of a different model that adds equivalent support. |
| `tools`* | `Sequence[BaseTool]` | Tools this agent has access to. |
| `prompt`* | `ChatPromptTemplate` | The prompt to use. See the Prompt section below for more. |
Example:

Creating an agent with no memory:

```python
from langchain_openai import ChatOpenAI
from langchain_classic.agents import (
    AgentExecutor,
    create_openai_functions_agent,
)
from langchain_classic import hub

prompt = hub.pull("hwchase17/openai-functions-agent")
model = ChatOpenAI()
tools = ...
agent = create_openai_functions_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

agent_executor.invoke({"input": "hi"})
```
Using with chat history:

```python
from langchain_core.messages import AIMessage, HumanMessage

agent_executor.invoke(
    {
        "input": "what's my name?",
        "chat_history": [
            HumanMessage(content="hi! my name is bob"),
            AIMessage(content="Hello Bob! How can I assist you today?"),
        ],
    }
)
```
Prompt:

The agent prompt must have an `agent_scratchpad` key that is a `MessagesPlaceholder`. Intermediate agent actions and tool output messages will be passed in here.

Here's an example:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
```
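To illustrate what flows through `agent_scratchpad`, here is a minimal pure-Python sketch of the kind of loop an `AgentExecutor` runs around a function-calling agent. The model and tool below (`fake_model`, `get_word_length`) are hypothetical stand-ins, not LangChain APIs; in a real agent the LLM decides which tool to call and with what arguments.

```python
def fake_model(user_input, scratchpad):
    """Hypothetical stand-in for the LLM: returns a tool call or a final answer."""
    if not scratchpad:
        # First pass: the model decides to call a tool.
        return {"tool": "get_word_length", "tool_input": user_input}
    # Later pass: the tool's output is in the scratchpad, so answer directly.
    _, observation = scratchpad[-1]
    return {"output": f"'{user_input}' has {observation} letters"}


def get_word_length(word):
    """A hypothetical tool."""
    return len(word)


TOOLS = {"get_word_length": get_word_length}


def run_agent(user_input, max_steps=5):
    # scratchpad plays the role of the agent_scratchpad messages: each entry
    # pairs an intermediate agent action with the tool output it produced.
    scratchpad = []
    for _ in range(max_steps):
        step = fake_model(user_input, scratchpad)
        if "output" in step:  # the model produced a final answer
            return step["output"]
        tool = TOOLS[step["tool"]]  # the model asked for a tool call
        observation = tool(step["tool_input"])
        scratchpad.append((step, observation))  # fed back on the next pass
    raise RuntimeError("agent did not finish")


print(run_agent("hello"))  # 'hello' has 5 letters
```

This is why the prompt must reserve a `MessagesPlaceholder` for `agent_scratchpad`: the executor re-invokes the model after every tool call, and the accumulated action/observation pairs are injected at that slot.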