Raise deprecation warning if callback_manager is used.
Set the chain verbosity.
Asynchronously execute the chain.
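A minimal sketch of async execution through the generic Runnable interface, assuming the `llm_math` chain built in the Example at the bottom of this page and its default "question" input key:

import asyncio

async def main() -> None:
    # ainvoke is the async counterpart of invoke on any Chain
    result = await llm_math.ainvoke({"question": "What is 2 ** 10?"})
    print(result)

asyncio.run(main())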
Validate and prepare chain outputs, and save info about this run to memory.
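A hedged sketch of that step, assuming the prep_outputs(inputs, outputs, return_only_outputs=False) signature from recent langchain versions:

# Validates output keys, saves the turn to memory (if configured),
# and returns inputs merged with outputs unless return_only_outputs=True
final = llm_math.prep_outputs(
    inputs={"question": "What is 7 * 6?"},
    outputs={"answer": "42"},
)
# final == {"question": "What is 7 * 6?", "answer": "42"}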
Prepare chain inputs, including adding inputs from memory.
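As a sketch for a single-input chain such as `llm_math` (an assumption; chains with memory also merge memory variables in at this step):

# A bare value is coerced into a dict keyed by the chain's one input key
inputs = llm_math.prep_inputs("What is 7 * 6?")
# inputs == {"question": "What is 7 * 6?"} when no memory is attached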
Convenience method for executing chain.
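This summary belongs to the legacy run method; a brief sketch, with the modern invoke equivalent:

# run() unwraps the single output key and returns a plain string;
# it is deprecated in favor of invoke()
answer = llm_math.run("What is 551368 divided by 82?")

# Modern equivalent, returning a dict of outputs
result = llm_math.invoke({"question": "What is 551368 divided by 82?"})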
Return dictionary representation of the chain.
Save the chain.
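A short sketch of both, assuming the chain is serializable (requires _chain_type to be implemented and no attached memory):

# dict() returns a serializable spec of the chain's configuration
spec = llm_math.dict()

# save() writes that spec to disk; .json and .yaml suffixes are supported
llm_math.save(file_path="llm_math_chain.yaml")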
Utilize the LLM generate method for speed gains.
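That refers to batching several inputs in one pass; a hedged sketch using the generic Runnable batch API to the same effect:

# One output dict per input, executed concurrently where possible
results = llm_math.batch([
    {"question": "What is 2 + 2?"},
    {"question": "What is 17 * 31?"},
])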
Chain that interprets a prompt and executes Python code to do math.
This class is deprecated. See below for a replacement implementation using LangGraph. The benefits of that implementation include native tool calling, streaming support, and easier extension with additional tools.
Install the packages required by the replacement with:

pip install -U langgraph langchain-openai numexpr
import math
from typing import Annotated, Sequence

import numexpr
from typing_extensions import TypedDict

from langchain_core.messages import BaseMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt.tool_node import ToolNode
@tool
def calculator(expression: str) -> str:
    """Calculate expression using Python's numexpr library.

    Expression should be a single line mathematical expression
    that solves the problem."""
    local_dict = {"pi": math.pi, "e": math.e}
    return str(
        numexpr.evaluate(
            expression.strip(),
            global_dict={},  # restrict access to globals
            local_dict=local_dict,  # add common mathematical constants
        )
    )
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [calculator]
model_with_tools = model.bind_tools(tools, tool_choice="any")
class ChainState(TypedDict):
    """LangGraph state."""

    messages: Annotated[Sequence[BaseMessage], add_messages]
async def acall_chain(state: ChainState, config: RunnableConfig):
    # Force a calculator call on the first model turn
    response = await model_with_tools.ainvoke(state["messages"], config)
    return {"messages": [response]}

async def acall_model(state: ChainState, config: RunnableConfig):
    # Let the model phrase the final answer from the tool result
    response = await model.ainvoke(state["messages"], config)
    return {"messages": [response]}
graph_builder = StateGraph(ChainState)
graph_builder.add_node("call_tool", acall_chain)
graph_builder.add_node("execute_tool", ToolNode(tools))
graph_builder.add_node("call_model", acall_model)
graph_builder.set_entry_point("call_tool")
graph_builder.add_edge("call_tool", "execute_tool")
graph_builder.add_edge("execute_tool", "call_model")
graph_builder.add_edge("call_model", END)
chain = graph_builder.compile()
example_query = "What is 551368 divided by 82"
events = chain.astream(
    {"messages": [("user", example_query)]},
    stream_mode="values",
)
async for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================
What is 551368 divided by 82
================================== Ai Message ==================================
Tool Calls:
calculator (call_MEiGXuJjJ7wGU4aOT86QuGJS)
Call ID: call_MEiGXuJjJ7wGU4aOT86QuGJS
Args:
expression: 551368 / 82
================================= Tool Message =================================
Name: calculator
6724.0
================================== Ai Message ==================================
551368 divided by 82 equals 6724.
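Outside an async context, the compiled graph can also be run synchronously:

result = chain.invoke({"messages": [("user", example_query)]})
result["messages"][-1].pretty_print()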
Example:
from langchain_classic.chains import LLMMathChain
from langchain_openai import OpenAI
llm_math = LLMMathChain.from_llm(OpenAI())
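A hedged usage sketch, assuming the chain's default "question" input key and "answer" output key:

result = llm_math.invoke({"question": "What is 13 raised to the 0.3432 power?"})
print(result["answer"])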