Methods for creating function specs in the style of OpenAI Functions.
Decorator to mark a function, a class, or a property as beta.
When marking a classmethod, a staticmethod, or a property, the @beta decorator
should go under @classmethod and @staticmethod (i.e., beta should directly
decorate the underlying callable), but over @property.
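The ordering rules above can be sketched with a minimal stand-in for the decorator (the real one lives elsewhere in the library and emits warnings; here `beta` merely records the decorated object so the placement is visible):

```python
# Minimal stand-in for @beta: it records the decorated object in a set.
# The real decorator additionally emits a beta annotation/warning.
_BETA = set()

def beta(obj):
    _BETA.add(obj)
    return obj

class Widget:
    # @beta goes *under* @classmethod: it must decorate the raw callable.
    @classmethod
    @beta
    def from_config(cls, config):
        return cls()

    # Same rule for @staticmethod.
    @staticmethod
    @beta
    def helper():
        return "ok"

    # @beta goes *over* @property, so it receives the property object.
    @beta
    @property
    def size(self):
        return 42
```

Putting `@beta` above `@classmethod` would hand it the classmethod wrapper rather than the underlying function, which is why it must sit underneath.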
When marking a class C intended to be used as a base class in a multiple
inheritance hierarchy, C must define an __init__ method (if C instead
inherited its __init__ from its own base class, then @beta would mess up
__init__ inheritance when installing its own (annotation-emitting) C.__init__).
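The base-class caveat can be illustrated with the same kind of stand-in. Here `beta` is a hypothetical class decorator that wraps `__init__` (as the real one does when installing its annotation-emitting replacement); `C` defines `__init__` itself so the wrapping does not disturb inheritance for `C`'s subclasses:

```python
# Stand-in class decorator: wraps cls.__init__ in place, as the real
# @beta does when it installs its annotation-emitting __init__.
def beta(cls):
    original_init = cls.__init__  # relies on cls defining __init__ itself

    def wrapped_init(self, *args, **kwargs):
        # A real implementation would emit a beta warning here.
        original_init(self, *args, **kwargs)

    cls.__init__ = wrapped_init
    return cls

class Base:
    def __init__(self):
        self.base_ready = True

@beta
class C(Base):
    # C must define __init__ itself. If C only inherited Base.__init__,
    # the decorator would pin a new C.__init__ onto the class and break
    # normal __init__ inheritance for subclasses of C.
    def __init__(self):
        super().__init__()
        self.c_ready = True

class D(C):
    pass
```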
Resolve and inline JSON Schema $ref references in a schema object.
This function processes a JSON Schema and resolves all $ref references by
replacing them with the actual referenced content.
Handles both simple references and complex cases like circular references and mixed
$ref objects that contain additional properties alongside the $ref.
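The resolution behavior can be sketched as follows. This is a simplified version assuming only local `#/...` references and omitting the circular-reference handling the full implementation provides; note how sibling keys of a mixed `$ref` object survive the inlining:

```python
# Simplified $ref inlining for a JSON Schema dict. Assumes local
# "#/..." pointers only; circular references would recurse forever here,
# which the real implementation guards against.
def _lookup(root, ref):
    node = root
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node

def dereference(node, root=None):
    root = root if root is not None else node
    if isinstance(node, dict):
        if "$ref" in node:
            target = dereference(_lookup(root, node["$ref"]), root)
            # Mixed $ref objects: keep the extra properties that sit
            # alongside the $ref, layered over the inlined target.
            extras = {k: dereference(v, root)
                      for k, v in node.items() if k != "$ref"}
            return {**target, **extras}
        return {k: dereference(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [dereference(v, root) for v in node]
    return node
```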
Check if the given class is a subclass of Pydantic BaseModel.
Check if the given class is a subclass of any of the following:
- pydantic.BaseModel in Pydantic 2.x
- pydantic.v1.BaseModel in Pydantic 2.x

Convert a raw function/class to an OpenAI function.
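The Pydantic subclass check described above can be sketched in a dependency-tolerant way: try both import paths and fall back to `False` when Pydantic is unavailable (a sketch, not the library's exact implementation):

```python
# Sketch of the subclass check: probes both BaseModel import paths.
# Returns False for non-classes and when Pydantic is not installed.
def is_basemodel_subclass(cls) -> bool:
    if not isinstance(cls, type):
        return False
    try:
        from pydantic import BaseModel  # Pydantic 2.x (top-level)
        if issubclass(cls, BaseModel):
            return True
    except ImportError:
        pass
    try:
        from pydantic.v1 import BaseModel as BaseModelV1  # v1 compat in 2.x
        if issubclass(cls, BaseModelV1):
            return True
    except ImportError:
        pass
    return False
```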
Convert a tool-like object to an OpenAI tool schema.
Convert a schema representation to a JSON schema.
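The conversions above can be sketched for the simplest input, a plain Python function with type hints. The real converter also accepts Pydantic models and tool objects; this hedged version maps a handful of annotations onto JSON Schema types:

```python
# Sketch: plain function -> OpenAI-style function schema
# (name / description / parameters). The mapping table is illustrative.
import inspect

_JSON_TYPES = {int: "integer", float: "number", str: "string", bool: "boolean"}

def function_to_openai_schema(func):
    sig = inspect.signature(func)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": _JSON_TYPES.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> required parameter
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }
```

Wrapping the result as `{"type": "function", "function": ...}` yields the corresponding tool schema.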
Convert an example into a list of messages that can be fed into an LLM.
This code is an adapter that converts a single example to a list of messages that can be fed into a chat model.
The list of messages per example by default corresponds to:
- HumanMessage: contains the content from which content should be extracted.
- AIMessage: contains the extracted information from the model.
- ToolMessage: contains confirmation to the model that the model requested a tool correctly.

If ai_response is specified, there will be a final AIMessage with that response.
The ToolMessage is required because some chat models are hyper-optimized for
agents rather than for an extraction use case.
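The expansion of one example into that message list can be sketched with plain dicts (the library uses its own message classes; the tool name `extract` and the confirmation text are illustrative):

```python
# Sketch: one extraction example -> chat message list.
# Plain dicts stand in for the library's message classes.
import uuid

def example_to_messages(input_text, extraction, ai_response=None):
    tool_call_id = str(uuid.uuid4())
    messages = [
        {"role": "human", "content": input_text},
        {"role": "ai", "content": "", "tool_calls": [
            {"id": tool_call_id, "name": "extract", "args": extraction},
        ]},
        # The tool message confirms the call, which agent-optimized chat
        # models expect to see in a valid trajectory.
        {"role": "tool", "content": "You have correctly called this tool.",
         "tool_call_id": tool_call_id},
    ]
    if ai_response is not None:
        messages.append({"role": "ai", "content": ai_response})
    return messages
```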
Message from an AI.
An AIMessage is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both the raw output as returned by the model and standardized fields (e.g., tool calls, usage metadata) added by the LangChain framework.
Base abstract message class.
Messages are the inputs and outputs of a chat model.
Examples include HumanMessage,
AIMessage, and
SystemMessage.
Message from the user.
A HumanMessage is a message that is passed in from a user to the model.
Message for passing the result of executing a tool back to a model.
ToolMessage objects contain the result of a tool invocation. Typically, the result
is encoded inside the content field.
tool_call_id is used to associate the tool call request with the tool call
response. Useful in situations where a chat model is able to request multiple tool
calls in parallel.
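The hierarchy described above can be sketched with dataclasses. The real classes carry many more fields (e.g. usage metadata on AI messages); this shows only the relationships and the `tool_call_id` linkage:

```python
# Minimal sketch of the message hierarchy; field sets are abbreviated.
from dataclasses import dataclass, field

@dataclass
class BaseMessage:
    content: str

@dataclass
class HumanMessage(BaseMessage):
    """Message passed in from a user to the model."""

@dataclass
class AIMessage(BaseMessage):
    """Model output, with standardized fields such as tool_calls."""
    tool_calls: list = field(default_factory=list)

@dataclass
class ToolMessage(BaseMessage):
    """Result of a tool invocation, associated via tool_call_id."""
    tool_call_id: str = ""
```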
Base class for all LangChain tools.
This abstract class defines the interface that all LangChain tools must implement.
Tools are components that can be called by agents to perform specific actions.
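The interface can be sketched as an abstract base class (the actual base class is richer, with schemas and async variants; `EchoTool` is a hypothetical example):

```python
# Sketch of the tool interface: subclasses supply name, description,
# and the action itself; agents call the tool through invoke().
from abc import ABC, abstractmethod

class BaseTool(ABC):
    name: str = ""
    description: str = ""

    @abstractmethod
    def _run(self, *args, **kwargs):
        """Perform the tool's action."""

    def invoke(self, *args, **kwargs):
        return self._run(*args, **kwargs)

class EchoTool(BaseTool):
    name = "echo"
    description = "Return the input unchanged."

    def _run(self, text: str) -> str:
        return text
```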
Representation of a callable function to send to an LLM.
Representation of a callable function to the OpenAI API.
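These two representations can be sketched as typed dictionaries mirroring the OpenAI function-calling payload shape (field sets are assumptions based on that payload, not the library's exact definitions):

```python
# Sketch: the function description, and the tool wrapper the OpenAI
# API expects around it.
from typing import Any, TypedDict

class FunctionDescription(TypedDict):
    name: str
    description: str
    parameters: dict[str, Any]  # JSON Schema for the arguments

class ToolDescription(TypedDict):
    type: str  # "function" for the OpenAI API
    function: FunctionDescription
```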