langchain.js

    Interface ReactAgent<StructuredResponseFormat, ContextSchema, TMiddleware>

    LangChain Agents

    interface ReactAgent<
        StructuredResponseFormat extends
            | Record<string, any>
            | ResponseFormatUndefined = Record<string, any>,
        ContextSchema extends AnyAnnotationRoot | BaseMessage = AnyAnnotationRoot,
        TMiddleware extends readonly AgentMiddleware<any, any, any>[] =
            readonly AgentMiddleware<any, any, any>[],
    > {
        options: CreateAgentParams<StructuredResponseFormat, ContextSchema>;
        get graph(): AgentGraph<StructuredResponseFormat, ContextSchema, TMiddleware>;
        drawMermaid(params?: {
            backgroundColor?: string;
            curveStyle?: string;
            nodeColors?: Record<string, string>;
            withStyles?: boolean;
            wrapLabelNWords?: number;
        }): Promise<any>;
        drawMermaidPng(params?: {
            backgroundColor?: string;
            curveStyle?: string;
            nodeColors?: Record<string, string>;
            withStyles?: boolean;
            wrapLabelNWords?: number;
        }): Promise<Uint8Array<any>>;
        invoke(
            state: InvokeStateParameter<TMiddleware>,
            config?: InvokeConfiguration<
                InferContextInput<ContextSchema> & InferMiddlewareContextInputs<TMiddleware>
            >,
        ): Promise<MergedAgentState<StructuredResponseFormat, TMiddleware>>;
        stream(
            state: InvokeStateParameter<TMiddleware>,
            config?: StreamConfiguration<
                InferContextInput<ContextSchema> & InferMiddlewareContextInputs<TMiddleware>
            >,
        ): Promise<IterableReadableStream<any>>;
    }

    Type Parameters

    • StructuredResponseFormat extends Record<string, any> | ResponseFormatUndefined = Record<string, any>
    • ContextSchema extends AnyAnnotationRoot | BaseMessage = AnyAnnotationRoot
    • TMiddleware extends readonly AgentMiddleware<any, any, any>[] = readonly AgentMiddleware<any, any, any>[]

    Properties

    options: CreateAgentParams<StructuredResponseFormat, ContextSchema>

    Accessors

    graph: AgentGraph<StructuredResponseFormat, ContextSchema, TMiddleware>

    Methods

    • drawMermaid: Draw the graph as a Mermaid string.

      Parameters

      • Optional params: {
            backgroundColor?: string;
            curveStyle?: string;
            nodeColors?: Record<string, string>;
            withStyles?: boolean;
            wrapLabelNWords?: number;
        }

        Parameters for the drawMermaid method.

        • Optional backgroundColor?: string

          The background color of the graph.

        • Optional curveStyle?: string

          The style of the graph's curves.

        • Optional nodeColors?: Record<string, string>

          The colors of the graph's nodes.

        • Optional withStyles?: boolean

          Whether to include styles in the graph.

        • Optional wrapLabelNWords?: number

          The maximum number of words to wrap in a node's label.

      Returns Promise<any>

      Mermaid string
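
      For instance, a minimal sketch of calling this method (assuming an already-constructed agent; the option values shown are illustrative):

      // Sketch: render the agent graph as Mermaid source text.
      const mermaid = await agent.drawMermaid({
        withStyles: true,         // include default node styling
        backgroundColor: "white", // illustrative value
      });
      console.log(mermaid); // paste into any Mermaid renderer to visualize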

    • drawMermaidPng: Visualize the graph as a PNG image.

      Parameters

      • Optional params: {
            backgroundColor?: string;
            curveStyle?: string;
            nodeColors?: Record<string, string>;
            withStyles?: boolean;
            wrapLabelNWords?: number;
        }

        Parameters for the drawMermaidPng method.

        • Optional backgroundColor?: string

          The background color of the graph.

        • Optional curveStyle?: string

          The style of the graph's curves.

        • Optional nodeColors?: Record<string, string>

          The colors of the graph's nodes.

        • Optional withStyles?: boolean

          Whether to include styles in the graph.

        • Optional wrapLabelNWords?: number

          The maximum number of words to wrap in a node's label.

      Returns Promise<Uint8Array<any>>

      PNG image as a buffer
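
      As a rough sketch (assuming a Node.js environment and an already-constructed agent), the resolved bytes can be written straight to disk:

      import { writeFile } from "node:fs/promises";

      // Sketch: save the rendered graph as a PNG file.
      const png = await agent.drawMermaidPng({ backgroundColor: "white" });
      await writeFile("agent-graph.png", png); // writeFile accepts a Uint8Array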

    • invoke: Executes the agent with the given state and returns the final state after all processing.

      This method runs the agent's entire workflow to completion before resolving, including:

      • Processing the input messages through any configured middleware
      • Calling the language model to generate responses
      • Executing any tool calls made by the model
      • Running all middleware hooks (beforeModel, afterModel, etc.)

      Parameters

      • state: InvokeStateParameter<TMiddleware>

        The initial state for the agent execution. Can be:

        • An object containing a messages array and any middleware-specific state properties
        • A Command object for more advanced control flow
      • Optional config: InvokeConfiguration<
            InferContextInput<ContextSchema> & InferMiddlewareContextInputs<TMiddleware>
        >

        Optional runtime configuration including:

        • context

          The context for the agent execution.

        • configurable

          LangGraph configuration options like thread_id, run_id, etc.

        • store

          The store used to persist state during agent execution; see Memory storage for more.

        • signal

          An optional AbortSignal for the agent execution.

        • recursionLimit

          The recursion limit for the agent execution.

      Returns Promise<MergedAgentState<StructuredResponseFormat, TMiddleware>>

      A Promise that resolves to the final agent state after execution completes. The returned state includes:

      • a messages property containing an array with all messages (input, AI responses, tool calls/results)
      • a structuredResponse property containing the structured response (if configured)
      • all state values defined in the middleware

      const agent = new ReactAgent({
        llm: myModel,
        tools: [calculator, webSearch],
        responseFormat: z.object({
          weather: z.string(),
        }),
      });

      const result = await agent.invoke({
        messages: [{ role: "human", content: "What's the weather in Paris?" }],
      });

      console.log(result.structuredResponse.weather); // outputs: "It's sunny and 75°F."
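
      Building on the example above, a sketch of passing runtime configuration to invoke (the thread_id, recursion limit, and AbortController values are illustrative):

      // Sketch: the same invocation with runtime configuration attached.
      const controller = new AbortController();

      const resultWithConfig = await agent.invoke(
        { messages: [{ role: "human", content: "What's the weather in Paris?" }] },
        {
          configurable: { thread_id: "thread-123" }, // LangGraph options such as thread_id
          recursionLimit: 25,                        // cap the number of agent steps
          signal: controller.signal,                 // allows cancelling the run
        },
      );
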
    • stream: Executes the agent with streaming, returning an async iterable of events as they occur.

      This method runs the same workflow as invoke, but instead of waiting for completion it streams events in real time. This allows you to:

      • Display intermediate results to users as they're generated
      • Monitor the agent's progress through each step
      • Handle tool calls and results as they happen
      • Update UI with streaming responses from the LLM

      Parameters

      • state: InvokeStateParameter<TMiddleware>

        The initial state for the agent execution. Can be:

        • An object containing a messages array and any middleware-specific state properties
        • A Command object for more advanced control flow
      • Optional config: StreamConfiguration<
            InferContextInput<ContextSchema> & InferMiddlewareContextInputs<TMiddleware>
        >

        Optional runtime configuration including:

        • context

          The context for the agent execution.

        • configurable

          LangGraph configuration options like thread_id, run_id, etc.

        • store

          The store used to persist state during agent execution; see Memory storage for more.

        • signal

          An optional AbortSignal for the agent execution.

        • streamMode

          The streaming mode for the agent execution; see Supported stream modes for more.

        • recursionLimit

          The recursion limit for the agent execution.

      Returns Promise<IterableReadableStream<any>>

      A Promise that resolves to an IterableReadableStream of events. Events include:

      • on_chat_model_start: When the LLM begins processing
      • on_chat_model_stream: Streaming tokens from the LLM
      • on_chat_model_end: When the LLM completes
      • on_tool_start: When a tool execution begins
      • on_tool_end: When a tool execution completes
      • on_chain_start: When middleware chains begin
      • on_chain_end: When middleware chains complete
      • And other LangGraph v2 stream events

      const agent = new ReactAgent({
        llm: myModel,
        tools: [calculator, webSearch],
      });

      const stream = await agent.stream({
        messages: [{ role: "human", content: "What's 2+2 and the weather in NYC?" }],
      });

      for await (const event of stream) {
        // handle each streamed event here (see the sketch below)
      }
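
      One way to fill in that loop, as a sketch assuming the LangGraph v2 event shape listed above (an event name plus a data payload; the chunk, content, and output fields are assumptions):

      for await (const event of stream) {
        if (event.event === "on_chat_model_stream") {
          // Print LLM tokens as they arrive.
          process.stdout.write(event.data?.chunk?.content ?? "");
        } else if (event.event === "on_tool_end") {
          // Inspect tool results as they complete.
          console.log(event.name, event.data?.output);
        }
      }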