RunnableSequence(self, *steps, name=None, first=None, middle=None, last=None)
Sequence of Runnable objects, where the output of one is the input of the next.
RunnableSequence is the most important composition operator in LangChain
as it is used in virtually every chain.
A RunnableSequence can be instantiated directly or more commonly by using the
| operator where either the left or right operands (or both) must be a
Runnable.
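How `|` composition works can be pictured with a toy sketch (not LangChain's actual implementation): a minimal class that overloads `__or__` so that piping two steps yields a new step whose output of the first feeds the input of the second.

```python
# Toy sketch (not LangChain's code): how `|` can build a sequence by
# overloading __or__ on a minimal runnable-like class.
class MiniRunnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, x):
        return self.func(x)

    def __or__(self, other):
        # Compose: the output of self becomes the input of other.
        return MiniRunnable(lambda x: other.invoke(self.invoke(x)))


add_one = MiniRunnable(lambda x: x + 1)
mul_two = MiniRunnable(lambda x: x * 2)
seq = add_one | mul_two
print(seq.invoke(1))  # (1 + 1) * 2 = 4
```

The real `RunnableSequence` also flattens nested sequences and tracks `first`/`middle`/`last`, but the composition idea is the same.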
Any RunnableSequence automatically supports sync, async, and batch operations.
The default implementations of batch and abatch utilize thread pools and
asyncio.gather, respectively, and will be faster than naive sequential calls to
invoke or ainvoke for IO-bound Runnables.
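Why a thread pool helps for IO-bound steps can be shown in plain Python (illustrative only, no LangChain involved; `slow_call` is a stand-in that simulates a network-bound step with `time.sleep`):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def slow_call(x):
    # Simulate an IO-bound step (e.g. a network request).
    time.sleep(0.05)
    return x + 1


# Naive sequential invocation: the sleeps add up.
start = time.perf_counter()
results_seq = [slow_call(x) for x in range(8)]
sequential = time.perf_counter() - start

# Thread-pooled "batch": the waits overlap.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results_pool = list(pool.map(slow_call, range(8)))
pooled = time.perf_counter() - start

print(f"sequential {sequential:.2f}s vs pooled {pooled:.2f}s")
```

The results are identical; only the wall-clock time differs, which is exactly the trade `batch` exploits for IO-bound Runnables.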
Batching is implemented by invoking the batch method on each component of the
RunnableSequence in order.
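This per-component batching can be sketched as follows (a toy illustration, not LangChain's code): each step's batch runs over the whole list before the next step starts.

```python
# Toy sketch (not LangChain's implementation): batching a sequence means
# calling each component's batch() on the full list of inputs, in order.
def add_one(x):
    return x + 1


def mul_two(x):
    return x * 2


def batch_step(fn, inputs):
    # Stand-in for one component's .batch(); the real one may use a
    # thread pool for IO-bound work.
    return [fn(x) for x in inputs]


def sequence_batch(steps, inputs):
    for step in steps:
        inputs = batch_step(step, inputs)
    return inputs


print(sequence_batch([add_one, mul_two], [1, 2, 3]))  # [4, 6, 8]
```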
A RunnableSequence preserves the streaming properties of its components, so if
all components of the sequence implement a transform method -- which
is the method that implements the logic to map a streaming input to a streaming
output -- then the sequence will be able to stream input to output!
If any component of the sequence does not implement transform then the streaming will only begin after this component is run. If there are multiple blocking components, streaming begins after the last one.
RunnableLambda does not support transform by default! So if you need to
use a RunnableLambda, be careful about where you place it in a
RunnableSequence (if you need to use the stream/astream methods).
If you need arbitrary logic and need streaming, you can subclass
Runnable, and implement transform for whatever logic you need.
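The transform idea can be sketched with plain generators (illustrative only; the function names here are hypothetical, not LangChain's API): each step maps a stream of input chunks to a stream of output chunks, so output starts flowing before the full input has arrived.

```python
# Toy sketch of transform-style streaming: each step is a generator that
# consumes input chunks and yields output chunks incrementally.
def upper_transform(chunks):
    for chunk in chunks:
        yield chunk.upper()


def exclaim_transform(chunks):
    for chunk in chunks:
        yield chunk + "!"


def stream_through(transforms, chunks):
    # Chain the generators: chunks flow through every step lazily.
    for t in transforms:
        chunks = t(chunks)
    return chunks


print(list(stream_through([upper_transform, exclaim_transform], iter(["a", "b"]))))
# ['A!', 'B!']
```

A step that must consume its entire input before producing anything (the analogue of a component without transform) would block the pipeline at that point, which is exactly the behavior described above.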
Here is a simple example that uses simple functions to illustrate the use of
RunnableSequence:
```python
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


def mul_two(x: int) -> int:
    return x * 2


runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1 | runnable_2
# Or equivalently:
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)  # 4
await sequence.ainvoke(1)  # 4
sequence.batch([1, 2, 3])  # [4, 6, 8]
await sequence.abatch([1, 2, 3])  # [4, 6, 8]
```
Here's an example that streams JSON output generated by an LLM:
```python
from langchain_core.output_parsers.json import SimpleJsonOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

prompt = PromptTemplate.from_template(
    "In JSON format, give me a list of {topic} and their "
    "corresponding names in French, Spanish and in a "
    "Cat Language."
)
model = ChatOpenAI()
chain = prompt | model | SimpleJsonOutputParser()

async for chunk in chain.astream({"topic": "colors"}):
    print("-")  # noqa: T201
    print(chunk, sep="", flush=True)  # noqa: T201
```

| Name | Type | Description |
|---|---|---|
| steps | RunnableLike | Default: `()`. The steps to include in the sequence. |
| name | str \| None | Default: `None`. The name of the Runnable. |
| first | Runnable[Any, Any] \| None | Default: `None`. The first Runnable in the sequence. |
| middle | list[Runnable[Any, Any]] \| None | Default: `None`. The middle Runnables in the sequence. |
| last | Runnable[Any, Any] \| None | Default: `None`. The last Runnable in the sequence. |
Methods inherited from `Runnable`:

- `get_input_jsonschema`: Get a JSON schema that represents the input to the Runnable.
- `get_output_jsonschema`: Get a JSON schema that represents the output of the Runnable.
- `config_schema`: The type of config this Runnable accepts, specified as a Pydantic model.
- `get_config_jsonschema`: Get a JSON schema that represents the config of the Runnable.
- `get_prompts`: Return a list of prompts used by this Runnable.
- `pipe`: Pipe Runnable objects.
- `pick`: Pick keys from the output dict of this Runnable.
- `assign`: Merge the dict input with the output produced by the mapping argument.
- `batch`: Run invoke in parallel on a list of inputs.
- `abatch`: Run ainvoke in parallel on a list of inputs.
- `astream_log`: Stream all output from a Runnable, as reported to the callback system.
- `astream_events`: Generate a stream of events.
- `bind`: Bind arguments to a Runnable, returning a new Runnable.
- `with_listeners`: Bind lifecycle listeners to a Runnable, returning a new Runnable.
- `with_alisteners`: Bind async lifecycle listeners to a Runnable.
- `with_types`: Bind input and output types to a Runnable, returning a new Runnable.
- `with_retry`: Create a new Runnable that retries the original Runnable on exceptions.
- `map`: Map a function to multiple iterables.
- `with_fallbacks`: Add fallbacks to a Runnable, returning a new Runnable.
- `as_tool`: Create a BaseTool from a Runnable.
Attributes and overrides defined on `RunnableSequence`:

- `steps`: The steps to include in the sequence; as a property, all the Runnables that make up the sequence, in order.
- `name`: The name of the Runnable.
- `first`: The first Runnable in the sequence.
- `middle`: The middle Runnables in the sequence.
- `last`: The last Runnable in the sequence.
- `InputType`: The type of the input to the Runnable.
- `OutputType`: The type of the output of the Runnable.
- `config_specs`: Get the config specs of the Runnable.
- `get_lc_namespace`: Get the namespace of the LangChain object.
- `is_lc_serializable`: Return True, as this class is serializable.
- `get_input_schema`: Get the input schema of the Runnable.
- `get_output_schema`: Get the output schema of the Runnable.
- `get_graph`: Get the graph representation of the Runnable.