Optional full TaskSpec dict {input_schema, output_schema}.
When set, it takes precedence over task_output_schema and enables
structured input_schema validation. Required for the Task Group
API's structured-batch pattern. See
https://docs.parallel.ai/task-api/guides/specify-a-task.
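The full TaskSpec shape can be sketched as a plain dict. This is a hedged illustration only: the `{"type": "json", "json_schema": ...}` wrapper and the field names (`company`, `founder`) are assumptions for the example, not copied from the Parallel docs linked above.

```python
# Illustrative TaskSpec dict with both schemas; the wrapper keys and
# field names here are example assumptions, not a verified wire format.
task_spec = {
    "input_schema": {
        "type": "json",
        "json_schema": {
            "type": "object",
            "properties": {"company": {"type": "string"}},
            "required": ["company"],
        },
    },
    "output_schema": {
        "type": "json",
        "json_schema": {
            "type": "object",
            "properties": {"founder": {"type": "string"}},
            "required": ["founder"],
        },
    },
}
```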
Run a single Parallel Task synchronously and return the structured result.
This is the agent-friendly path: an LLM calls the tool with an input,
the tool blocks until the Parallel Task Run completes, then returns a
dict containing the output, citations (basis), and run metadata.
For long-running deep-research tasks, prefer :class:`ParallelDeepResearch`,
which is a :class:`~langchain_core.runnables.Runnable` and returns the same
result shape.
Setup:
export PARALLEL_API_KEY="your-api-key"
Key init args:
processor: Literal[...]
Which Parallel processor to run. Defaults to "lite-fast" —
the -fast variants are 2-5x faster than their non-fast
counterparts at similar accuracy. Use "core" / "pro"
for deep research and "ultra" for the highest-quality
long-running tasks. Add -fast to any tier for
agent-loop-friendly latency.
output_schema: Optional[type[BaseModel] | dict | str]
If a pydantic class, the SDK parses the response into an instance
of the class. If a dict, it's used as the JSON schema. If a
string, it's used as the natural-language output description
(text output mode).
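A sketch of the three accepted forms described above. Field names are illustrative; the pydantic variant is shown as a comment so the snippet stays dependency-free.

```python
# 1) String -> natural-language output description (text output mode).
schema_as_text = "One sentence naming the company's founder."

# 2) Dict -> used directly as the JSON schema for structured output.
schema_as_dict = {
    "type": "object",
    "properties": {"founder": {"type": "string"}},
    "required": ["founder"],
}

# 3) Pydantic class -> the SDK parses the response into an instance
#    (sketch; requires pydantic):
# class FounderInfo(BaseModel):
#     founder: str
```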
mcp_servers: Optional[list[McpServer]]
Bring-your-own MCP servers exposed to the run.
api_key: Optional[SecretStr]
Parallel API key. If not provided, read from the PARALLEL_API_KEY
environment variable.
Invocation:
from langchain_parallel import ParallelTaskRunTool
tool = ParallelTaskRunTool() # processor="lite-fast"
result = tool.invoke({"input": "Who founded SpaceX, in one sentence?"})
# The structured output is at result["output"]["content"]; per-field
# citations are at result["output"]["basis"]; the run id is at
# result["run"]["run_id"].
print(result["output"]["content"])
print(result["output"]["basis"])
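The result shape above can be navigated defensively. This sketch substitutes a placeholder dict for a live API response; the placeholder values (and the empty `basis` list) are invented for illustration.

```python
# Placeholder standing in for a real tool.invoke(...) result.
result = {
    "output": {"content": {"founder": "Elon Musk"}, "basis": []},
    "run": {"run_id": "trun_example"},
}

# Structured output, per-field citations, and run metadata, as documented.
content = result["output"]["content"]
citations = result["output"].get("basis", [])
run_id = result["run"]["run_id"]
```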