langchain-deepseek

Reference docs

This page contains reference documentation for DeepSeek. See the docs for conceptual guides, tutorials, and examples on using ChatDeepSeek.

langchain_deepseek

LangChain DeepSeek integration.

ChatDeepSeek

Bases: BaseChatOpenAI

DeepSeek chat model integration for accessing models hosted on DeepSeek's API.

Setup

Install langchain-deepseek and set environment variable DEEPSEEK_API_KEY.

pip install -U langchain-deepseek
export DEEPSEEK_API_KEY="your-api-key"

Key init args — completion params:

  • model: Name of DeepSeek model to use, e.g. 'deepseek-chat'.
  • temperature: Sampling temperature.
  • max_tokens: Max number of tokens to generate.

Key init args — client params:

  • timeout: Timeout for requests.
  • max_retries: Max number of retries.
  • api_key: DeepSeek API key. If not passed in, will be read from env var DEEPSEEK_API_KEY.

See full list of supported init args and their descriptions in the params section.

Instantiate
from langchain_deepseek import ChatDeepSeek

model = ChatDeepSeek(
    model="...",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # api_key="...",
    # other params...
)
Invoke
messages = [
    ("system", "You are a helpful translator. Translate the user sentence to French."),
    ("human", "I love programming."),
]
model.invoke(messages)
Stream

for chunk in model.stream(messages):
    print(chunk.text, end="")
stream = model.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
full

Async
await model.ainvoke(messages)

# stream:
async for chunk in model.astream(messages):
    print(chunk.text, end="")

# batch:
await model.abatch([messages])
Tool calling
from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


model_with_tools = model.bind_tools([GetWeather, GetPopulation])
ai_msg = model_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
ai_msg.tool_calls

See ChatDeepSeek.bind_tools() method for more.

Structured output
from pydantic import BaseModel, Field


class Joke(BaseModel):
    '''Joke to tell user.'''

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: int | None = Field(default=None, description="How funny the joke is, from 1 to 10")


structured_model = model.with_structured_output(Joke)
structured_model.invoke("Tell me a joke about cats")

See ChatDeepSeek.with_structured_output() for more.

Token usage

ai_msg = model.invoke(messages)
ai_msg.usage_metadata
{"input_tokens": 28, "output_tokens": 5, "total_tokens": 33}
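
The totals in usage_metadata are additive; a quick stdlib-only check, using the numbers from the example above:

```python
# usage_metadata values copied from the invoke example above
usage = {"input_tokens": 28, "output_tokens": 5, "total_tokens": 33}

# total_tokens is the sum of input (prompt) and output (completion) tokens
assert usage["input_tokens"] + usage["output_tokens"] == usage["total_tokens"]
```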

METHOD DESCRIPTION
validate_environment

Validate necessary environment vars and client params.

bind_tools

Bind tool-like objects to this chat model.

with_structured_output

Model wrapper that returns outputs formatted to match the given schema.

model_name class-attribute instance-attribute

model_name: str = Field(alias='model')

The name of the model

api_key class-attribute instance-attribute

api_key: SecretStr | None = Field(
    default_factory=secret_from_env("DEEPSEEK_API_KEY", default=None)
)

DeepSeek API key

api_base class-attribute instance-attribute

api_base: str = Field(
    default_factory=from_env("DEEPSEEK_API_BASE", default=DEFAULT_API_BASE)
)

DeepSeek API base URL

lc_secrets property

lc_secrets: dict[str, str]

A map of constructor argument names to secret ids.

validate_environment

validate_environment() -> Self

Validate necessary environment vars and client params.

bind_tools

bind_tools(
    tools: Sequence[dict[str, Any] | type | Callable | BaseTool],
    *,
    tool_choice: dict | str | bool | None = None,
    strict: bool | None = None,
    parallel_tool_calls: bool | None = None,
    **kwargs: Any,
) -> Runnable[LanguageModelInput, AIMessage]

Bind tool-like objects to this chat model.

Overrides parent to use beta endpoint when strict=True.

PARAMETER DESCRIPTION
tools

A list of tool definitions to bind to this chat model.

TYPE: Sequence[dict[str, Any] | type | Callable | BaseTool]

tool_choice

Which tool to require the model to call.

TYPE: dict | str | bool | None DEFAULT: None

strict

If True, uses beta API for strict schema validation.

TYPE: bool | None DEFAULT: None

parallel_tool_calls

Set to False to disable parallel tool use.

TYPE: bool | None DEFAULT: None

**kwargs

Additional parameters passed to parent bind_tools.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
Runnable[LanguageModelInput, AIMessage]

A Runnable that takes the same inputs as a chat model.
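
Besides Pydantic classes and callables, tools can also be passed as raw OpenAI-style tool schema dicts. A minimal sketch of that dict form (the get_weather tool here is illustrative, not part of this package):

```python
# An OpenAI-style function tool definition — one of the dict forms
# accepted by bind_tools. All names below are illustrative.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                }
            },
            "required": ["location"],
        },
    },
}

# model.bind_tools([get_weather_tool]) would then register it just like
# the Pydantic-based GetWeather example above.
```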

with_structured_output

with_structured_output(
    schema: _DictOrPydanticClass | None = None,
    *,
    method: Literal[
        "function_calling", "json_mode", "json_schema"
    ] = "function_calling",
    include_raw: bool = False,
    strict: bool | None = None,
    **kwargs: Any,
) -> Runnable[LanguageModelInput, _DictOrPydantic]

Model wrapper that returns outputs formatted to match the given schema.

PARAMETER DESCRIPTION
schema

The output schema. Can be passed in as:

  • An OpenAI function/tool schema,
  • A JSON Schema,
  • A TypedDict class,
  • Or a Pydantic class.

If schema is a Pydantic class then the model output will be a Pydantic instance of that class, and the model-generated fields will be validated by the Pydantic class. Otherwise the model output will be a dict and will not be validated.

See langchain_core.utils.function_calling.convert_to_openai_tool for more on how to properly specify types and descriptions of schema fields when specifying a Pydantic or TypedDict class.

TYPE: _DictOrPydanticClass | None DEFAULT: None

method

The method for steering model generation, one of:

  • 'function_calling': use tool-calling to produce output matching the schema,
  • 'json_mode': use DeepSeek's JSON mode,
  • 'json_schema': use strict JSON schema validation.

TYPE: Literal['function_calling', 'json_mode', 'json_schema'] DEFAULT: 'function_calling'

include_raw

If False then only the parsed structured output is returned.

If an error occurs during model output parsing it will be raised.

If True then both the raw model response (a BaseMessage) and the parsed model response will be returned.

If an error occurs during output parsing it will be caught and returned as well.

The final output is always a dict with keys 'raw', 'parsed', and 'parsing_error'.

TYPE: bool DEFAULT: False

strict

Whether to enable strict schema adherence when generating the function call. When set to True, DeepSeek will use the beta API endpoint (https://api.deepseek.com/beta) for strict schema validation. This ensures model outputs exactly match the defined schema.

Note

DeepSeek's strict mode requires all object properties to be marked as required in the schema.

TYPE: bool | None DEFAULT: None
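
The note above means a strict-compatible JSON Schema must list every property under "required". A stdlib-only structural check on a candidate schema (the joke_schema dict below is illustrative, mirroring the Joke class from earlier):

```python
# Candidate JSON Schema for the Joke example; under DeepSeek's strict
# mode, every key in "properties" must also appear in "required".
joke_schema = {
    "type": "object",
    "properties": {
        "setup": {"type": "string"},
        "punchline": {"type": "string"},
        "rating": {"type": ["integer", "null"]},
    },
    "required": ["setup", "punchline", "rating"],  # all properties, per the note
}

# Structural check: strict mode rejects schemas with optional properties
assert set(joke_schema["required"]) == set(joke_schema["properties"])
```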

**kwargs

Additional keyword arguments are not supported.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
Runnable[LanguageModelInput, _DictOrPydantic]

A Runnable that takes the same inputs as a langchain_core.language_models.chat_models.BaseChatModel. If include_raw is False and schema is a Pydantic class, the Runnable outputs an instance of schema (i.e., a Pydantic object). Otherwise, if include_raw is False, the Runnable outputs a dict.

If include_raw is True, then Runnable outputs a dict with keys:

  • 'raw': BaseMessage
  • 'parsed': None if there was a parsing error, otherwise the type depends on the schema as described above.
  • 'parsing_error': BaseException | None