langchain-deepseek

Modules:

| Name | Description |
|---|---|
| chat_models | DeepSeek chat models. |

Classes:

| Name | Description |
|---|---|
| ChatDeepSeek | DeepSeek chat model integration to access models hosted in DeepSeek's API. |
ChatDeepSeek

Bases: BaseChatOpenAI

DeepSeek chat model integration to access models hosted in DeepSeek's API.
Setup

Install langchain-deepseek and set the environment variable DEEPSEEK_API_KEY.

.. code-block:: bash

    pip install -U langchain-deepseek
    export DEEPSEEK_API_KEY="your-api-key"
Key init args — completion params:
    model: str
        Name of DeepSeek model to use, e.g. "deepseek-chat".
    temperature: float
        Sampling temperature.
    max_tokens: Optional[int]
        Max number of tokens to generate.

Key init args — client params:
    timeout: Optional[float]
        Timeout for requests.
    max_retries: int
        Max number of retries.
    api_key: Optional[str]
        DeepSeek API key. If not passed in, will be read from the env var DEEPSEEK_API_KEY.

See the full list of supported init args and their descriptions in the params section.
Instantiate

.. code-block:: python

    from langchain_deepseek import ChatDeepSeek

    llm = ChatDeepSeek(
        model="...",
        temperature=0,
        max_tokens=None,
        timeout=None,
        max_retries=2,
        # api_key="...",
        # other params...
    )
Invoke

.. code-block:: python

    messages = [
        ("system", "You are a helpful translator. Translate the user sentence to French."),
        ("human", "I love programming."),
    ]
    llm.invoke(messages)
Stream

.. code-block:: python

    for chunk in llm.stream(messages):
        print(chunk.text, end="")

To accumulate the streamed chunks into a single message:

.. code-block:: python

    stream = llm.stream(messages)
    full = next(stream)
    for chunk in stream:
        full += chunk
    full
Async

.. code-block:: python

    await llm.ainvoke(messages)

    # stream:
    # async for chunk in llm.astream(messages):
    #     print(chunk.content, end="")

    # batch:
    # await llm.abatch([messages])
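A minimal runnable sketch of the async API, assuming the same llm and messages defined above and an entry point that can start an event loop:

.. code-block:: python

    import asyncio

    async def main() -> None:
        # Single async call.
        result = await llm.ainvoke(messages)
        print(result.content)

        # astream returns an async generator, so it is iterated directly
        # without awaiting it first.
        async for chunk in llm.astream(messages):
            print(chunk.content, end="", flush=True)

    asyncio.run(main())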
Tool calling

.. code-block:: python

    from pydantic import BaseModel, Field


    class GetWeather(BaseModel):
        '''Get the current weather in a given location'''

        location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


    class GetPopulation(BaseModel):
        '''Get the current population in a given location'''

        location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


    llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
    ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
    ai_msg.tool_calls

See the ChatDeepSeek.bind_tools() method for more.
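Each entry in ai_msg.tool_calls is a dict with name, args, and id keys. A minimal sketch of completing the loop by answering the tool calls, assuming stubbed results in place of real weather and population lookups:

.. code-block:: python

    from langchain_core.messages import ToolMessage

    # Hypothetical stubbed results, keyed by tool name.
    stub_results = {
        "GetWeather": "It is 22°C and sunny.",
        "GetPopulation": "About 8.3 million people.",
    }

    history = [
        ("human", "Which city is hotter today and which is bigger: LA or NY?"),
        ai_msg,
    ]
    for tool_call in ai_msg.tool_calls:
        # Every tool call must be answered with a ToolMessage carrying its id.
        history.append(
            ToolMessage(
                content=stub_results[tool_call["name"]],
                tool_call_id=tool_call["id"],
            )
        )

    final = llm_with_tools.invoke(history)
    print(final.content)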
Structured output

.. code-block:: python

    from typing import Optional

    from pydantic import BaseModel, Field


    class Joke(BaseModel):
        '''Joke to tell user.'''

        setup: str = Field(description="The setup of the joke")
        punchline: str = Field(description="The punchline to the joke")
        rating: Optional[int] = Field(default=None, description="How funny the joke is, from 1 to 10")


    structured_llm = llm.with_structured_output(Joke)
    structured_llm.invoke("Tell me a joke about cats")

See ChatDeepSeek.with_structured_output() for more.
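Since parsing can fail, a hedged sketch of using include_raw=True to get the raw message, the parsed object, and any parsing error back together:

.. code-block:: python

    structured_llm = llm.with_structured_output(Joke, include_raw=True)
    out = structured_llm.invoke("Tell me a joke about cats")

    out["raw"]            # the underlying AIMessage
    out["parsed"]         # a Joke instance, or None if parsing failed
    out["parsing_error"]  # None, or the exception raised during parsing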
Token usage

.. code-block:: python

    ai_msg = llm.invoke(messages)
    ai_msg.usage_metadata

.. code-block:: python

    {"input_tokens": 28, "output_tokens": 5, "total_tokens": 33}

Response metadata

.. code-block:: python

    ai_msg = llm.invoke(messages)
    ai_msg.response_metadata
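usage_metadata is standardized across providers, while response_metadata mirrors the raw API response, so its exact keys are provider-dependent. A hedged sketch, assuming the OpenAI-compatible keys model_name and finish_reason:

.. code-block:: python

    ai_msg = llm.invoke(messages)

    # Standardized usage keys.
    print(ai_msg.usage_metadata["total_tokens"])

    # Provider-dependent metadata; these keys are assumptions.
    print(ai_msg.response_metadata.get("model_name"))
    print(ai_msg.response_metadata.get("finish_reason"))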
Methods:

| Name | Description |
|---|---|
| with_structured_output | Model wrapper that returns outputs formatted to match the given schema. |

Attributes:

| Name | Type | Description |
|---|---|---|
| model_name | str | The name of the model. |
| api_key | Optional[SecretStr] | DeepSeek API key. |
| api_base | str | DeepSeek API base URL. |
| lc_secrets | dict[str, str] | A map of constructor argument names to secret ids. |
model_name (class-attribute, instance-attribute)

.. code-block:: python

    model_name: str = Field(alias='model')

The name of the model.
api_key (class-attribute, instance-attribute)

.. code-block:: python

    api_key: Optional[SecretStr] = Field(
        default_factory=secret_from_env(
            "DEEPSEEK_API_KEY", default=None
        )
    )

DeepSeek API key.
api_base (class-attribute, instance-attribute)

.. code-block:: python

    api_base: str = Field(
        default_factory=from_env(
            "DEEPSEEK_API_BASE", default=DEFAULT_API_BASE
        )
    )

DeepSeek API base URL.
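Both attributes can also be set explicitly at construction time instead of via environment variables; a minimal sketch (the base URL shown is an assumption, the real default is DEFAULT_API_BASE):

.. code-block:: python

    from langchain_deepseek import ChatDeepSeek

    llm = ChatDeepSeek(
        model="deepseek-chat",
        api_key="your-api-key",  # otherwise read from DEEPSEEK_API_KEY
        api_base="https://api.deepseek.com/v1",  # assumed; defaults to DEFAULT_API_BASE
    )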
with_structured_output

.. code-block:: python

    with_structured_output(
        schema: Optional[_DictOrPydanticClass] = None,
        *,
        method: Literal[
            "function_calling", "json_mode", "json_schema"
        ] = "function_calling",
        include_raw: bool = False,
        strict: Optional[bool] = None,
        **kwargs: Any
    ) -> Runnable[LanguageModelInput, _DictOrPydantic]
Model wrapper that returns outputs formatted to match the given schema.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| schema | Optional[_DictOrPydanticClass] | The output schema. Can be passed in as a Pydantic class, a TypedDict class, or a JSON Schema / OpenAI tool schema dict. If a Pydantic class is given, the model output is an instance of that class; otherwise it is a dict matching the schema. | None |
| method | Literal['function_calling', 'json_mode', 'json_schema'] | The method for steering model generation, one of 'function_calling', 'json_mode', or 'json_schema'. Behavior changed in 0.1.3: added support for 'json_schema'. | 'function_calling' |
| include_raw | bool | If False, only the parsed structured output is returned, and any error raised during parsing propagates. If True, both the raw model response (a BaseMessage) and the parsed response are returned, and parsing errors are caught and returned as well. The final output is always a dict with keys 'raw', 'parsed', and 'parsing_error'. | False |
| strict | Optional[bool] | Whether to enable strict schema adherence when generating the function call. Included for compatibility with other chat models; if specified, it is passed to the Chat Completions API per the OpenAI API specification, but the DeepSeek API may ignore it. | None |
| kwargs | Any | Additional keyword args aren't supported. | {} |
Returns:

| Type | Description |
|---|---|
| Runnable[LanguageModelInput, _DictOrPydantic] | A Runnable that takes the same inputs as a BaseChatModel. If include_raw is False and schema is a Pydantic class, the Runnable outputs an instance of schema; otherwise it outputs a dict matching the schema. If include_raw is True, the Runnable outputs a dict with keys 'raw' (the BaseMessage), 'parsed' (the parsed output, or None on parsing error), and 'parsing_error' (the caught exception, if any). |
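A hedged sketch of choosing between steering methods; note that under the OpenAI-style contract that json_mode follows, the prompt itself must instruct the model to emit JSON, which is assumed to carry over to the DeepSeek API:

.. code-block:: python

    from pydantic import BaseModel

    class Answer(BaseModel):
        answer: str
        justification: str

    # Default: the schema is conveyed via tool/function calling.
    fc_llm = llm.with_structured_output(Answer, method="function_calling")
    fc_llm.invoke("What weighs more, a pound of bricks or a pound of feathers?")

    # json_mode: the prompt must explicitly ask for JSON with the right keys.
    jm_llm = llm.with_structured_output(Answer, method="json_mode")
    jm_llm.invoke(
        "What weighs more, a pound of bricks or a pound of feathers? "
        "Answer in JSON with 'answer' and 'justification' keys."
    )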