# SambaStudio

> **Class** in `langchain_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-community/llms/sambanova/SambaStudio)

SambaStudio large language models.

## Signature

```python
SambaStudio(
    self,
    **kwargs: Any,
)
```

## Description

**Setup:**

To use, set the environment variables
``SAMBASTUDIO_URL`` to your SambaStudio environment URL and
``SAMBASTUDIO_API_KEY`` to your SambaStudio endpoint API key.
See https://sambanova.ai/products/enterprise-ai-platform-sambanova-suite for the platform,
and https://docs.sambanova.ai/sambastudio/latest/index.html for further documentation.
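
For example, the variables can also be set from Python before constructing the client; the values below are placeholders, not real credentials:

.. code-block:: python

    import os

    # Placeholder values: replace with your actual SambaStudio deployment details.
    os.environ["SAMBASTUDIO_URL"] = "your-SambaStudio-environment-URL"
    os.environ["SAMBASTUDIO_API_KEY"] = "your-SambaStudio-API-key"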
Example:

.. code-block:: python

    from langchain_community.llms.sambanova import SambaStudio

    SambaStudio(
        sambastudio_url="your-SambaStudio-environment-URL",
        sambastudio_api_key="your-SambaStudio-API-key",
        model_kwargs={
            "model": "model-or-expert-name",  # set for Bundle endpoints
            "max_tokens": 1024,  # max number of tokens to generate
            "temperature": 0.7,  # model temperature
            "top_p": 0.95,  # model top p
            "top_k": 50,  # model top k
            "do_sample": True,  # whether to sample
            "process_prompt": True,  # set for Bundle generic v1 and v2 endpoints
        },
    )

Key init args — completion params:
    model: str
        The name of the model to use, e.g., Meta-Llama-3-70B-Instruct-4096
        (set for Bundle endpoints).
    streaming: bool
        Whether to use the streaming handler when calling non-streaming methods.
    model_kwargs: dict
        Extra keyword arguments to pass to the model:
            max_tokens: int
                max number of tokens to generate
            temperature: float
                model temperature
            top_p: float
                model top p
            top_k: int
                model top k
            do_sample: bool
                whether to sample
            process_prompt: bool
                whether to process the prompt
                (set for Bundle generic v1 and v2 endpoints)
Key init args — client params:
    sambastudio_url: str
        SambaStudio endpoint URL
    sambastudio_api_key: str
        SambaStudio endpoint API key

**Instantiate:**

.. code-block:: python

    from langchain_community.llms import SambaStudio

    llm = SambaStudio(
        sambastudio_url="your-SambaStudio-deployed-endpoint-URL",
        sambastudio_api_key="your-SambaStudio-deployed-endpoint-key",
        model_kwargs={
            "model": "model-or-expert-name",  # set for Bundle endpoints
            "max_tokens": 1024,  # max number of tokens to generate
            "temperature": 0.7,  # model temperature
            "top_p": 0.95,  # model top p
            "top_k": 50,  # model top k
            "do_sample": True,  # whether to sample
            "process_prompt": True,  # set for Bundle generic v1 and v2 endpoints
        },
    )

**Invoke:**

.. code-block:: python

    prompt = "tell me a joke"
    response = llm.invoke(prompt)

**Stream:**

.. code-block:: python

    for chunk in llm.stream(prompt):
        print(chunk, end="", flush=True)

**Async:**

.. code-block:: python

    response = await llm.ainvoke(prompt)
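
As a sketch, `ainvoke` calls can also be fanned out concurrently with `asyncio.gather`. A trivial stand-in class is used here in place of a configured `SambaStudio` instance, so the snippet runs without credentials; swap in your real `llm` in practice:

.. code-block:: python

    import asyncio

    class _EchoLLM:
        """Stand-in for a configured SambaStudio instance (runs offline)."""
        async def ainvoke(self, prompt: str) -> str:
            return f"response to: {prompt}"

    llm = _EchoLLM()

    async def main() -> list:
        prompts = ["tell me a joke", "tell me a fact"]
        # Each ainvoke call returns a coroutine; gather awaits them concurrently.
        return await asyncio.gather(*(llm.ainvoke(p) for p in prompts))

    results = asyncio.run(main())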

## Extends

- `LLM`

## Constructors

```python
__init__(
    self,
    **kwargs: Any,
) -> None
```


## Properties

- `sambastudio_url`
- `sambastudio_api_key`
- `base_url`
- `streaming_url`
- `streaming`
- `model_kwargs`
- `lc_secrets`

## Methods

- [`is_lc_serializable()`](https://reference.langchain.com/python/langchain-community/llms/sambanova/SambaStudio/is_lc_serializable)

---

[View source on GitHub](https://github.com/langchain-ai/langchain-community/blob/a6a6079511ac8a5c1293337f88096b8641562e77/libs/community/langchain_community/llms/sambanova.py#L13)