# BaseOpenAI

> **Class** in `langchain_openai`

📖 [View in docs](https://reference.langchain.com/python/langchain-openai/llms/base/BaseOpenAI)

Base OpenAI large language model class.

## Signature

```python
BaseOpenAI()
```

## Description

**Setup:**

Install `langchain-openai` and set environment variable `OPENAI_API_KEY`.

```bash
pip install -U langchain-openai
export OPENAI_API_KEY="your-api-key"
```

**Key init args — completion params:**

- `model_name`: Name of OpenAI model to use.
- `temperature`: Sampling temperature.
- `max_tokens`: Max number of tokens to generate.
- `top_p`: Total probability mass of tokens to consider at each step.
- `frequency_penalty`: Penalizes repeated tokens according to frequency.
- `presence_penalty`: Penalizes repeated tokens.
- `n`: How many completions to generate for each prompt.
- `best_of`: Generates `best_of` completions server-side and returns the "best".
- `logit_bias`: Adjust the probability of specific tokens being generated.
- `seed`: Seed for generation.
- `logprobs`: Include the log probabilities on the `logprobs` most likely output tokens.
- `streaming`: Whether to stream the results or not.

**Key init args — client params:**

- `openai_api_key`: OpenAI API key. If not passed in, it is read from the env var `OPENAI_API_KEY`.
- `openai_api_base`: Base URL for API requests; leave blank unless using a proxy or service emulator.
- `openai_organization`: OpenAI organization ID. If not passed in, it is read from the env var `OPENAI_ORG_ID`.
- `request_timeout`: Timeout for requests to the OpenAI completion API.
- `max_retries`: Maximum number of retries to make when generating.
- `batch_size`: Batch size to use when passing multiple documents to generate.

See the full list of supported init args and their descriptions in the Properties section below.

**Instantiate:**

```python
from langchain_openai.llms.base import BaseOpenAI

model = BaseOpenAI(
    model_name="gpt-3.5-turbo-instruct",
    temperature=0.7,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
    # openai_api_key="...",
    # openai_api_base="...",
    # openai_organization="...",
    # other params...
)
```

**Invoke:**

```python
input_text = "The meaning of life is "
response = model.invoke(input_text)
print(response)
```

```txt
"a philosophical question that has been debated by thinkers and
scholars for centuries."
```

**Stream:**

```python
for chunk in model.stream(input_text):
    print(chunk, end="")
```
```txt
a philosophical question that has been debated by thinkers and
scholars for centuries.
```

**Async:**

```python
response = await model.ainvoke(input_text)

# stream:
# async for chunk in model.astream(input_text):
#     print(chunk, end="")

# batch:
# await model.abatch([input_text])
```
```txt
"a philosophical question that has been debated by thinkers and
scholars for centuries."
```

## Extends

- `BaseLLM`

## Properties

- `client`
- `async_client`
- `model_name`
- `temperature`
- `max_tokens`
- `top_p`
- `frequency_penalty`
- `presence_penalty`
- `n`
- `best_of`
- `model_kwargs`
- `openai_api_key`
- `openai_api_base`
- `openai_organization`
- `openai_proxy`
- `batch_size`
- `request_timeout`
- `logit_bias`
- `max_retries`
- `seed`
- `logprobs`
- `streaming`
- `allowed_special`
- `disallowed_special`
- `tiktoken_model_name`
- `default_headers`
- `default_query`
- `http_client`
- `http_async_client`
- `extra_body`
- `model_config`
- `max_context_size`

## Methods

- [`build_extra()`](https://reference.langchain.com/python/langchain-openai/llms/base/BaseOpenAI/build_extra)
- [`validate_environment()`](https://reference.langchain.com/python/langchain-openai/llms/base/BaseOpenAI/validate_environment)
- [`get_sub_prompts()`](https://reference.langchain.com/python/langchain-openai/llms/base/BaseOpenAI/get_sub_prompts)
- [`create_llm_result()`](https://reference.langchain.com/python/langchain-openai/llms/base/BaseOpenAI/create_llm_result)
- [`get_token_ids()`](https://reference.langchain.com/python/langchain-openai/llms/base/BaseOpenAI/get_token_ids)
- [`modelname_to_contextsize()`](https://reference.langchain.com/python/langchain-openai/llms/base/BaseOpenAI/modelname_to_contextsize)
- [`max_tokens_for_prompt()`](https://reference.langchain.com/python/langchain-openai/llms/base/BaseOpenAI/max_tokens_for_prompt)

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/8fec4e7ceee2c368b068c49f9fed453276e210e7/libs/partners/openai/langchain_openai/llms/base.py#L53)