BaseOpenAI()
Base OpenAI large language model class.
Setup:
Install langchain-openai and set environment variable OPENAI_API_KEY.
pip install -U langchain-openai
export OPENAI_API_KEY="your-api-key"
Key init args — completion params:
model_name:
Name of OpenAI model to use.
temperature:
Sampling temperature.
max_tokens:
Max number of tokens to generate.
top_p:
Total probability mass of tokens to consider at each step.
frequency_penalty:
Penalizes repeated tokens according to frequency.
presence_penalty:
Penalizes repeated tokens.
n:
How many completions to generate for each prompt.
best_of:
Generates best_of completions server-side and returns the "best".
logit_bias:
Adjust the probability of specific tokens being generated.
seed:
Seed for generation.
logprobs:
Include the log probabilities on the logprobs most likely output tokens.
streaming:
Whether to stream the results or not.
Key init args — client params:
openai_api_key:
OpenAI API key. If not passed in, will be read from env var
OPENAI_API_KEY.
openai_api_base:
Base URL path for API requests; leave blank if not using a proxy or
service emulator.
openai_organization:
OpenAI organization ID. If not passed in, will be read from env
var OPENAI_ORG_ID.
request_timeout:
Timeout for requests to OpenAI completion API.
max_retries:
Maximum number of retries to make when generating.
batch_size:
Batch size to use when passing multiple documents to generate.
See full list of supported init args and their descriptions in the params section.
Instantiate:
from langchain_openai.llms.base import BaseOpenAI
model = BaseOpenAI(
model_name="gpt-3.5-turbo-instruct",
temperature=0.7,
max_tokens=256,
top_p=1,
frequency_penalty=0,
presence_penalty=0,
# openai_api_key="...",
# openai_api_base="...",
# openai_organization="...",
# other params...
)
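The less common completion and client params from the lists above can be passed the same way. A second, minimal sketch (the values here are arbitrary illustrations, not recommended defaults):
# reusing the BaseOpenAI import from the Instantiate example
model = BaseOpenAI(
model_name="gpt-3.5-turbo-instruct",
seed=42,  # fix the sampling seed for more reproducible generations
logprobs=5,  # include log probabilities for the 5 most likely tokens
request_timeout=30,  # seconds to wait on the OpenAI completion API
max_retries=2,  # retry transient failures up to twice
batch_size=20,  # prompts per request when generating over many documents
)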
Invoke:
input_text = "The meaning of life is "
response = model.invoke(input_text)
print(response)
"a philosophical question that has been debated by thinkers and
scholars for centuries."
Stream:
for chunk in model.stream(input_text):
print(chunk, end="")
a philosophical question that has been debated by thinkers and
scholars for centuries.
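Batch:
Multiple prompts can also be run in one synchronous call via the Runnable batch API, the counterpart of abatch in the Async section below (a minimal sketch; the second prompt is illustrative):
responses = model.batch([input_text, "The capital of France is "])
for response in responses:
print(response)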
Async:
response = await model.ainvoke(input_text)
# stream:
# async for chunk in model.astream(input_text):
# print(chunk, end="")
# batch:
# await model.abatch([input_text])
"a philosophical question that has been debated by thinkers and
scholars for centuries."
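Max tokens for prompt:
BaseOpenAI also exposes max_tokens_for_prompt() to calculate the maximum number of tokens possible to generate for a prompt (a minimal sketch; the result depends on the model's context window and the prompt's token count):
max_new_tokens = model.max_tokens_for_prompt(input_text)
print(max_new_tokens)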