OpenAI completion model integration.

Key attributes and helper methods (names as in the langchain-openai implementation):
temperature: What sampling temperature to use.
build_extra: Build extra kwargs from additional params that were passed in.
validate_environment: Validate that the API key and python package exist in the environment.
get_sub_prompts: Get the sub-prompts for the LLM call.
create_llm_result: Create the LLMResult from the choices and prompts.
get_token_ids: Get the tokens present in the text with the tiktoken package.
modelname_to_contextsize: Calculate the maximum number of tokens possible to generate for a model.
max_tokens_for_prompt: Calculate the maximum number of tokens possible to generate for a prompt.
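For example, the token helpers can be called directly (a minimal sketch; constructing the model requires OPENAI_API_KEY to be set, but tokenization itself runs locally via tiktoken):

from langchain_openai import OpenAI

model = OpenAI(model="gpt-3.5-turbo-instruct")
# Tokenize the prompt locally with tiktoken; no API call is made.
token_ids = model.get_token_ids("The meaning of life is ")
print(len(token_ids))
# Remaining generation budget for this model given the prompt.
print(model.max_tokens_for_prompt("The meaning of life is "))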
Setup:
Install langchain-openai and set the environment variable OPENAI_API_KEY.
pip install -U langchain-openai
export OPENAI_API_KEY="your-api-key"
Key init args — completion params:
model:
Name of OpenAI model to use.
temperature:
Sampling temperature.
max_tokens:
Max number of tokens to generate.
logprobs:
Whether to return logprobs.
stream_options:
Configure streaming outputs, like whether to return token usage when
streaming ({"include_usage": True}).
Key init args — client params:
timeout:
Timeout for requests.
max_retries:
Max number of retries.
api_key:
OpenAI API key. If not passed in, it will be read from the env var OPENAI_API_KEY.
base_url:
Base URL for API requests. Only specify if using a proxy or service
emulator.
organization:
OpenAI organization ID. If not passed in, it will be read from the env
var OPENAI_ORG_ID.
See the full list of supported init args and their descriptions in the params section.
Instantiate:
from langchain_openai import OpenAI

model = OpenAI(
    model="gpt-3.5-turbo-instruct",
    temperature=0,
    max_retries=2,
    # api_key="...",
    # base_url="...",
    # organization="...",
    # other params...
)
Invoke:
input_text = "The meaning of life is "
model.invoke(input_text)
"a philosophical question that has been debated by thinkers and scholars for centuries."
Stream:
for chunk in model.stream(input_text):
    print(chunk, end="|")
a| philosophical| question| that| has| been| debated| by| thinkers| and| scholars| for| centuries|.
"".join(model.stream(input_text))
"a philosophical question that has been debated by thinkers and scholars for centuries."
Async:
await model.ainvoke(input_text)

# stream:
# async for chunk in model.astream(input_text):
#     print(chunk)

# batch:
# await model.abatch([input_text])
"a philosophical question that has been debated by thinkers and scholars for centuries."