class GooglePalm
Bases: BaseLLM, BaseModel
Model name to use.
Run inference with this temperature. Must be in the closed interval [0.0, 1.0].
Decode using nucleus sampling: consider the smallest set of tokens whose probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
Decode using top-k sampling: consider the set of top_k most probable tokens. Must be positive.
Maximum number of tokens to include in a candidate. Must be greater than zero. If unset, defaults to 64.
Number of chat completions to generate for each prompt. Note that the API may not return the full n completions if duplicates are generated.
The maximum number of retries to make when generating.
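The numeric constraints above can be sketched as a small validator. This is an illustrative stand-in, not the class's actual validation code; the field names (`temperature`, `top_p`, `top_k`, `max_output_tokens`) follow the usual PaLM parameter naming and are assumptions here.

```python
from dataclasses import dataclass


@dataclass
class SamplingParams:
    """Hypothetical container mirroring the documented constraints."""

    temperature: float = 0.7
    top_p: float = 0.95
    top_k: int = 40
    max_output_tokens: int = 64  # defaults to 64 when unset

    def validate(self) -> None:
        # temperature and top_p must lie in the closed interval [0.0, 1.0]
        if not 0.0 <= self.temperature <= 1.0:
            raise ValueError("temperature must be in [0.0, 1.0]")
        if not 0.0 <= self.top_p <= 1.0:
            raise ValueError("top_p must be in [0.0, 1.0]")
        # top_k must be positive
        if self.top_k <= 0:
            raise ValueError("top_k must be positive")
        # max_output_tokens must be greater than zero
        if self.max_output_tokens <= 0:
            raise ValueError("max_output_tokens must be greater than zero")
```

For example, `SamplingParams(temperature=1.5).validate()` raises `ValueError`, while the defaults pass.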
Returns whether the model belongs to the Gemini family.
Get the namespace of the langchain object.
Validate that the API key is set and the required python package is installed.
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
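The context-window check this enables can be sketched as follows. The whitespace tokenizer and the 8192-token window are placeholders for illustration; a real model counts tokens with its own tokenizer and reports its own window size.

```python
def get_num_tokens(text: str) -> int:
    # Placeholder tokenizer: splits on whitespace. A real implementation
    # would use the model's tokenizer to count tokens.
    return len(text.split())


def fits_in_context(prompt: str, context_window: int = 8192,
                    reserved_for_output: int = 64) -> bool:
    # Leave room for the completion itself (the 64-token default above).
    return get_num_tokens(prompt) + reserved_for_output <= context_window
```

A short prompt fits easily; a prompt longer than the window minus the reserved output budget does not.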
DEPRECATED: Use langchain_google_genai.GoogleGenerativeAI instead.
Google PaLM models.