Get the number of tokens in the text, using the model's tokenizer. Useful for checking whether an input will fit in the model's context window.
Example:

```python
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-3.1-pro-preview")
num_tokens = llm.get_num_tokens("Hello, world!")
print(num_tokens)
# -> 4
```

| Name | Type | Description |
|---|---|---|
| `text`* | str | The string input to tokenize. |
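The context-window check mentioned above can be sketched as follows. This is a minimal illustration, not part of the library: the `fits_in_context` helper, the `reserve` parameter, and the context-window value are all hypothetical, and in practice the token count would come from `llm.get_num_tokens(text)` rather than the rough heuristic used here.

```python
def fits_in_context(num_tokens: int, context_window: int, reserve: int = 0) -> bool:
    """Return True if the prompt's tokens plus `reserve` output tokens fit the window."""
    return num_tokens + reserve <= context_window


# Stand-in for llm.get_num_tokens(text): a crude ~4-characters-per-token estimate.
text = "Hello, world! " * 100
approx_tokens = len(text) // 4

# Hypothetical 1M-token context window; reserve room for the model's reply.
print(fits_in_context(approx_tokens, context_window=1_000_000, reserve=8_192))
```

Reserving output tokens matters because the context window bounds the prompt and the generated completion together, not the prompt alone.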