The maximum number of tokens to generate in the completion. A value of -1 returns as many tokens as possible given the prompt and the model's maximum context size.
max_tokens: int = 256
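The -1 semantics above can be sketched as a small helper. This is a minimal illustration, not any specific library's API; the function and parameter names (`resolve_max_tokens`, `prompt_tokens`, `context_size`) are hypothetical.

```python
def resolve_max_tokens(max_tokens: int, prompt_tokens: int, context_size: int) -> int:
    """Return the effective completion budget in tokens.

    -1 expands to the remaining room in the model's context window,
    i.e. context size minus the tokens already used by the prompt.
    All names here are illustrative, not tied to a specific API.
    """
    if max_tokens == -1:
        # Never return a negative budget if the prompt already fills the context.
        return max(context_size - prompt_tokens, 0)
    return max_tokens


print(resolve_max_tokens(256, 100, 4096))  # explicit limit: 256
print(resolve_max_tokens(-1, 100, 4096))   # remaining context: 3996
```

With an explicit value the limit is used as-is; with -1 the budget shrinks as the prompt grows, so long prompts leave less room for the completion.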