ChatGLM3()

An LLM component for the ChatGLM3 LLM service.

Parameters:
- Endpoint URL to use.
- Keyword arguments to pass to the model.
- Maximum number of tokens allowed in a request to the model.
- LLM model temperature, from 0 to 10.
- Top-p value for nucleus sampling, from 0 to 1.
- Series of messages for the chat input.
- Whether or not to stream the results.
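
These settings mirror the fields of the ChatGLM3 chat model in langchain_community, which this component appears to wrap. The sketch below is illustrative only: the field names (endpoint_url, model_kwargs, max_tokens, temperature, top_p, prefix_messages, streaming), the example values, and the local default endpoint are assumptions taken from that class and may differ between versions.

```python
# Illustrative sketch, assuming langchain_community's ChatGLM3 class backs
# this component; field names and defaults are assumptions, not from this page.
from langchain_community.chat_models import ChatGLM3
from langchain_core.messages import HumanMessage

llm = ChatGLM3(
    endpoint_url="http://127.0.0.1:8000/v1/chat/completions",  # endpoint URL to use
    model_kwargs={},     # keyword arguments passed through to the model
    max_tokens=2048,     # maximum tokens allowed in the request
    temperature=0.6,     # model temperature (0 to 10)
    top_p=0.8,           # nucleus sampling threshold (0 to 1)
    prefix_messages=[],  # series of messages for the chat input
    streaming=False,     # whether to stream the results
)

# Send a single chat message and print the model's reply.
print(llm.invoke([HumanMessage(content="Hello!")]).content)
```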