ChatGLM()

Bases: ``LLM``

ChatGLM LLM service.

Attributes:

- ``endpoint_url``: Endpoint URL to use.
- ``model_kwargs``: Keyword arguments to pass to the model.
- ``max_token``: Maximum number of tokens to pass to the model.
- ``temperature``: LLM model temperature, from 0 to 10.
- ``history``: History of the conversation.
- ``top_p``: Top p for nucleus sampling, from 0 to 1.
- ``with_history``: Whether to use history or not.
Example:

.. code-block:: python

    from langchain_community.llms import ChatGLM

    endpoint_url = "http://127.0.0.1:8000"
    ChatGLM_llm = ChatGLM(endpoint_url=endpoint_url)
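The attributes above are ultimately sent to a ChatGLM HTTP server as a JSON body. As a rough sketch of what that looks like, without requiring the ``langchain_community`` dependency or a running server (the field names ``prompt``, ``history``, ``max_length``, ``top_p``, and ``temperature``, and the default values, are assumptions about the server API, not taken from this page):

.. code-block:: python

    # Sketch of the request payload a ChatGLM-style endpoint consumes.
    # NOTE: the JSON field names and default values here are assumptions
    # about the server API, not confirmed by this documentation page.
    from typing import Any, Dict, List, Tuple


    def build_chatglm_payload(
        prompt: str,
        history: List[Tuple[str, str]],
        max_token: int = 20000,
        temperature: float = 0.1,
        top_p: float = 0.7,
    ) -> Dict[str, Any]:
        """Assemble the JSON body for a POST to the endpoint_url."""
        return {
            "prompt": prompt,
            "history": history,
            "max_length": max_token,
            "top_p": top_p,
            "temperature": temperature,
        }


    payload = build_chatglm_payload("Hello", history=[])
    # A real call would then POST this payload, e.g.:
    #   requests.post(endpoint_url, json=payload, timeout=60)

When ``with_history`` is enabled, the accumulated ``history`` list is included in each request so the server can condition on prior turns; otherwise an empty history is sent each time.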