Model name to use.
A non-negative float that tunes the degree of randomness in generation.
Cohere API key. If not provided, it will be read from the COHERE_API_KEY environment variable.
Whether to stream the results.
Identifier for the application making the request.
Timeout in seconds for the Cohere API request.
Override the default Cohere API URL.
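Configuration Example (a minimal sketch of how these options might be combined; aside from cohere_api_key, the field names and model name shown are assumptions and may not match the actual fields in your installed version):
from langchain_cohere import ChatCohere

llm = ChatCohere(
    model="command-r",          # model name to use (assumed field name)
    temperature=0.3,            # non-negative float that tunes randomness
    cohere_api_key="{API KEY}", # or set the COHERE_API_KEY environment variable
    streaming=True,             # whether to stream the results (assumed field name)
)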
Implements the BaseChatModel (and BaseLanguageModel) interface with Cohere's
large language models.
Find out more about us at https://cohere.com and https://huggingface.co/CohereForAI
This implementation uses the Chat API - see https://docs.cohere.com/reference/chat
To use this you'll need a Cohere API key - either pass it to the cohere_api_key
parameter or set the COHERE_API_KEY environment variable.
API keys are available at https://cohere.com - it's free to sign up, and trial API keys work with this implementation.
Basic Example:
from langchain_cohere import ChatCohere
from langchain_core.messages import HumanMessage
llm = ChatCohere(cohere_api_key="{API KEY}")
messages = [HumanMessage(content="Hello, can you introduce yourself?")]
print(llm.invoke(messages).content)
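Streaming Example (a sketch using llm.stream, which is part of the standard LangChain chat model interface rather than specific to Cohere):
from langchain_cohere import ChatCohere
from langchain_core.messages import HumanMessage

llm = ChatCohere(cohere_api_key="{API KEY}")
messages = [HumanMessage(content="Hello, can you introduce yourself?")]
# Iterate over the response as it is generated, printing each chunk's content.
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)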