ChatOCIGenAI parameters:

auth_type
    Authentication type; can be API_KEY (default), SECURITY_TOKEN, INSTANCE_PRINCIPAL, or RESOURCE_PRINCIPAL.
auth_profile
    The name of the profile in ~/.oci/config.
auth_file_location
    Path to the config file.
provider
    Provider name of the model; derived from the model_id when possible.
model_kwargs
    Holds any model parameters valid for the create call not explicitly specified.
service_endpoint
    The service endpoint URL.
compartment_id
    OCID of the compartment.
is_stream
    Whether to stream back partial progress.
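
The auth_profile and auth_file_location settings point at a standard OCI CLI configuration file. As a sketch, a minimal ~/.oci/config profile for API_KEY authentication might look like the following (every value here is a placeholder, not a working credential):

.. code-block:: ini

    [DEFAULT]
    user=ocid1.user.oc1..exampleuniqueID
    fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
    key_file=~/.oci/oci_api_key.pem
    tenancy=ocid1.tenancy.oc1..exampleuniqueID
    region=us-chicago-1

A non-default profile name passed as auth_profile would appear as its own ``[section]`` in the same file.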
ChatOCIGenAI chat model integration.
Setup:
Install langchain-community and the oci sdk.
.. code-block:: bash

    pip install -U langchain-community oci
Key init args — completion params:
    model_id: str
        Id of the OCIGenAI chat model to use, e.g., cohere.command-r-16k.
    is_stream: bool
        Whether to stream back partial progress.
    model_kwargs: Optional[Dict]
        Keyword arguments to pass to the specific model used, e.g., temperature, max_tokens.
Key init args — client params:
    service_endpoint: str
        The endpoint URL for the OCIGenAI service, e.g., https://inference.generativeai.us-chicago-1.oci.oraclecloud.com.
    compartment_id: str
        The compartment OCID.
    auth_type: str
        The authentication type to use, e.g., API_KEY (default), SECURITY_TOKEN, INSTANCE_PRINCIPAL, RESOURCE_PRINCIPAL.
    auth_profile: Optional[str]
        The name of the profile in ~/.oci/config. If not specified, DEFAULT is used.
    auth_file_location: Optional[str]
        Path to the config file. If not specified, ~/.oci/config is used.
    provider: str
        Provider name of the model. Defaults to None; derived from the model_id when possible, otherwise requires user input.

See full list of supported init args and their descriptions in the params section.
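
As an illustration of the client params above, this sketch collects them into a dict before instantiation; the profile name my_profile is a hypothetical placeholder, and the commented-out constructor call needs real OCI credentials to run:

.. code-block:: python

    # Example values only; substitute your own endpoint, compartment OCID,
    # and config profile.
    client_params = {
        "model_id": "cohere.command-r-16k",
        "service_endpoint": "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
        "compartment_id": "MY_OCID",
        "auth_type": "SECURITY_TOKEN",
        "auth_profile": "my_profile",
    }
    # chat = ChatOCIGenAI(**client_params)  # requires valid OCI credentials
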
Instantiate:
.. code-block:: python

    from langchain_community.chat_models import ChatOCIGenAI

    chat = ChatOCIGenAI(
        model_id="cohere.command-r-16k",
        service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
        compartment_id="MY_OCID",
        model_kwargs={"temperature": 0.7, "max_tokens": 500},
    )
Invoke:
.. code-block:: python

    from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

    messages = [
        SystemMessage(content="You are an AI assistant."),
        AIMessage(content="Hi there human!"),
        HumanMessage(content="Tell me a joke."),
    ]
    response = chat.invoke(messages)
Stream:
.. code-block:: python

    for r in chat.stream(messages):
        print(r.content, end="", flush=True)
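
Each streamed item carries a fragment of the reply in its content attribute. The self-contained sketch below fakes those items (the Chunk class is a stand-in for illustration, not part of the library) to show how a full response can be accumulated from a stream:

.. code-block:: python

    from dataclasses import dataclass

    @dataclass
    class Chunk:
        # Stand-in for the message chunks yielded by chat.stream().
        content: str

    def collect(chunks):
        """Concatenate streamed chunk contents into one string."""
        return "".join(chunk.content for chunk in chunks)

    parts = [Chunk("Why did the "), Chunk("chicken cross "), Chunk("the road?")]
    full_reply = collect(parts)  # "Why did the chicken cross the road?"
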
Response metadata:

.. code-block:: python

    response = chat.invoke(messages)
    print(response.response_metadata)
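
response_metadata is a plain dict, but its exact keys depend on the model provider and SDK version; the keys in this sketch are illustrative assumptions, not a guaranteed schema:

.. code-block:: python

    # Illustrative metadata shape; real keys vary by provider.
    response_metadata = {
        "model_id": "cohere.command-r-16k",
        "finish_reason": "COMPLETE",
    }

    # Prefer .get() so a missing key does not raise.
    finish = response_metadata.get("finish_reason", "unknown")
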