Chat model that uses the LiteLLM API.
LiteLLM Router exposed as a LangChain chat model.
Load documents using LiteLLM proxy's OCR endpoint.
This loader makes HTTP requests to a LiteLLM proxy server configured with Azure Document Intelligence (or another OCR provider). The proxy handles all provider-specific authentication and configuration.
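The request flow can be sketched as a plain HTTP call; this is a hypothetical illustration, not the loader's real implementation: the `/ocr` endpoint path, the multipart upload, and the `"text"` response field are all assumptions:

```python
import requests


def load_ocr_document(proxy_url: str, api_key: str, file_path: str) -> str:
    """Hypothetical sketch: POST a document to a LiteLLM proxy OCR endpoint.

    The endpoint path and response shape are assumptions for illustration;
    the proxy itself holds the Azure Document Intelligence credentials.
    """
    with open(file_path, "rb") as fh:
        resp = requests.post(
            f"{proxy_url}/ocr",  # assumed endpoint path
            headers={"Authorization": f"Bearer {api_key}"},
            files={"file": fh},
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response field
```

The key point is that the client only authenticates to the proxy; provider credentials never leave the server.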
LiteLLM embedding model.
Uses litellm.embedding() to support 100+ providers through a unified
interface. All provider configuration (api_key, api_base, etc.) can be
passed explicitly; no environment variables are required.
LiteLLM Router-backed embedding model.
Wraps a litellm.Router instance to provide load-balanced embedding
calls across multiple deployments of the same model.