langchain-nomic

Nomic partner integration for LangChain.

Modules:

    embeddings: Nomic partner integration for LangChain.

Classes:

    NomicEmbeddings: NomicEmbeddings embedding model.

NomicEmbeddings

Bases: Embeddings

NomicEmbeddings embedding model.

Example:

.. code-block:: python

    from langchain_nomic import NomicEmbeddings

    model = NomicEmbeddings()

Methods:

    aembed_documents: Asynchronously embed search docs.
    aembed_query: Asynchronously embed query text.
    __init__: Initialize the NomicEmbeddings model.
    embed: Embed texts.
    embed_documents: Embed search docs.
    embed_query: Embed query text.
    embed_image: Embed images.

aembed_documents (async)

aembed_documents(texts: list[str]) -> list[list[float]]

Asynchronously embed search docs.

Parameters:

    texts (list[str], required): List of texts to embed.

Returns:

    list[list[float]]: List of embeddings.
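
A minimal async usage sketch; the model name 'nomic-embed-text-v1.5' is illustrative, and NOMIC_API_KEY is assumed to be set in the environment:

.. code-block:: python

    import asyncio

    from langchain_nomic import NomicEmbeddings


    async def main() -> None:
        embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5")
        # Await the async variant, e.g. when embedding inside an async pipeline.
        vectors = await embeddings.aembed_documents(
            ["Nomic builds embedding models.", "LangChain wires them into pipelines."]
        )
        print(len(vectors), len(vectors[0]))  # number of docs, embedding dimension


    asyncio.run(main())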

aembed_query (async)

aembed_query(text: str) -> list[float]

Asynchronously embed query text.

Parameters:

    text (str, required): Text to embed.

Returns:

    list[float]: Embedding.
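
A similar async sketch for a single query, under the same assumptions (illustrative model name, NOMIC_API_KEY set):

.. code-block:: python

    import asyncio

    from langchain_nomic import NomicEmbeddings


    async def main() -> None:
        embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5")
        vector = await embeddings.aembed_query("Who builds embedding models?")
        print(len(vector))  # embedding dimension


    asyncio.run(main())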

__init__

__init__(
    *,
    model: str,
    nomic_api_key: Optional[str] = None,
    dimensionality: Optional[int] = None,
    inference_mode: str = "remote",
    device: Optional[str] = None,
    vision_model: Optional[str] = None
)

Initialize NomicEmbeddings model.

Parameters:

    model (str, required): Model name.
    nomic_api_key (Optional[str], default None): Optionally set the Nomic API key. Uses the NOMIC_API_KEY environment variable by default.
    dimensionality (Optional[int], default None): The embedding dimension, for use with Matryoshka-capable models. Defaults to full-size.
    inference_mode (str, default 'remote'): How to generate embeddings. One of 'remote', 'local' (Embed4All), or 'dynamic' (automatic). Defaults to 'remote'.
    device (Optional[str], default None): The device to use for local embeddings. Choices include 'cpu', 'gpu', 'nvidia', 'amd', or a specific device name. See the docstring for GPT4All.__init__ for more info. Typically defaults to 'cpu'. Do not use on macOS.
    vision_model (Optional[str], default None): The vision model to use for image embeddings.
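
A hedged initialization sketch; the model name, the 256-dimension truncation, and the local-mode settings below are illustrative values, not requirements:

.. code-block:: python

    from langchain_nomic import NomicEmbeddings

    # Remote inference against the Nomic API; reads NOMIC_API_KEY from the environment.
    remote = NomicEmbeddings(model="nomic-embed-text-v1.5")

    # Truncated Matryoshka embeddings (256 dims), assuming the model supports it.
    small = NomicEmbeddings(model="nomic-embed-text-v1.5", dimensionality=256)

    # Local inference via Embed4All; per the device note above, omit device on macOS.
    local = NomicEmbeddings(
        model="nomic-embed-text-v1.5",
        inference_mode="local",
        device="cpu",
    )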

embed

embed(
    texts: list[str], *, task_type: str
) -> list[list[float]]

Embed texts.

Parameters:

    texts (list[str], required): List of texts to embed.
    task_type (str, required): The task type to use when embedding. One of 'search_query', 'search_document', 'classification', or 'clustering'.
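
A sketch of the lower-level embed call, which takes an explicit task type; the model name and texts are illustrative, and NOMIC_API_KEY is assumed to be set:

.. code-block:: python

    from langchain_nomic import NomicEmbeddings

    embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5")

    # task_type is keyword-only; the other options are 'search_query',
    # 'search_document', and 'classification'.
    vectors = embeddings.embed(
        ["alpha release notes", "beta release notes", "quarterly report"],
        task_type="clustering",
    )
    print(len(vectors), len(vectors[0]))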

embed_documents

embed_documents(texts: list[str]) -> list[list[float]]

Embed search docs.

Parameters:

    texts (list[str], required): List of texts to embed as documents.
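
A short sketch of embedding documents for indexing, under the same assumptions as above (illustrative model name, NOMIC_API_KEY set):

.. code-block:: python

    from langchain_nomic import NomicEmbeddings

    embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5")
    doc_vectors = embeddings.embed_documents(
        ["Nomic builds embedding models.", "LangChain wires them into pipelines."]
    )
    print(len(doc_vectors))  # one embedding per input document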

embed_query

embed_query(text: str) -> list[float]

Embed query text.

Parameters:

    text (str, required): Query text.
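
A sketch that embeds a query and scores it against document embeddings; the cosine helper is illustrative and not part of this package:

.. code-block:: python

    import math

    from langchain_nomic import NomicEmbeddings


    def cosine(a: list[float], b: list[float]) -> float:
        # Plain cosine similarity; illustrative only, not part of langchain_nomic.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)


    embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5")
    query_vector = embeddings.embed_query("Who builds embedding models?")
    doc_vectors = embeddings.embed_documents(
        ["Nomic builds embedding models.", "LangChain wires them into pipelines."]
    )
    print([cosine(query_vector, d) for d in doc_vectors])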

embed_image

embed_image(uris: list[str]) -> list[list[float]]

Embed images.

Parameters:

    uris (list[str], required): List of image URIs to embed.
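
A sketch of embedding images by URI; the vision model name 'nomic-embed-vision-v1.5' and the image paths are assumptions for illustration:

.. code-block:: python

    from langchain_nomic import NomicEmbeddings

    embeddings = NomicEmbeddings(
        model="nomic-embed-text-v1.5",
        vision_model="nomic-embed-vision-v1.5",  # assumed vision model name
    )
    # Hypothetical local image paths for illustration.
    image_vectors = embeddings.embed_image(["./images/cat.png", "./images/dog.png"])
    print(len(image_vectors), len(image_vectors[0]))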