langchain.js
    import { LlamaCppEmbeddings } from "@langchain/community/embeddings/llama_cpp";

    // Initialize LlamaCppEmbeddings with the path to the model file
    const embeddings = await LlamaCppEmbeddings.initialize({
      modelPath: llamaPath,
    });

    // Embed a query string using the Llama embeddings (embedQuery returns a Promise)
    const res = await embeddings.embedQuery("Hello Llama!");

    // Output the resulting embedding vector
    console.log(res);

    Properties

    _context: LlamaContext
    _model: LlamaModel

    Methods

    • embedDocuments: Generates embeddings for an array of texts.

      Parameters

      • texts: string[]

        An array of strings to generate embeddings for.

      Returns Promise<number[][]>

      A Promise that resolves to an array of embeddings.
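      The `Promise<number[][]>` contract can be illustrated without loading a model. The sketch below uses a hypothetical stand-in object with the same `embedDocuments` signature; it is not the real llama.cpp-backed implementation, only a way to show the returned shapes.

      ```typescript
      // Hypothetical stand-in with the same embedDocuments signature as the
      // class above; the fixed-size vectors stand in for real model output.
      const fakeEmbeddings = {
        async embedDocuments(texts: string[]): Promise<number[][]> {
          // One vector per input text (real vectors come from the model).
          return texts.map((t) => [t.length, 0, 0]);
        },
      };

      const docs = ["Hello Llama!", "Bye"];
      const vectors = await fakeEmbeddings.embedDocuments(docs);
      console.log(vectors.length); // one embedding (number[]) per input text
      ```

      The same pattern applies to the real class: each input string maps positionally to one embedding vector in the returned array.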

    • embedQuery: Generates an embedding for a single text.

      Parameters

      • text: string

        A string to generate an embedding for.

      Returns Promise<number[]>

      A Promise that resolves to an array of numbers representing the embedding.
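      Embedding vectors returned by embedQuery are commonly compared with cosine similarity. A minimal, self-contained helper (generic math, not part of this class):

      ```typescript
      // Cosine similarity between two embedding vectors of equal length.
      function cosineSimilarity(a: number[], b: number[]): number {
        let dot = 0;
        let normA = 0;
        let normB = 0;
        for (let i = 0; i < a.length; i++) {
          dot += a[i] * b[i];
          normA += a[i] * a[i];
          normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
      }

      // Identical directions score 1; orthogonal vectors score 0.
      console.log(cosineSimilarity([1, 0], [1, 0])); // 1
      console.log(cosineSimilarity([1, 0], [0, 1])); // 0
      ```

      In practice you would call this on two vectors obtained from embedQuery (or rows of the embedDocuments result) to rank texts by semantic closeness.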

    • initialize (static): Initializes the llama_cpp model for use in the embeddings wrapper.

      Parameters

      • inputs: LlamaBaseCppInputs

        The inputs passed to the model.

      Returns Promise<LlamaCppEmbeddings>

      A Promise that resolves to an initialized LlamaCppEmbeddings instance.