langchain.js

    Interface FakeStreamingChatModelFields

    Interface for the constructor fields specific to the fake streaming chat model (all optional because defaults are filled in).

    interface FakeStreamingChatModelFields {
        cache?: boolean | BaseCache<Generation[]>;
        callbackManager?: CallbackManager;
        callbacks?: Callbacks;
        chunks?: AIMessageChunk<MessageStructure>[];
        disableStreaming?: boolean;
        maxConcurrency?: number;
        maxRetries?: number;
        metadata?: Record<string, unknown>;
        onFailedAttempt?: FailedAttemptHandler;
        outputVersion?: "v1" | "v0";
        responses?: BaseMessage<MessageStructure, MessageType>[];
        sleep?: number;
        tags?: string[];
        thrownErrorString?: string;
        toolStyle?: "openai" | "anthropic" | "bedrock" | "google";
        verbose?: boolean;
    }
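As a rough sketch of how these fields might be populated, using simplified local stand-ins for the imported LangChain types (the real AIMessageChunk and BaseMessage classes carry more structure than the stubs below):

```typescript
// Simplified stand-ins for the LangChain types referenced above;
// the real AIMessageChunk / BaseMessage classes are richer.
type AIMessageChunkStub = { content: string };
type BaseMessageStub = { content: string };

interface FakeStreamingChatModelFieldsSketch {
  chunks?: AIMessageChunkStub[];
  responses?: BaseMessageStub[];
  sleep?: number;
  thrownErrorString?: string;
  disableStreaming?: boolean;
}

// Stream two exact chunks, pausing 10 ms between fallback chunks.
const fields: FakeStreamingChatModelFieldsSketch = {
  chunks: [{ content: "Hel" }, { content: "lo" }],
  sleep: 10,
};

console.log(fields.chunks?.map((c) => c.content).join(""));
```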


    Properties

    cache?: boolean | BaseCache<Generation[]>
    callbackManager?: CallbackManager

    Deprecated: use callbacks instead.

    callbacks?: Callbacks
    chunks?: AIMessageChunk<MessageStructure>[]

    Exact chunks to emit (can include tool-call deltas)

    disableStreaming?: boolean

    Whether to disable streaming.

    If streaming is bypassed, then stream() will defer to invoke().

    • If true, will always bypass streaming case.
    • If false (default), will always use streaming case if available.
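The bypass behavior can be sketched as follows. This is an assumption about the mechanism for illustration, not the library's actual implementation: when streaming is disabled, stream() yields the invoke() result as a single chunk.

```typescript
// Hedged sketch of the bypass described above: when disableStreaming
// is true, stream() defers to invoke() and emits one chunk.
async function* streamSketch(
  disableStreaming: boolean,
  invoke: () => Promise<string>,
  chunks: string[],
): AsyncGenerator<string> {
  if (disableStreaming) {
    // Bypass: defer to invoke() and emit its result as one chunk.
    yield await invoke();
    return;
  }
  // Streaming case: emit each chunk in turn.
  for (const chunk of chunks) yield chunk;
}

async function collect(gen: AsyncGenerator<string>): Promise<string[]> {
  const out: string[] = [];
  for await (const c of gen) out.push(c);
  return out;
}
```

With disableStreaming set to true, collecting the generator yields the single invoke() result; with false, it yields each chunk in order.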
    maxConcurrency?: number

    The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.

    maxRetries?: number

    The maximum number of retries that can be made for a single call, with an exponential backoff between each attempt. Defaults to 6.
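Exponential backoff doubles the wait after each failed attempt. As an illustration only (the 1000 ms base is an assumption; the library's actual timing may differ):

```typescript
// Hypothetical illustration of exponential backoff with the default
// maxRetries of 6. The 1000 ms base is an assumption for illustration.
function backoffDelaysMs(maxRetries: number, baseMs = 1000): number[] {
  return Array.from({ length: maxRetries }, (_, attempt) => baseMs * 2 ** attempt);
}

console.log(backoffDelaysMs(6)); // delays double between successive attempts
```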

    metadata?: Record<string, unknown>
    onFailedAttempt?: FailedAttemptHandler

    Custom handler to handle failed attempts. Takes the originally thrown error object as input, and should itself throw an error if the input error is not retryable.

    outputVersion?: "v1" | "v0"

    Version of AIMessage output format to store in message content.

    AIMessage.contentBlocks will lazily parse the contents of content into a standard format. This flag can be used to additionally store the standard format as the message content, e.g., for serialization purposes.

    • "v0": provider-specific format in content (can lazily parse with .contentBlocks)
    • "v1": standardized format in content (consistent with .contentBlocks)

    You can also set LC_OUTPUT_VERSION as an environment variable to "v1" to enable this by default.

    "v0"
    

    Full AI messages to fall back to when no chunks supplied

    sleep?: number

    Milliseconds to pause between fallback char-by-char chunks
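The char-by-char fallback can be sketched like this (an assumption about the mechanism, not the library's code): when no chunks are supplied, the response text is emitted one character at a time, pausing sleep milliseconds between characters.

```typescript
// Sketch of the fallback described above: emit text one character at a
// time, pausing `sleepMs` milliseconds between characters.
async function* charByChar(text: string, sleepMs: number): AsyncGenerator<string> {
  for (const ch of text) {
    yield ch;
    await new Promise<void>((resolve) => setTimeout(resolve, sleepMs));
  }
}
```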

    tags?: string[]
    thrownErrorString?: string

    Throw this error instead of streaming (useful in tests)
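A test might use this field to exercise error-handling paths. The class below is a local stand-in for illustration, not the real FakeStreamingChatModel:

```typescript
// Hedged sketch: a fake model configured to throw, as a unit test of
// error handling might use it (local stand-in, not the real class).
class FakeThrowingModel {
  constructor(private thrownErrorString?: string) {}

  async invoke(_input: string): Promise<string> {
    if (this.thrownErrorString) throw new Error(this.thrownErrorString);
    return "ok";
  }
}

const model = new FakeThrowingModel("Simulated provider outage");
model.invoke("hello").catch((err) => {
  console.log("caught:", (err as Error).message);
});
```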

    toolStyle?: "openai" | "anthropic" | "bedrock" | "google"

    How tool specs are formatted in bindTools

    verbose?: boolean