LangChain Reference
langchain_core.language_models.fake_chat_models.GenericFakeChatModel
Class · Since v0.1

    GenericFakeChatModel

Generic fake chat model that can be used to test the chat model interface.

• Usable in both sync and async tests.
• Invokes on_llm_new_token to allow for testing of callback-related code for new tokens.
• Includes logic to break messages into message chunks to facilitate testing of streaming.
    GenericFakeChatModel(
        *args: Any,
        **kwargs: Any,
    )

    Bases

    BaseChatModel

    Used in Docs

    • Test

    Attributes

    attribute
    messages: Iterator[AIMessage | str]

    Get an iterator over messages.

    This can be expanded to accept other types like Callables / dicts / strings to make the interface more generic if needed.

    Note

If you want to pass a list, use iter() to convert it to an iterator.
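For example (plain Python; itertools.cycle is one way to avoid exhausting the iterator across repeated calls):

```python
from itertools import cycle

replies = ["first reply", "second reply"]

messages = iter(replies)   # finite: exhausted after two calls
assert next(messages) == "first reply"

endless = cycle(replies)   # infinite: wraps around for repeated tests
assert next(endless) == "first reply"
assert next(endless) == "second reply"
assert next(endless) == "first reply"
```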

    Warning

    Streaming is not implemented yet. We should try to implement it in the future by delegating to invoke and then breaking the resulting output into message chunks.

    Inherited from BaseChatModel

    Attributes

    rate_limiter: BaseRateLimiter | None — An optional rate limiter to use for limiting the number of requests.

    disable_streaming: bool | Literal['tool_calling'] — Whether to disable streaming for this model.

    output_version: str | None — Version of AIMessage output format to store in message content.

    profile: ModelProfile | None — Profile detailing model capabilities.

    model_config

    OutputType: Any

    Methods

    invoke — Invoke the model on a single input.

    ainvoke — Asynchronously invoke the model on a single input.

    stream

    astream

    generate — Pass a sequence of prompts to a model and return generations.

    agenerate — Asynchronously pass a sequence of prompts to a model and return generations.

    generate_prompt

    agenerate_prompt

    dict — Return a dictionary representation of the model.

    bind_tools — Bind tools to the model.

    with_structured_output — Model wrapper that returns outputs formatted to match the given schema.

    Inherited from BaseLanguageModel

    Attributes

    cache: BaseCache | bool | None — Whether to cache the response.

    verbose: bool — Whether to print out response text.

    callbacks: Callbacks — Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM).

    tags: list[str] | None — Optional list of tags associated with the model.

    metadata: dict[str, Any] | None — Optional metadata associated with the model.

    custom_get_token_ids: Callable[[str], list[int]] | None — Optional encoder to use for counting tokens.

    model_config

    InputType: Any

    Methods

    set_verbose — If verbose is None, set it.

    generate_prompt

    agenerate_prompt

    with_structured_output — Model wrapper that returns outputs formatted to match the given schema.

    get_token_ids — Return the ordered IDs of the tokens in a text.

    get_num_tokens — Get the number of tokens present in the text.

    get_num_tokens_from_messages — Get the number of tokens in the messages.

    Inherited from RunnableSerializable

    Attributes

    name: str — The name of the Runnable.

    model_config

    Methods

    to_json — Serialize the Runnable to JSON.

    configurable_fields

    configurable_alternatives — Configure alternatives for Runnable objects that can be set at runtime.

    Inherited from Serializable

    Attributes

    lc_secrets: dict[str, str] — A map of constructor argument names to secret ids.

    lc_attributes: dict — List of attribute names that should be included in the serialized kwargs.

    model_config

    Methods

    is_lc_serializable — Return True as this class is serializable.

    get_lc_namespace — Get the namespace of the LangChain object.

    lc_id — Return a unique identifier for this class for serialization purposes.

    to_json — Serialize the Runnable to JSON.

    to_json_not_implemented — Serialize a "not implemented" object.

    Inherited from Runnable

    Attributes

    name: str — The name of the Runnable.

    InputType: Any

    OutputType: Any

    input_schema: type[BaseModel] — The type of input this Runnable accepts specified as a Pydantic model.

    output_schema: type[BaseModel] — The type of output this Runnable produces specified as a Pydantic model.

    config_specs: list[ConfigurableFieldSpec]

    Methods

    get_name

    get_input_schema

    get_input_jsonschema — Get a JSON schema that represents the input to the Runnable.

    get_output_schema

    get_output_jsonschema — Get a JSON schema that represents the output of the Runnable.

    config_schema — The type of config this Runnable accepts specified as a Pydantic model.

    get_config_jsonschema — Get a JSON schema that represents the config of the Runnable.

    get_graph

    get_prompts — Return a list of prompts used by this Runnable.

    pipe — Pipe Runnable objects.

    pick — Pick keys from the output dict of this Runnable.

    assign — Merge the Dict input with the output produced by the mapping argument.

    invoke — Invoke the Runnable on a single input.

    ainvoke — Asynchronously invoke the Runnable on a single input.

    batch

    batch_as_completed — Run invoke in parallel on a list of inputs.

    abatch

    abatch_as_completed — Run ainvoke in parallel on a list of inputs.

    stream

    astream

    astream_log — Stream all output from a Runnable, as reported to the callback system.

    astream_events — Generate a stream of events.

    transform

    atransform

    bind — Bind arguments to a Runnable, returning a new Runnable.

    with_config

    with_listeners — Bind lifecycle listeners to a Runnable, returning a new Runnable.

    with_alisteners — Bind async lifecycle listeners to a Runnable.

    with_types — Bind input and output types to a Runnable, returning a new Runnable.

    with_retry — Create a new Runnable that retries the original Runnable on exceptions.

    map — Return a new Runnable that maps a list of inputs to a list of outputs.

    with_fallbacks — Add fallbacks to a Runnable, returning a new Runnable.

    as_tool — Create a BaseTool from a Runnable.
