LangChain Reference
Moduleā—Since v1.0

language_models/base

import { ... } from "@langchain/core/language_models/base";

Functions

function
calculateMaxTokens → Promise<number>

Calculate the number of tokens left for a completion: the model's context window size minus the token count of the prompt.

function
getEmbeddingContextSize → number

function
getModelContextSize → number

Get the context window size (max input tokens) for a given model.

Context window sizes are sourced from official model documentation:

  • OpenAI: https://platform.openai.com/docs/models
  • Anthropic: https://docs.anthropic.com/claude/docs/models-overview
  • Google: https://ai.google.dev/gemini/docs/models/gemini
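As an illustration of how such a lookup can work (this is a sketch, not the library's actual table or API — `lookupContextSize` is a hypothetical name, and the sizes below are examples; always consult the vendor docs linked above for authoritative numbers):

```typescript
// Hypothetical context-window lookup in the spirit of getModelContextSize.
// Prefixes and token counts are illustrative examples only.
const CONTEXT_SIZES: Record<string, number> = {
  "gpt-4": 8192,
  "gpt-3.5-turbo": 16385,
  "claude-3": 200000,
};

function lookupContextSize(modelName: string): number {
  // Match the longest known prefix, falling back to a conservative default.
  const match = Object.keys(CONTEXT_SIZES)
    .filter((prefix) => modelName.startsWith(prefix))
    .sort((a, b) => b.length - a.length)[0];
  return match ? CONTEXT_SIZES[match] : 4096;
}

console.log(lookupContextSize("gpt-4-0613"));    // longest matching prefix wins
console.log(lookupContextSize("unknown-model")); // falls back to the default
```

Prefix matching (rather than exact names) is a common design choice here because providers ship many dated variants of each base model.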
function
getModelNameForTiktoken → TiktokenModel
function
isOpenAITool → tool is ToolDefinition

Whether or not the input matches the OpenAI tool definition.
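A structural check of this kind can be sketched without the library. The guard below assumes the OpenAI tool shape `{ type: "function", function: { name, ... } }`; the `ToolDefinition` mirror and the guard name `looksLikeOpenAITool` are local stand-ins, not the library's implementation:

```typescript
// Minimal local mirror of the OpenAI tool definition shape; the real
// ToolDefinition is exported from @langchain/core/language_models/base.
interface ToolDefinition {
  type: "function";
  function: {
    name: string;
    description?: string;
    parameters?: Record<string, unknown>;
  };
}

// Sketch of a type guard in the spirit of isOpenAITool: narrow `unknown`
// to ToolDefinition by checking the structure at runtime.
function looksLikeOpenAITool(tool: unknown): tool is ToolDefinition {
  if (typeof tool !== "object" || tool === null) return false;
  const t = tool as { type?: unknown; function?: unknown };
  if (t.type !== "function") return false;
  if (typeof t.function !== "object" || t.function === null) return false;
  return typeof (t.function as { name?: unknown }).name === "string";
}

console.log(looksLikeOpenAITool({ type: "function", function: { name: "search" } })); // true
console.log(looksLikeOpenAITool({ name: "search" })); // false
```

The `tool is ToolDefinition` return type lets TypeScript narrow the input inside an `if` branch, which is why the real function is typed this way.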

Classes

class
BaseLangChain

Base class for language models, chains, and tools.

class
BaseLanguageModel

Base class for language models.

Interfaces

interface
BaseFunctionCallOptions
interface
BaseLangChainParams
interface
BaseLanguageModelCallOptions
interface
BaseLanguageModelInterface

Base interface implemented by all runnables. Used for cross-compatibility between different versions of LangChain core.

Should not change on patch releases.

interface
BaseLanguageModelParams

Base interface for language model parameters. A subclass of BaseLanguageModel should have a constructor that takes in a parameter that extends this interface.
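The constructor pattern described above can be sketched with local stand-in types (only `verbose` below corresponds to a real base param; `MyModel` and `temperature` are illustrative):

```typescript
// Stand-in for the base params interface; the real one also carries
// callbacks, tags, metadata, etc.
interface FakeBaseParams {
  verbose?: boolean;
}

// The subclass's params extend the base interface, and its constructor
// takes a single object of that extended type, as the docs describe.
interface MyModelParams extends FakeBaseParams {
  temperature?: number;
}

class MyModel {
  verbose: boolean;
  temperature: number;

  constructor(params: MyModelParams = {}) {
    this.verbose = params.verbose ?? false;
    this.temperature = params.temperature ?? 0.7;
  }
}

const m = new MyModel({ temperature: 0 });
console.log(m.temperature, m.verbose); // 0 false
```

A single extensible params object keeps subclass constructors source-compatible as new base options are added.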

interface
BaseLanguageModelTracingCallOptions
interface
FunctionDefinition
interface
TokenUsage

Shared interface for token usage return type from LLM calls.

interface
ToolDefinition

Type Aliases

typeAlias
BaseLanguageModelInput: BasePromptValueInterface | string | BaseMessageLike[]
typeAlias
FunctionCallOption
typeAlias
LanguageModelLike: Runnable<BaseLanguageModelInput, LanguageModelOutput>
typeAlias
LanguageModelOutput: BaseMessage | string
typeAlias
SerializedLLM: __type & Record<string, any>
typeAlias
StructuredOutputMethodOptions
typeAlias
StructuredOutputType: InferInteropZodOutput<InteropZodObject>
deprecated typeAlias
StructuredOutputMethodParams
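To illustrate how a union like BaseLanguageModelInput is typically consumed, here is a hedged sketch using local stand-in types (the real aliases come from @langchain/core, and `coerceInput` is a hypothetical helper, not a library export):

```typescript
// Local stand-ins: the real union also accepts BasePromptValueInterface,
// and messages are BaseMessageLike rather than this simplified shape.
type Message = { role: string; content: string };
type Input = string | Message[];

// Hypothetical normalizer: models typically coerce the input union into
// a uniform message list before invoking the underlying provider.
function coerceInput(input: Input): Message[] {
  if (typeof input === "string") {
    return [{ role: "human", content: input }];
  }
  return input;
}

console.log(coerceInput("hello"));
console.log(coerceInput([{ role: "system", content: "be brief" }]));
```

Accepting the wide union at the boundary and normalizing once internally is what lets every runnable in the chain take plain strings as well as structured messages.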
View source on GitHub