This package contains functionality from LangChain v0.x that has been moved out of the main langchain package as part of the v1.0 release. It exists to provide backward compatibility for existing applications while the core langchain package focuses on the essential building blocks for modern agent development.
Use @langchain/classic if you:

- Rely on legacy chain implementations (e.g., LLMChain, ConversationalRetrievalQAChain, RetrievalQAChain)
- Depend on functionality from @langchain/community that was previously re-exported from langchain
- Are not yet ready to migrate to the new createAgent API

For new projects, use langchain v1.0 instead. The new APIs provide:

- createAgent: A cleaner, more powerful way to build agents with middleware support

See the LangChain v1.0 release notes for more information.
```bash
npm install @langchain/classic
```
This package requires @langchain/core as a peer dependency:

```bash
npm install @langchain/core
```
All chain implementations from v0.x, including:
- LLMChain - Basic chain for calling an LLM with a prompt template
- ConversationalRetrievalQAChain - Chain for conversational question-answering over documents
- RetrievalQAChain - Chain for question-answering over documents without conversation memory
- StuffDocumentsChain - Chain for stuffing documents into a prompt
- MapReduceDocumentsChain - Chain for map-reduce operations over documents
- RefineDocumentsChain - Chain for iterative refinement over documents

It also includes the RecordManager and related indexing functionality for managing document updates in vector stores.
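The idea behind record-manager-based indexing can be illustrated with a small self-contained sketch (the class and function names here are hypothetical, not this package's actual API): each document is keyed by a content hash, so re-indexing only writes documents that are new or changed.

```typescript
import { createHash } from "node:crypto";

// Hypothetical in-memory record manager: remembers which document
// hashes have already been indexed, so re-runs skip unchanged docs.
class SimpleRecordManager {
  private seen = new Set<string>();

  // Returns true if the document is new or changed and should be upserted.
  shouldUpsert(doc: string): boolean {
    const key = createHash("sha256").update(doc).digest("hex");
    if (this.seen.has(key)) return false;
    this.seen.add(key);
    return true;
  }
}

// Keep only the documents the record manager hasn't seen before.
function indexDocuments(manager: SimpleRecordManager, docs: string[]): string[] {
  return docs.filter((doc) => manager.shouldUpsert(doc));
}
```

On a first pass every document is indexed; on a second pass only documents whose content changed are written, which is the update-management behavior the RecordManager provides for real vector stores.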
Re-exports from @langchain/community that were previously available in the main langchain package.
Various utilities and abstractions that have been replaced by better alternatives in v1.0.
### From langchain v0.x to @langchain/classic

If you're upgrading to langchain v1.0 but want to keep using legacy functionality:

First, install @langchain/classic:

```bash
npm install @langchain/classic
```

Then update your imports:

```typescript
// Before (v0.x)
import { LLMChain } from "langchain/chains";
import { ConversationalRetrievalQAChain } from "langchain/chains";

// After (v1.0)
import { LLMChain } from "@langchain/classic/chains";
import { ConversationalRetrievalQAChain } from "@langchain/classic/chains";
```

Or if you imported from the root:

```typescript
// Before (v0.x)
import { LLMChain } from "langchain";

// After (v1.0)
import { LLMChain } from "@langchain/classic";
```
### From @langchain/classic to langchain v1.0

For new development, we recommend using createAgent instead of legacy chains.

Example migration from LLMChain:
```typescript
// Before (using LLMChain)
import { LLMChain } from "@langchain/classic/chains";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({ model: "gpt-4" });
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chain = new LLMChain({ llm: model, prompt });
const result = await chain.call({ product: "colorful socks" });
```

```typescript
// After (using createAgent)
import { createAgent } from "langchain";

const agent = createAgent({
  model: "openai:gpt-4",
  systemPrompt: "You are a creative assistant that helps name companies.",
});
const result = await agent.invoke({
  messages: [
    {
      role: "user",
      content: "What is a good name for a company that makes colorful socks?",
    },
  ],
});
```
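One practical difference when migrating: LLMChain's call returned an object with a text field, while the agent's invoke returns state containing a messages array. A minimal helper for pulling the final reply out of that state (the message shape used here is an assumption of this sketch, not a documented type):

```typescript
// Assumed minimal shape of a message in the agent's returned state.
interface AgentMessage {
  role: string;
  content: string;
}

// Return the content of the last message in the agent result,
// which is typically the model's final reply.
function lastMessageContent(result: { messages: AgentMessage[] }): string {
  const last = result.messages[result.messages.length - 1];
  return last ? last.content : "";
}
```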
For more complex migrations, see the migration guide.
@langchain/classic will receive compatibility updates for langchain v1.0 APIs. This package is in maintenance mode. For new features and active development, use langchain v1.0.
Basic LLMChain usage:

```typescript
import { LLMChain } from "@langchain/classic/chains";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({ model: "gpt-4" });
const prompt = PromptTemplate.fromTemplate(
  "Tell me a {adjective} joke about {content}."
);
const chain = new LLMChain({ llm: model, prompt });
const result = await chain.call({
  adjective: "funny",
  content: "chickens",
});
console.log(result.text);
```
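The {adjective} and {content} placeholders above are filled from the input object at call time. Conceptually, the formatting step does something like the following (a simplified stand-in for illustration, not the actual PromptTemplate implementation):

```typescript
// Simplified stand-in for prompt-template formatting: replace each
// {variable} placeholder with the matching value from the input map.
// Unknown placeholders are left untouched.
function formatTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match: string, name: string) =>
    name in values ? values[name] : match
  );
}
```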
Conversational question-answering over documents with ConversationalRetrievalQAChain:

```typescript
import { ConversationalRetrievalQAChain } from "@langchain/classic/chains";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// Create vector store with documents
const vectorStore = await MemoryVectorStore.fromTexts(
  ["Document 1 text...", "Document 2 text..."],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

// Create chain
const model = new ChatOpenAI({ model: "gpt-4" });
const chain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever()
);

// Use chain
const result = await chain.call({
  question: "What is in the documents?",
  chat_history: [],
});
console.log(result.text);
```
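For multi-turn conversations, chat_history carries the earlier exchanges so the chain can rephrase follow-up questions. A small sketch of accumulating history across turns (the [question, answer] tuple shape is an assumption of this sketch; check the chain's documentation for the exact format your version expects):

```typescript
type ChatTurn = [question: string, answer: string];

// Append one question/answer exchange to the running history,
// returning a new array so earlier state is not mutated.
function appendTurn(history: ChatTurn[], question: string, answer: string): ChatTurn[] {
  return [...history, [question, answer]];
}
```

Each call's result would be appended before the next call, and the updated array passed as chat_history.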
For bug reports and issues, please open an issue on GitHub.
For questions and discussions, join our Discord community.
This package is licensed under the MIT License. See the LICENSE file for details.