LangChain Reference
langsmith.client.Client.generate_insights
Method · Since v0.4

generate_insights

Generate Insights over your agent chat histories.

Note
  • Only available to Plus and higher tier LangSmith users.
  • The Insights Agent uses your model API key. The cost of the report grows linearly with the number of chat histories you upload and with the size of each history. For more see insights.
  • This method will upload your chat histories as traces to LangSmith.
  • If you pass in a model API key, it will be set as a workspace secret, meaning it will also be used for evaluators and the playground.
generate_insights(
  self,
  *,
  chat_histories: list[list[dict]],
  instructions: str = DEFAULT_INSTRUCTIONS,
  name: str | None = None,
  model: Literal['openai', 'anthropic'] | None = None,
  openai_api_key: str | None = None,
  anthropic_api_key: str | None = None
) -> ls_schemas.InsightsReport

Example:

import os
from langsmith import Client

client = Client()

chat_histories = [
    [
        {"role": "user", "content": "how are you"},
        {"role": "assistant", "content": "good!"},
    ],
    [
        {"role": "user", "content": "do you like art"},
        {"role": "assistant", "content": "only Tarkovsky"},
    ],
]

report = client.generate_insights(
    chat_histories=chat_histories,
    name="Conversation Topics",
    instructions="What are the high-level topics of conversations users are having with the assistant?",
    openai_api_key=os.environ["OPENAI_API_KEY"],
)

# client.poll_insights(report=report)
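Because the API expects plain `{"role": ..., "content": ...}` dicts and caps `chat_histories` at 1000 items (see Parameters below), it can be convenient to normalize histories before calling `generate_insights`. A minimal sketch; `prepare_chat_histories` and `MAX_HISTORIES` are hypothetical helpers, not part of the langsmith SDK:

```python
# Hypothetical helper: strip extra message fields and enforce the
# documented 1000-history cap before calling generate_insights.
MAX_HISTORIES = 1000  # documented maximum for chat_histories

def prepare_chat_histories(
    histories: list[list[dict]],
) -> list[list[dict]]:
    """Keep only "role"/"content" keys and truncate to MAX_HISTORIES."""
    cleaned = [
        [{"role": m["role"], "content": m["content"]} for m in history]
        for history in histories
    ]
    return cleaned[:MAX_HISTORIES]

raw = [
    [
        # Extra keys (e.g. trace metadata) are dropped by the helper.
        {"role": "user", "content": "how are you", "id": "msg-1"},
        {"role": "assistant", "content": "good!", "id": "msg-2"},
    ],
]
print(prepare_chat_histories(raw))
```

The cleaned output can then be passed directly as the `chat_histories` argument.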

Parameters

chat_histories: list[list[dict]] (required)
    A list of chat histories. Each chat history should be a list of messages. We recommend formatting these as OpenAI messages with a "role" and "content" key. Max length 1000 items.

instructions: str
    Default: DEFAULT_INSTRUCTIONS
    Instructions for the Insights agent. Should focus on what your agent does and what types of insights you want to generate.

name: str | None
    Default: None
    Name for the generated Insights report.

model: Literal['openai', 'anthropic'] | None
    Default: None
    Whether to use OpenAI or Anthropic models. This will affect the cost of generating the Insights report.

openai_api_key: str | None
    Default: None
    OpenAI API key to use. Only needed if you have not already stored this in LangSmith as a workspace secret.

anthropic_api_key: str | None
    Default: None
    Anthropic API key to use. Only needed if you have not already stored this in LangSmith as a workspace secret.
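Since the `model` choice determines which API key parameter applies, a small dispatch helper can keep the pairing explicit. This is a hypothetical convenience wrapper (`resolve_model_kwargs` is not part of the SDK); `generate_insights` itself accepts `model`, `openai_api_key`, and `anthropic_api_key` directly:

```python
import os

def resolve_model_kwargs(model: str) -> dict:
    """Pair the chosen provider with its matching API-key kwarg.

    Hypothetical helper: reads the key from the environment; pass no
    key at all if it is already stored as a LangSmith workspace secret.
    """
    if model == "openai":
        return {
            "model": "openai",
            "openai_api_key": os.environ.get("OPENAI_API_KEY"),
        }
    if model == "anthropic":
        return {
            "model": "anthropic",
            "anthropic_api_key": os.environ.get("ANTHROPIC_API_KEY"),
        }
    raise ValueError(f"unknown model provider: {model!r}")
```

The result can be splatted into the call, e.g. `client.generate_insights(chat_histories=..., **resolve_model_kwargs("anthropic"))`.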
