langsmith.client.Client.create_feedback
Method · Since v0.0

create_feedback

Create feedback for a run.

Note

To enable feedback to be batch-uploaded in the background, you must specify trace_id. We highly encourage this in latency-sensitive environments.

create_feedback(
  self,
  run_id: Optional[ID_TYPE] = None,
  key: str = 'unnamed',
  *,
  score: Union[float, int, bool, None] = None,
  value: Union[float, int, bool, str, dict, None] = None,
  trace_id: Optional[ID_TYPE] = None,
  correction: Union[dict, None] = None,
  comment: Union[str, None] = None,
  source_info: Optional[dict[str, Any]] = None,
  feedback_source_type: Union[ls_schemas.FeedbackSourceType, str] = ls_schemas.FeedbackSourceType.API,
  source_run_id: Optional[ID_TYPE] = None,
  feedback_id: Optional[ID_TYPE] = None,
  feedback_config: Optional[ls_schemas.FeedbackConfig] = None,
  stop_after_attempt: int = 10,
  project_id: Optional[ID_TYPE] = None,
  comparative_experiment_id: Optional[ID_TYPE] = None,
  feedback_group_id: Optional[ID_TYPE] = None,
  extra: Optional[dict] = None,
  error: Optional[bool] = None,
  session_id: Optional[ID_TYPE] = None,
  start_time: Optional[datetime.datetime] = None,
  **kwargs: Any
) -> ls_schemas.Feedback

Example:

from langsmith import trace, traceable, Client

@traceable
def foo(x):
    return {"y": x * 2}

@traceable
def bar(y):
    return {"z": y - 1}

client = Client()

inputs = {"x": 1}
with trace(name="foobar", inputs=inputs) as root_run:
    result = foo(**inputs)
    result = bar(**result)
    root_run.outputs = result
    trace_id = root_run.id
    child_runs = root_run.child_runs

# Provide feedback for a trace (a.k.a. a root run)
client.create_feedback(
    key="user_feedback",
    score=1,
    trace_id=trace_id,
)

# Provide feedback for a child run
foo_run_id = [run for run in child_runs if run.name == "foo"][0].id
client.create_feedback(
    key="correctness",
    score=0,
    run_id=foo_run_id,
    # trace_id= is optional but recommended to enable batched and backgrounded
    # feedback ingestion.
    trace_id=trace_id,
)
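
Feedback can also carry richer context alongside the score; here is a minimal sketch reusing the IDs from the example above (the value, comment, and correction shown are illustrative only):

client.create_feedback(
    key="correctness",
    score=0,
    value="incorrect",  # display / non-numeric value for this feedback
    comment="Output did not match the expected result.",  # justification for the score
    correction={"y": 2},  # proper ground truth for the run
    run_id=foo_run_id,
    trace_id=trace_id,
)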

Parameters

key (str), default: 'unnamed'
The name of the feedback metric.

score (Optional[Union[float, int, bool]]), default: None
The score to rate this run on the metric or aspect.

value (Optional[Union[float, int, bool, str, dict]]), default: None
The display value or non-numeric value for this feedback.

run_id (Optional[Union[UUID, str]]), default: None
The ID of the run to provide feedback for. At least one of run_id, trace_id, or project_id must be specified.

trace_id (Optional[Union[UUID, str]]), default: None
The ID of the trace (i.e. the root parent run) of the run to provide feedback for (specified by run_id). If run_id and trace_id are the same, only trace_id needs to be specified. Note: trace_id is required for feedback ingestion to be batched and backgrounded.

correction (Optional[dict]), default: None
The proper ground truth for this run.

comment (Optional[str]), default: None
A comment about this feedback, such as a justification for the score or a chain-of-thought trajectory for an LLM judge.

source_info (Optional[Dict[str, Any]]), default: None
Information about the source of this feedback.

feedback_source_type (Union[FeedbackSourceType, str]), default: ls_schemas.FeedbackSourceType.API
The type of feedback source, such as model (for model-generated feedback) or API.

source_run_id (Optional[Union[UUID, str]]), default: None
The ID of the run that generated this feedback, if it is of the "model" type.

feedback_id (Optional[Union[UUID, str]]), default: None
The ID of the feedback to create. If not provided, a random UUID will be generated.

feedback_config (Optional[FeedbackConfig]), default: None
The configuration specifying how to interpret feedback with this key. Examples include continuous (with min/max bounds), categorical, or freeform. See the sketch after this parameter list.

stop_after_attempt (int), default: 10
The number of times to retry the request before giving up.

project_id (Optional[Union[UUID, str]]), default: None
The ID of the project (or experiment) to provide feedback on. This is used for creating summary metrics for experiments. run_id and trace_id cannot be specified if project_id is specified, and vice versa. See the sketch after this parameter list.

comparative_experiment_id (Optional[Union[UUID, str]]), default: None
If this feedback was logged as part of a comparative experiment, this associates the feedback with that experiment.

feedback_group_id (Optional[Union[UUID, str]]), default: None
When logging preferences, ranking runs, or other comparative feedback, this is used to group feedback together.

extra (Optional[Dict]), default: None
Metadata for the feedback.

session_id (Optional[Union[UUID, str]]), default: None
The session (project) ID of the run this feedback is for. Used to optimize feedback ingestion by avoiding server-side lookups.

start_time (Optional[datetime.datetime]), default: None
The start time of the run this feedback is for. Used to optimize feedback ingestion by avoiding server-side lookups.

**kwargs (Any)
Additional keyword arguments.
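
Below is a minimal sketch of a few of the less common options above. It assumes placeholder run, trace, and experiment UUIDs, and it assumes FeedbackConfig takes type/min/max fields for a continuous metric (consult ls_schemas.FeedbackConfig for the exact shape):

from uuid import uuid4

from langsmith import Client

client = Client()

# Placeholder IDs; substitute real run/trace/experiment UUIDs.
trace_id = "00000000-0000-0000-0000-000000000000"
experiment_id = "11111111-1111-1111-1111-111111111111"
run_a, run_b = uuid4(), uuid4()

# Continuous feedback with explicit bounds so the score can be interpreted.
# The dict below assumes FeedbackConfig's type/min/max fields.
client.create_feedback(
    key="helpfulness",
    score=0.75,
    trace_id=trace_id,
    feedback_config={"type": "continuous", "min": 0, "max": 1},
)

# Project-level (experiment summary) feedback; run_id and trace_id must be omitted.
client.create_feedback(
    key="avg_correctness",
    score=0.9,
    project_id=experiment_id,
)

# Comparative feedback: a shared feedback_group_id ties the two entries together.
group_id = uuid4()
client.create_feedback(key="preference", score=1, run_id=run_a, feedback_group_id=group_id)
client.create_feedback(key="preference", score=0, run_id=run_b, feedback_group_id=group_id)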

View source on GitHub