Create feedback for a run.
create_feedback(
    self,
    run_id: Optional[ls_client.ID_TYPE],
    key: str,
    score: Optional[float] = None,
    value: Union[float, int, bool, str, dict, None] = None,
    comment: Optional[str] = None,
    **kwargs: Any,
) -> ls_schemas.Feedback

| Name | Type | Description |
|---|---|---|
| run_id* | Optional[ls_client.ID_TYPE] | The ID of the run to provide feedback for. Can be None. |
| key* | str | The name of the metric or aspect this feedback is about. |
| score | Optional[float] | Default: None. The score to rate this run on the metric or aspect. |
| value | Union[float, int, bool, str, dict, None] | Default: None. The display value or non-numeric value for this feedback. |
| comment | Optional[str] | Default: None. A comment about this feedback. |
| **kwargs | Any | Default: {}. Additional keyword arguments to include in the feedback data. |
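
For example, a minimal usage sketch: the run ID below is a placeholder, and the client is assumed to read its API key from the environment.

```python
from langsmith import Client

client = Client()  # assumes the LangSmith API key is set in the environment

# Hypothetical run ID; in practice this comes from a traced run.
run_id = "9a8b7c6d-1234-5678-90ab-cdef12345678"

# Attach a numeric score and a human-readable comment to the run.
feedback = client.create_feedback(
    run_id,
    key="correctness",
    score=0.9,
    comment="Answer matched the reference output.",
)
print(feedback.id)  # the created Feedback record's ID
```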