Parse the output of an LLM call using a wrapped parser.
```
aparse_with_prompt(
    self,
    completion: str,
    prompt_value: PromptValue
) -> T
```

| Name | Type | Description |
|---|---|---|
| `completion`* | `str` | The chain completion to parse. |
| `prompt_value`* | `PromptValue` | The prompt to use to parse the completion. |
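A minimal sketch of the pattern this method implements: a wrapped parser first tries to parse the completion directly, and on failure falls back to logic that has access to the original prompt. The `PromptValue` stand-in and the `RetryingParser` class below are hypothetical, for illustration only; a real implementation would re-invoke the LLM with `prompt_value` instead of the simple digit-recovery fallback shown here.

```python
import asyncio
from dataclasses import dataclass


# Hypothetical stand-in for the framework's PromptValue type.
@dataclass
class PromptValue:
    text: str


class RetryingParser:
    """Illustrative parser following the aparse_with_prompt contract."""

    async def aparse_with_prompt(
        self, completion: str, prompt_value: PromptValue
    ) -> int:
        try:
            # Happy path: the completion parses directly.
            return int(completion)
        except ValueError:
            # A real wrapped parser would retry the LLM call using
            # prompt_value here; this fallback just recovers the digits.
            digits = "".join(ch for ch in completion if ch.isdigit())
            return int(digits)


parser = RetryingParser()
result = asyncio.run(
    parser.aparse_with_prompt("answer: 42", PromptValue(text="What is 6 * 7?"))
)
print(result)  # → 42
```

Because the method is a coroutine, callers must await it (or drive it with `asyncio.run` as above); this is what distinguishes it from a synchronous `parse_with_prompt` counterpart.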