Patch the OpenAI client to make it traceable.
```python
wrap_openai(
    client: C,
    *,
    tracing_extra: Optional[TracingExtra] = None,
    chat_name: str = 'ChatOpenAI',
    completions_name: str = 'OpenAI'
) -> C
```

Supports:

- `create` and `parse` methods
- `with_raw_response` API for accessing HTTP headers

Example:
```python
import openai
from langsmith import wrappers

# Use the wrapped OpenAI client the same as you normally would.
client = wrappers.wrap_openai(openai.OpenAI())

# Chat Completions API:
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": "What physics breakthroughs do you predict will happen by 2300?",
    },
]
completion = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages
)
print(completion.choices[0].message.content)

# Responses API (takes `input` rather than `messages`):
response = client.responses.create(
    model="gpt-4o-mini",
    input=messages,
)
print(response.output_text)

# With raw response, to access HTTP headers:
raw_response = client.chat.completions.with_raw_response.create(
    model="gpt-4o-mini", messages=messages
)
print(raw_response.headers)  # Access HTTP headers
completion = raw_response.parse()  # Get parsed response
```
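The wrapper also traces the structured-output `parse` method. A minimal sketch, assuming an OpenAI SDK version where `client.chat.completions.parse` is available (older versions expose it as `client.beta.chat.completions.parse`); the `Prediction` model and prompt are illustrative:

```python
import openai
from pydantic import BaseModel

from langsmith import wrappers

client = wrappers.wrap_openai(openai.OpenAI())

# Illustrative response schema for structured outputs.
class Prediction(BaseModel):
    year: int
    breakthrough: str

# Assumption: `chat.completions.parse` is available on this SDK version.
parsed = client.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Predict one physics breakthrough."}],
    response_format=Prediction,
)
print(parsed.choices[0].message.parsed)  # Prediction instance
```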
- langsmith 0.3.16: Support for the Responses API added.
- langsmith 0.3.x: Support for the with_raw_response API added.
| Name | Type | Description |
|---|---|---|
| client* | C | The client to patch. |
| tracing_extra | Optional[TracingExtra] | Default: `None`. Extra tracing information (see the example below). |
| chat_name | str | Default: `'ChatOpenAI'`. The run name for the chat completions endpoint. |
| completions_name | str | Default: `'OpenAI'`. The run name for the completions endpoint. |
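A hedged sketch of the keyword arguments in use. The `metadata` and `tags` keys are the commonly documented `TracingExtra` fields, but treat the exact shape and the values shown here as assumptions:

```python
import openai

from langsmith import wrappers

# Assumption: TracingExtra accepts `metadata` and `tags` keys;
# the values are illustrative.
client = wrappers.wrap_openai(
    openai.OpenAI(),
    tracing_extra={"metadata": {"app": "demo"}, "tags": ["prod"]},
    chat_name="MyChatRun",  # run name for chat completions traces
    completions_name="MyCompletionsRun",  # run name for completions traces
)
```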