Wrappers

This module provides convenient tracing wrappers for popular libraries.
| FUNCTION | DESCRIPTION |
|---|---|
| wrap_anthropic | Patch the Anthropic client to make it traceable. |
| wrap_gemini | Patch the Google Gen AI client to make it traceable. |
| wrap_openai | Patch the OpenAI client to make it traceable. |
wrap_anthropic

Patch the Anthropic client to make it traceable.
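The signature is not rendered on this page; the sketch below is reconstructed from the parameter table and the pattern of the other wrappers:

```python
wrap_anthropic(
    client: C,
    *,
    tracing_extra: TracingExtra | None = None,
) -> C
```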
| PARAMETER | DESCRIPTION |
|---|---|
| client | The client to patch. TYPE: C |
| tracing_extra | Extra tracing information. TYPE: TracingExtra \| None |

| RETURNS | DESCRIPTION |
|---|---|
| C | The patched client. |
Example

```python
import anthropic
from langsmith import wrappers

client = wrappers.wrap_anthropic(anthropic.Anthropic())

# Use Anthropic client same as you normally would:
system = "You are a helpful assistant."
messages = [
    {
        "role": "user",
        "content": "What physics breakthroughs do you predict will happen by 2300?",
    }
]
completion = client.messages.create(
    model="claude-3-5-sonnet-latest",
    messages=messages,
    max_tokens=1000,
    system=system,
)
print(completion.content)

# You can also use the streaming context manager:
with client.messages.stream(
    model="claude-3-5-sonnet-latest",
    messages=messages,
    max_tokens=1000,
    system=system,
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
    message = stream.get_final_message()
```
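The `tracing_extra` parameter attaches extra information to the traced runs; a minimal sketch, assuming the `TracingExtra` dict accepts `metadata` and `tags` keys (the `"docs-demo"` metadata is hypothetical):

```python
import anthropic
from langsmith import wrappers

# Assumption: TracingExtra supports "metadata" and "tags" entries.
client = wrappers.wrap_anthropic(
    anthropic.Anthropic(),
    tracing_extra={"metadata": {"app": "docs-demo"}, "tags": ["anthropic"]},
)
```

Every run produced by this client then carries that metadata and those tags.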
wrap_gemini

```python
wrap_gemini(
    client: C,
    *,
    tracing_extra: TracingExtra | None = None,
    chat_name: str = "ChatGoogleGenerativeAI",
) -> C
```

Patch the Google Gen AI client to make it traceable.
Warning

BETA: This wrapper is in beta.

Supports

- `generate_content` and `generate_content_stream` methods
- Sync and async clients
- Streaming and non-streaming responses
- Tool/function calling with proper UI rendering
- Multimodal inputs (text + images)
- Image generation with `inline_data` support
- Token usage tracking including reasoning tokens
| PARAMETER | DESCRIPTION |
|---|---|
| client | The Google Gen AI client to patch. TYPE: C |
| tracing_extra | Extra tracing information. TYPE: TracingExtra \| None |
| chat_name | The run name for the chat endpoint. TYPE: str |

| RETURNS | DESCRIPTION |
|---|---|
| C | The patched client. |
Example

```python
from google import genai
from google.genai import types
from langsmith import wrappers

# Use Google Gen AI client same as you normally would.
client = wrappers.wrap_gemini(genai.Client(api_key="your-api-key"))

# Basic text generation:
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Why is the sky blue?",
)
print(response.text)

# Streaming:
for chunk in client.models.generate_content_stream(
    model="gemini-2.5-flash",
    contents="Tell me a story",
):
    print(chunk.text, end="")

# Tool/Function calling:
schedule_meeting_function = {
    "name": "schedule_meeting",
    "description": "Schedules a meeting with specified attendees.",
    "parameters": {
        "type": "object",
        "properties": {
            "attendees": {"type": "array", "items": {"type": "string"}},
            "date": {"type": "string"},
            "time": {"type": "string"},
            "topic": {"type": "string"},
        },
        "required": ["attendees", "date", "time", "topic"],
    },
}
tools = types.Tool(function_declarations=[schedule_meeting_function])
config = types.GenerateContentConfig(tools=[tools])
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Schedule a meeting with Bob and Alice tomorrow at 2 PM.",
    config=config,
)

# Image generation:
response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=["Create a picture of a futuristic city"],
)

# Save generated image:
from io import BytesIO

from PIL import Image

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        image = Image.open(BytesIO(part.inline_data.data))
        image.save("generated_image.png")
```
Added in langsmith 0.4.33
Initial beta release of Google Gemini wrapper.
wrap_openai

```python
wrap_openai(
    client: C,
    *,
    tracing_extra: TracingExtra | None = None,
    chat_name: str = "ChatOpenAI",
    completions_name: str = "OpenAI",
) -> C
```

Patch the OpenAI client to make it traceable.
Supports

- Chat and Responses APIs
- Sync and async OpenAI clients
- `create` and `parse` methods (see the `parse` sketch below)
- With and without streaming
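A minimal sketch of the `parse` path, assuming a recent `openai` SDK where `chat.completions.parse` is available (older releases expose it as `beta.chat.completions.parse`):

```python
import openai
from pydantic import BaseModel

from langsmith import wrappers


class Answer(BaseModel):
    summary: str


client = wrappers.wrap_openai(openai.OpenAI())

# parse() validates the completion against the pydantic schema.
completion = client.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize why the sky is blue."}],
    response_format=Answer,
)
print(completion.choices[0].message.parsed)
```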
| PARAMETER | DESCRIPTION |
|---|---|
| client | The client to patch. TYPE: C |
| tracing_extra | Extra tracing information. TYPE: TracingExtra \| None |
| chat_name | The run name for the chat completions endpoint. TYPE: str |
| completions_name | The run name for the completions endpoint. TYPE: str |

| RETURNS | DESCRIPTION |
|---|---|
| C | The patched client. |
Example

```python
import openai
from langsmith import wrappers

# Use OpenAI client same as you normally would.
client = wrappers.wrap_openai(openai.OpenAI())

# Chat API:
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": "What physics breakthroughs do you predict will happen by 2300?",
    },
]
completion = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages
)
print(completion.choices[0].message.content)

# Responses API (takes `input` rather than `messages`):
response = client.responses.create(
    model="gpt-4o-mini",
    input=messages,
)
print(response.output_text)
```
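The wrapper also traces streaming calls (see the Supports list above); a minimal sketch:

```python
import openai
from langsmith import wrappers

client = wrappers.wrap_openai(openai.OpenAI())

# Stream tokens; the full run is traced once the stream is consumed.
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```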
Behavior changed in langsmith 0.3.16
Support for Responses API added.