LangChain Reference
Python › langsmith › wrappers_gemini › wrap_gemini
Function · Since v0.4

wrap_gemini

Patch the Google Gen AI client to make it traceable.

Warning

This wrapper is in beta.

wrap_gemini(
  client: C,
  *,
  tracing_extra: Optional[TracingExtra] = None,
  chat_name: str = 'ChatGoogleGenerativeAI'
) -> C

Supports:

  • generate_content and generate_content_stream methods
  • Sync and async clients
  • Streaming and non-streaming responses
  • Tool/function calling with proper UI rendering
  • Multimodal inputs (text + images)
  • Image generation with inline_data support
  • Token usage tracking including reasoning tokens

Example:

from google import genai
from google.genai import types
from langsmith import wrappers

# Use the Google Gen AI client the same way you normally would.
client = wrappers.wrap_gemini(genai.Client(api_key="your-api-key"))

# Basic text generation:
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Why is the sky blue?",
)
print(response.text)

# Streaming:
for chunk in client.models.generate_content_stream(
    model="gemini-2.5-flash",
    contents="Tell me a story",
):
    print(chunk.text, end="")

# Tool/Function calling:
schedule_meeting_function = {
    "name": "schedule_meeting",
    "description": "Schedules a meeting with specified attendees.",
    "parameters": {
        "type": "object",
        "properties": {
            "attendees": {"type": "array", "items": {"type": "string"}},
            "date": {"type": "string"},
            "time": {"type": "string"},
            "topic": {"type": "string"},
        },
        "required": ["attendees", "date", "time", "topic"],
    },
}

tools = types.Tool(function_declarations=[schedule_meeting_function])
config = types.GenerateContentConfig(tools=[tools])

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Schedule a meeting with Bob and Alice tomorrow at 2 PM.",
    config=config,
)

# Image generation:
response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=["Create a picture of a futuristic city"],
)

# Save generated image
from io import BytesIO
from PIL import Image

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        image = Image.open(BytesIO(part.inline_data.data))
        image.save("generated_image.png")
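The tool-calling example above sends a request with a function declaration but does not show reading the model's function call back out of the response. A minimal sketch of walking the response parts and collecting any function calls; the nested `candidates → content → parts → function_call` shape matches the google-genai response types, but the `SimpleNamespace` stand-in below is only for illustration and carries no real API call:

```python
from types import SimpleNamespace


def extract_function_calls(response):
    """Collect (name, args) pairs from every response part that carries a function_call."""
    calls = []
    for candidate in response.candidates:
        for part in candidate.content.parts:
            fc = getattr(part, "function_call", None)
            if fc is not None:
                calls.append((fc.name, dict(fc.args)))
    return calls


# Stand-in object mimicking the shape of a google-genai tool-calling response.
fake_response = SimpleNamespace(
    candidates=[
        SimpleNamespace(
            content=SimpleNamespace(
                parts=[
                    SimpleNamespace(
                        function_call=SimpleNamespace(
                            name="schedule_meeting",
                            args={
                                "attendees": ["Bob", "Alice"],
                                "date": "2025-01-02",
                                "time": "14:00",
                                "topic": "Sync",
                            },
                        )
                    )
                ]
            )
        )
    ]
)

calls = extract_function_calls(fake_response)
```

With a real traced client, the same helper would be applied to the `response` returned by `client.models.generate_content(..., config=config)`.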

Initial beta release of Google Gemini wrapper.

Parameters

client: C (required)
    The Google Gen AI client to patch.

tracing_extra: Optional[TracingExtra], default None
    Extra tracing information.

chat_name: str, default 'ChatGoogleGenerativeAI'
    The run name for the chat endpoint.
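The keyword parameters above can be passed at wrap time. A hedged sketch, assuming `tracing_extra` accepts the same `metadata`/`tags` keys used by the other langsmith wrappers (the metadata values and run name here are illustrative, not defaults):

```python
from google import genai
from langsmith import wrappers

# Attach run metadata and tags to every traced call, and override the run name.
client = wrappers.wrap_gemini(
    genai.Client(api_key="your-api-key"),
    tracing_extra={"metadata": {"app": "demo"}, "tags": ["gemini"]},
    chat_name="GeminiChat",
)
```

The wrapped client is then used exactly as in the examples above; only the trace metadata and run name differ.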
