Moderate content using OpenAI's Moderation API.
This method checks whether content violates OpenAI's content policy by analyzing text across categories including hate, harassment, self-harm, sexual content, and violence.
```typescript
moderateContent(input: string | string[], params?: { model?: string }): Promise<ModerationCreateResponse>
```

| Name | Type | Description |
|---|---|---|
| input* | string \| string[] | The text or array of texts to moderate |
| params | { model?: string } | Optional parameters for the moderation request, such as the moderation model to use |
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Moderate a single text
const result = await model.moderateContent("This is a test message");
console.log(result.results[0].flagged); // false
console.log(result.results[0].categories); // { hate: false, harassment: false, ... }

// Moderate multiple texts in one request
const results = await model.moderateContent([
  "Hello, how are you?",
  "This is inappropriate content",
]);
results.results.forEach((result, index) => {
  console.log(`Text ${index + 1} flagged:`, result.flagged);
});
```
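Each result also carries numeric per-category scores alongside the boolean flags. A minimal sketch of applying your own threshold, assuming the response follows the OpenAI SDK's `ModerationCreateResponse` shape with a `category_scores` field:

```typescript
// Sketch: apply a custom threshold to the numeric category scores
// instead of relying solely on the API's boolean `flagged` field.
// Assumes `model` is the ChatOpenAI instance created above.
const scored = await model.moderateContent("Some user-submitted text");
const scores = scored.results[0].category_scores;
for (const [category, score] of Object.entries(scores)) {
  if (typeof score === "number" && score > 0.5) {
    console.log(`Category "${category}" exceeds threshold: ${score}`);
  }
}
```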
You can also pin the request to a specific moderation model via the optional `params` argument:

```typescript
// Use a specific moderation model
const stableResult = await model.moderateContent(
  "Test content",
  { model: "omni-moderation-latest" }
);
```
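A common usage pattern is to screen user input before forwarding it to the chat model; the method itself does not block anything, so your application decides what to do with flagged content. A sketch under that assumption:

```typescript
// Sketch: reject flagged input before it ever reaches the chat model.
// Assumes `model` is the ChatOpenAI instance created above; `invoke`
// is the standard LangChain method for a single chat completion.
async function safeInvoke(userInput: string) {
  const moderation = await model.moderateContent(userInput);
  if (moderation.results[0].flagged) {
    throw new Error("Input rejected by content moderation");
  }
  return model.invoke(userInput);
}
```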