---
name: azure-ai-contentsafety-ts
description: Analyze text and images for harmful content using Azure AI Content Safety (@azure-rest/ai-content-safety). Use when moderating user-generated content, detecting hate speech, violence, sexual content, or self-harm, or managing custom blocklists.
package: "@azure-rest/ai-content-safety"
---

# Azure AI Content Safety REST SDK for TypeScript

Analyze text and images for harmful content with customizable blocklists.

## Installation

```bash
npm install @azure-rest/ai-content-safety @azure/identity @azure/core-auth
```

## Environment Variables

```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<api-key>
```

## Authentication

**Important**: This is a REST client. `ContentSafetyClient` is a **function**, not a class.

### API Key

```typescript
import ContentSafetyClient from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new AzureKeyCredential(process.env.CONTENT_SAFETY_KEY!)
);
```

### DefaultAzureCredential

```typescript
import ContentSafetyClient from "@azure-rest/ai-content-safety";
import { DefaultAzureCredential } from "@azure/identity";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new DefaultAzureCredential()
);
```

## Analyze Text

```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";

const result = await client.path("/text:analyze").post({
  body: {
    text: "Text content to analyze",
    categories: ["Hate", "Sexual", "Violence", "SelfHarm"],
    outputType: "FourSeverityLevels" // or "EightSeverityLevels"
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

for (const analysis of result.body.categoriesAnalysis) {
  console.log(`${analysis.category}: severity ${analysis.severity}`);
}
```

## Analyze Image

### Base64 Content

```typescript
import { readFileSync } from "node:fs";

const imageBuffer = readFileSync("./image.png");
const base64Image = imageBuffer.toString("base64");

const result = await client.path("/image:analyze").post({
  body: {
    image: { content: base64Image }
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

for (const analysis of result.body.categoriesAnalysis) {
  console.log(`${analysis.category}: severity ${analysis.severity}`);
}
```

### Blob URL

```typescript
const result = await client.path("/image:analyze").post({
  body: {
    image: { blobUrl: "https://storage.blob.core.windows.net/container/image.png" }
  }
});
```

## Blocklist Management

### Create Blocklist

```typescript
const result = await client
  .path("/text/blocklists/{blocklistName}", "my-blocklist")
  .patch({
    contentType: "application/merge-patch+json",
    body: {
      description: "Custom blocklist for prohibited terms"
    }
  });

if (isUnexpected(result)) {
  throw result.body;
}

console.log(`Created: ${result.body.blocklistName}`);
```

### Add Items to Blocklist

```typescript
const result = await client
  .path("/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems", "my-blocklist")
  .post({
    body: {
      blocklistItems: [
        { text: "prohibited-term-1", description: "First blocked term" },
        { text: "prohibited-term-2", description: "Second blocked term" }
      ]
    }
  });

if (isUnexpected(result)) {
  throw result.body;
}

for (const item of result.body.blocklistItems ?? []) {
  console.log(`Added: ${item.blocklistItemId}`);
}
```

### Analyze with Blocklist

```typescript
const result = await client.path("/text:analyze").post({
  body: {
    text: "Text that might contain blocked terms",
    blocklistNames: ["my-blocklist"],
    haltOnBlocklistHit: false
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

// Check blocklist matches
if (result.body.blocklistsMatch) {
  for (const match of result.body.blocklistsMatch) {
    console.log(`Blocked: "${match.blocklistItemText}" from ${match.blocklistName}`);
  }
}
```

### List Blocklists

```typescript
const result = await client.path("/text/blocklists").get();

if (isUnexpected(result)) {
  throw result.body;
}

for (const blocklist of result.body.value ?? []) {
  console.log(`${blocklist.blocklistName}: ${blocklist.description}`);
}
```

### Delete Blocklist

```typescript
await client.path("/text/blocklists/{blocklistName}", "my-blocklist").delete();
```

## Harm Categories

| Category | API Term | Description |
|----------|----------|-------------|
| Hate and Fairness | `Hate` | Discriminatory language targeting identity groups |
| Sexual | `Sexual` | Sexual content, nudity, pornography |
| Violence | `Violence` | Physical harm, weapons, terrorism |
| Self-Harm | `SelfHarm` | Self-injury, suicide, eating disorders |

## Severity Levels

| Level | Risk | Recommended Action |
|-------|------|--------------------|
| 0 | Safe | Allow |
| 2 | Low | Review or allow with warning |
| 4 | Medium | Block or require human review |
| 6 | High | Block immediately |

**Output Types**:
- `FourSeverityLevels` (default): Returns 0, 2, 4, 6
- `EightSeverityLevels`: Returns 0-7

## Content Moderation Helper

```typescript
import ContentSafetyClient, {
  isUnexpected,
  TextCategoriesAnalysisOutput
} from "@azure-rest/ai-content-safety";

interface ModerationResult {
  isAllowed: boolean;
  flaggedCategories: string[];
  maxSeverity: number;
  blocklistMatches: string[];
}

async function moderateContent(
  client: ReturnType<typeof ContentSafetyClient>,
  text: string,
  maxAllowedSeverity = 2,
  blocklistNames: string[] = []
): Promise<ModerationResult> {
  const result = await client.path("/text:analyze").post({
    body: { text, blocklistNames, haltOnBlocklistHit: false }
  });

  if (isUnexpected(result)) {
    throw result.body;
  }

  const flaggedCategories = result.body.categoriesAnalysis
    .filter(c => (c.severity ?? 0) > maxAllowedSeverity)
    .map(c => c.category!);

  const maxSeverity = Math.max(
    ...result.body.categoriesAnalysis.map(c => c.severity ?? 0)
  );

  const blocklistMatches = (result.body.blocklistsMatch ?? [])
    .map(m => m.blocklistItemText!);

  return {
    isAllowed: flaggedCategories.length === 0 && blocklistMatches.length === 0,
    flaggedCategories,
    maxSeverity,
    blocklistMatches
  };
}
```

## API Endpoints

| Operation | Method | Path |
|-----------|--------|------|
| Analyze Text | POST | `/text:analyze` |
| Analyze Image | POST | `/image:analyze` |
| Create/Update Blocklist | PATCH | `/text/blocklists/{blocklistName}` |
| List Blocklists | GET | `/text/blocklists` |
| Delete Blocklist | DELETE | `/text/blocklists/{blocklistName}` |
| Add Blocklist Items | POST | `/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems` |
| List Blocklist Items | GET | `/text/blocklists/{blocklistName}/blocklistItems` |
| Remove Blocklist Items | POST | `/text/blocklists/{blocklistName}:removeBlocklistItems` |

## Key Types

```typescript
import ContentSafetyClient, {
  isUnexpected,
  AnalyzeTextParameters,
  AnalyzeImageParameters,
  TextCategoriesAnalysisOutput,
  ImageCategoriesAnalysisOutput,
  TextBlocklist,
  TextBlocklistItem
} from "@azure-rest/ai-content-safety";
```

## Best Practices

1. **Always use `isUnexpected()`** - Type guard for error handling
2. **Set appropriate thresholds** - Different categories may need different severity thresholds
3. **Use blocklists for domain-specific terms** - Supplement AI detection with custom rules
4. **Log moderation decisions** - Keep audit trail for compliance
5. **Handle edge cases** - Empty text, very long text, unsupported image formats
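The severity table earlier maps directly onto a small pure helper, which also illustrates point 2 above (per-category thresholds can be layered on top). This is a sketch; `recommendAction` and `ModerationAction` are illustrative names, not part of the SDK:

```typescript
type ModerationAction = "allow" | "review" | "block";

// Map a FourSeverityLevels score (0, 2, 4, 6) to the recommended action
// from the Severity Levels table. Thresholds here are illustrative;
// tune them per category for your application.
function recommendAction(severity: number): ModerationAction {
  if (severity <= 0) return "allow";
  if (severity <= 2) return "review"; // low risk: review or allow with warning
  return "block";                     // medium/high risk: block
}
```

Because the function is pure, the full input space (0, 2, 4, 6) can be covered in a trivial unit test.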
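In a helper like `moderateContent` above, the service call is the only impure step, so the accept/reject decision can be factored into a pure function and unit-tested against hand-written response shapes without network access. A sketch under assumed minimal types; `isContentAllowed` is a hypothetical name, not an SDK export:

```typescript
// Minimal shapes mirroring the /text:analyze response fields used here;
// the real SDK exports richer output types (see Key Types).
interface CategoryAnalysis { category?: string; severity?: number; }
interface BlocklistMatch { blocklistItemText?: string; }

// Pure decision step: content is allowed only when no category exceeds
// the threshold and no blocklist item matched. Hypothetical helper.
function isContentAllowed(
  categories: CategoryAnalysis[],
  matches: BlocklistMatch[],
  maxAllowedSeverity = 2
): boolean {
  const flagged = categories.some(c => (c.severity ?? 0) > maxAllowedSeverity);
  return !flagged && matches.length === 0;
}
```

Keeping this step pure makes the edge cases from the best-practices list (empty input, missing `severity` fields) easy to cover in tests.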