---
name: azure-ai-contentsafety-py
description: |
  Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification.
  Triggers: "azure-ai-contentsafety", "ContentSafetyClient", "content moderation", "harmful content", "text analysis", "image analysis".
package: azure-ai-contentsafety
---

# Azure AI Content Safety SDK for Python

Detect harmful user-generated and AI-generated content in applications.

## Installation

```bash
pip install azure-ai-contentsafety
```

## Environment Variables

```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<your-api-key>
```

## Authentication

### API Key

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)
```

### Entra ID

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.identity import DefaultAzureCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=DefaultAzureCredential()
)
```

## Analyze Text

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

request = AnalyzeTextOptions(text="Your text content to analyze")
response = client.analyze_text(request)

# Check each category
for category in [TextCategory.HATE, TextCategory.SELF_HARM,
                 TextCategory.SEXUAL, TextCategory.VIOLENCE]:
    result = next((r for r in response.categories_analysis
                   if r.category == category), None)
    if result:
        print(f"{category}: severity {result.severity}")
```

## Analyze Image

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# From file — ImageData takes the raw bytes; the SDK handles base64 encoding
with open("image.jpg", "rb") as f:
    image_data = f.read()

request = AnalyzeImageOptions(
    image=ImageData(content=image_data)
)

response = client.analyze_image(request)

for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```

### Image from URL

```python
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

request = AnalyzeImageOptions(
    image=ImageData(blob_url="https://example.com/image.jpg")
)

response = client.analyze_image(request)
```

## Text Blocklist Management

### Create Blocklist

```python
from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import TextBlocklist
from azure.core.credentials import AzureKeyCredential

blocklist_client = BlocklistClient(endpoint, AzureKeyCredential(key))

blocklist = TextBlocklist(
    blocklist_name="my-blocklist",
    description="Custom terms to block"
)

result = blocklist_client.create_or_update_text_blocklist(
    blocklist_name="my-blocklist",
    options=blocklist
)
```

### Add Block Items

```python
from azure.ai.contentsafety.models import AddOrUpdateTextBlocklistItemsOptions, TextBlocklistItem

items = AddOrUpdateTextBlocklistItemsOptions(
    blocklist_items=[
        TextBlocklistItem(text="blocked-term-1"),
        TextBlocklistItem(text="blocked-term-2")
    ]
)

result = blocklist_client.add_or_update_blocklist_items(
    blocklist_name="my-blocklist",
    options=items
)
```

### Analyze with Blocklist

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

request = AnalyzeTextOptions(
    text="Text containing blocked-term-1",
    blocklist_names=["my-blocklist"],
    halt_on_blocklist_hit=True
)

response = client.analyze_text(request)

if response.blocklists_match:
    for match in response.blocklists_match:
        print(f"Blocked: {match.blocklist_item_text}")
```

## Severity Levels

Text analysis returns 4 severity levels (0, 2, 4, 6) by default. For 8 levels (0-7):

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions, AnalyzeTextOutputType

request = AnalyzeTextOptions(
    text="Your text",
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
```

## Harm Categories

| Category | Description |
|----------|-------------|
| `Hate` | Attacks based on identity (race, religion, gender, etc.) |
| `Sexual` | Sexual content, relationships, anatomy |
| `Violence` | Physical harm, weapons, injury |
| `SelfHarm` | Self-injury, suicide, eating disorders |

## Severity Scale

| Level | Text Range | Image Range | Meaning |
|-------|------------|-------------|---------|
| 0 | Safe | Safe | No harmful content |
| 2 | Low | Low | Mild references |
| 4 | Medium | Medium | Moderate content |
| 6 | High | High | Severe content |

## Client Types

| Client | Purpose |
|--------|---------|
| `ContentSafetyClient` | Analyze text and images |
| `BlocklistClient` | Manage custom blocklists |

## Best Practices

1. **Use blocklists** for domain-specific terms
2. **Set severity thresholds** appropriate for your use case
3. **Handle multiple categories** — content can be harmful in multiple ways
4. **Use halt_on_blocklist_hit** for immediate rejection
5. **Log analysis results** for audit and improvement
6. **Consider 8-severity mode** for finer-grained control
7. **Pre-moderate AI outputs** before showing to users