
What is Azure AI Content Safety?

Azure AI Content Safety (AACS) is an Azure AI service that hosts the functionality of the RAI platform. AACS is a generally available (GA) service that is FedRAMP certified and carries a strict 99.9% availability SLA for all first-party and third-party customers. Customers can create an AACS resource in the Azure portal or programmatically via the Azure SDK. It supports both key authentication and AAD (Microsoft Entra ID) authentication.
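As a minimal sketch of the two authentication modes, the helper below builds the request headers for either key auth or AAD auth. The header names (`Ocp-Apim-Subscription-Key`, `Authorization: Bearer`) are the standard Azure conventions; the function itself is illustrative, not part of any SDK:

```python
def auth_headers(key=None, aad_token=None):
    """Build AACS request headers using either key or AAD (Entra ID) auth.

    Exactly one of `key` or `aad_token` must be provided. This helper is
    an illustrative sketch, not an official SDK function.
    """
    if (key is None) == (aad_token is None):
        raise ValueError("provide exactly one of key or aad_token")
    headers = {"Content-Type": "application/json"}
    if key is not None:
        # Key authentication: the resource key goes in a subscription-key header.
        headers["Ocp-Apim-Subscription-Key"] = key
    else:
        # AAD authentication: a token acquired from Entra ID goes in a Bearer header.
        headers["Authorization"] = f"Bearer {aad_token}"
    return headers
```

In practice you would obtain the AAD token from a credential library (for example, `DefaultAzureCredential` in the Azure SDK) rather than passing a raw string.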

📺Key features (1P only)

👉️RAI policy management (1P)

Manage RAI policies through standard RESTful interfaces.

👉️Analyze content with policy (1P)

Accepts AOAI-style conversation-structured requests and annotates harmful content contextually, supporting multiple modalities (text and image). For each request, customers can specify a policy to perform customized detection. The feature can be consumed over HTTP or via gRPC streaming.
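A rough sketch of assembling such a conversation-structured payload is below. The field names (`messages`, `policy`) are assumptions for illustration only; the actual 1P request contract may differ:

```python
import json

def build_policy_analyze_body(messages, policy_name):
    """Assemble an AOAI-style conversation payload for contextual analysis.

    NOTE: the "messages" and "policy" field names are illustrative
    assumptions, not the documented 1P wire format.
    """
    for m in messages:
        if m.get("role") not in {"system", "user", "assistant"}:
            raise ValueError(f"unexpected role: {m.get('role')}")
    return json.dumps({"messages": messages, "policy": policy_name})
```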

📺Key features (1P/3P)

👉️Content Moderation APIs (1P/3P)

Analyze text and images for all known harms, including Sexual, Violence, Self-harm, Hate Speech, Jailbreak, Cross-Domain Attack, and IP leakage in code and text.
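As a sketch, the public `text:analyze` route can be called over plain HTTP. The snippet below builds (but does not send) the request; the route shape follows the public REST reference, though you should verify the `api-version` current for your region:

```python
import json
from urllib import request

def build_text_analyze_request(endpoint, key, text, categories=None):
    """Build (but do not send) a Content Safety text:analyze request.

    `endpoint` is the resource endpoint, e.g.
    "https://<your-resource>.cognitiveservices.azure.com".
    The api-version below is an assumption; check the current one.
    """
    url = f"{endpoint}/contentsafety/text:analyze?api-version=2024-09-01"
    body = {"text": text}
    if categories:
        # Optionally restrict analysis to a subset of harm categories.
        body["categories"] = categories
    return request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request is then `urllib.request.urlopen(req)` or any HTTP client of your choice; the official Azure SDKs wrap this route for you.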

👉️Custom Category (1P/3P)

Build a custom harmful-content classifier from a few samples to create an emergency mitigation. This feature leverages a GPT-like model to augment the samples into a training dataset. Due to capacity limitations, this feature is only available in specific regions.
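A few-shot category definition might be assembled as below. The field names (`categoryName`, `examples`) are hypothetical placeholders for illustration; consult the Custom Category API reference for the actual schema:

```python
import json

def build_custom_category_body(category_name, examples):
    """Sketch of a few-shot custom-category definition payload.

    NOTE: field names here are illustrative assumptions, not the
    documented Custom Category contract.
    """
    if not examples:
        raise ValueError("at least one example is required")
    return json.dumps({"categoryName": category_name, "examples": examples})
```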

| Feature | Functionality | Concepts guide | Get started |
|---|---|---|---|
| Prompt Shields | Scans text for the risk of a user input attack on a large language model. | Prompt Shields concepts | Quickstart |
| Groundedness detection (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. | Groundedness detection concepts | Quickstart |
| Protected material text detection | Scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). | Protected material concepts | Quickstart |
| Protected material code detection | Scans AI-generated code for known code content (for example, code from public GitHub repositories). | Protected material concepts | Quickstart |
| Custom categories (standard) API (preview) | Lets you create and train your own custom content categories and scan text for matches. | Custom categories concepts | Quickstart |
| Custom categories (rapid) API (preview) | Lets you define emerging harmful content patterns and scan text and images for matches. | Custom categories concepts | How-to guide |
| Analyze text API | Scans text for sexual content, violence, hate, and self-harm with multiple severity levels. | Harm categories | Quickstart |
| Analyze image API | Scans images for sexual content, violence, hate, and self-harm with multiple severity levels. | Harm categories | Quickstart |
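The analyze APIs return per-category severity levels, and callers typically act on them by applying their own thresholds. The sketch below filters a response of the documented `categoriesAnalysis` shape; the threshold values themselves are a policy choice on the caller's side, not part of the API:

```python
def flagged_categories(analysis, thresholds, default_threshold=2):
    """Return the categories whose severity meets or exceeds a threshold.

    `analysis` mimics the Analyze text/image response shape, e.g.
    {"categoriesAnalysis": [{"category": "Hate", "severity": 4}, ...]}.
    Thresholds are the caller's policy choice, not an API feature.
    """
    hits = []
    for item in analysis.get("categoriesAnalysis", []):
        limit = thresholds.get(item["category"], default_threshold)
        if item["severity"] >= limit:
            hits.append(item["category"])
    return hits
```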

🌐Language support

For the language support of Responsible AI team models, refer to the Language support page. For other models, refer to the table below.

💻Region availability

To use the Content Safety APIs, you must create your Azure AI Content Safety resource in a supported region. For the region availability of Responsible AI team models, refer to the official external documentation. For other models, refer to the table below.

Feel free to contact us if your business needs availability in other regions.

📱Query rates

Content Safety features have query rate limits, expressed in requests per second (RPS) or requests per 10 seconds (RP10S). For the query rate limits, refer to the official external documentation.
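To stay under an RP10S-style quota, callers often pace requests client-side. Below is a minimal sliding-window limiter sketch; it only paces the caller, and the service still enforces its own limits regardless:

```python
import time
from collections import deque

class WindowLimiter:
    """Client-side sliding-window pacer, e.g. for a requests-per-10-seconds quota.

    Illustrative sketch only; the service enforces the real limit.
    """
    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # monotonic timestamps of recent calls

    def acquire(self):
        """Block until one more call is allowed inside the window."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call leaves the window.
            time.sleep(self.window - (now - self.calls[0]))
        self.calls.append(time.monotonic())
```

Usage is simply `limiter.acquire()` before each API request; for production use, also handle HTTP 429 responses with the server's `Retry-After` hint.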

If you need a faster rate, please contact us to request it.