Analyze content per model API

Prerequisites

  • An Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Content Safety resource in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region, and supported pricing tier. Then select Create.
  • The resource takes a few minutes to deploy. After it finishes, select Go to resource. In the left pane, under Resource Management, select Keys and Endpoint. You'll use the endpoint and either of the keys to call APIs.
  • Assign the Cognitive Services User role to your account. In the Azure portal, navigate to your Content Safety resource or Azure AI Services resource and select Access control (IAM) in the left navigation bar. Select + Add role assignment, choose the Cognitive Services User role, select the member you want to assign the role to, then review and assign. It might take a few minutes for the assignment to take effect.

Region Availability

| Region            | Turing JBC | Turing FATE | Prompt Shield | AdultV3 | FaceBlur |
| ----------------- | ---------- | ----------- | ------------- | ------- | -------- |
| Central US        |            |             |               |         |          |
| South Central US  |            |             |               |         |          |
| Switzerland North |            |             |               |         |          |
| West Europe       |            |             |               |         |          |
| Korea Central     |            |             |               |         |          |
| Japan East        |            |             |               |         |          |

Feel free to contact us if your business needs other regions to be available.

Authentication

Microsoft Entra ID (AAD token)

Step 1 - Make sure your service principal (SP) or managed identity (MI) is granted the Cognitive Services User role on the target AACS resource.

Step 2 - Get the access token. If you are using your own account for testing, you can get the token with the Azure CLI as shown below.

az account get-access-token --resource https://cognitiveservices.azure.com --query accessToken --output tsv

For more details, see Authenticate requests to Azure AI services.
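
For programmatic access, the token can also be acquired with the azure-identity Python package. The following is a minimal sketch; it assumes azure-identity is installed and that your environment can authenticate via DefaultAzureCredential:

from azure.identity import DefaultAzureCredential

# Acquire an AAD token scoped to Azure Cognitive Services.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

# Pass the token as a bearer credential when calling the API.
headers = {"Authorization": f"Bearer {token.token}"}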

RAI Model categories

This guide describes all of the RAI model categories that the service uses to flag content.

For Responsible AI team models, please refer to the official content safety external documentation.

| Model Name | Model Version | Model Description |
| --- | --- | --- |
| Turing JBC | v2.0.4, v2.0.5 | |
| Turing FATE | v1.8, v2.0 | |
| Prompt Shield | N/A | |
| AdultV3 | N/A | |
| FaceBlur | N/A | |
| Florence | FlorenceV7 | A multimodal model that takes text and image as input and decodes classified categories, supporting Adult (sexual or hardcore scenes), Racy (non-adult but sexy or suggestive), Gory (dead body, bloody or gruesome scenes), Violence (street fighting, war, bomb attacks), Hate (hatred or discrimination), Profanity (excrement), Suicide, and Celebrity (famous personalities). |
| VisionCLIP | N/A | A pure vision model that takes an image as input and outputs classified categories, supporting Adult, Racy, Gory, Violence, and Drugs (taking drugs, injecting drugs, or children smoking). |
| VisionCLIPEmbedding | N/A | A pure vision model that generates representation embeddings and classifies small categories, supporting Top5Politician (Putin, Obama, Hillary, Trump, Biden), Top50Celebrity (top 50 famous people), Swastika (Nazi-related symbols or people), MiddleFinger, Nipple (female nipple or bump), Posture (sexual posture), Toilet (someone excreting), ChildRacy (suggestive or sexy image of children), Genital (genital shape detection), Darkskin (adult model enhanced for people with dark skin), BodyPainting (adult model enhanced for body painting scenarios), GoryEnhance (gory model enhanced for bloody scenarios), Cannabis (cannabis leaves recognized as drugs), ChildSmoking (children smoking cigar/blunt), ChildAlcohol (children drinking), and SeeThrough (plastic dressing for seeable nudity). |
| Defensive Prompt Classifier | V23 | A text-based model using TulrV6 and GPT-4 as a teacher model, supporting Adult, Racy, Gory, Violence, Hate, Profanity, SelfHarm, Celebrity, Drug (hard drugs), Misinformation, War (ongoing wars), and Elections (election disinformation and deepfake detection). |

Analyze text content

  1. Replace <endpoint> with the endpoint URL associated with your resource.
  2. Replace <your_subscription_key> with one of the keys that come with your resource.
  3. Optionally, replace the "text" field in the body with the text you'd like to analyze.

The following fields must be included in the URL:

| Name | Required | Description | Type |
| --- | --- | --- | --- |
| API Version | Required | The API version to use. The current version is api-version=2024-03-10-preview. Example: <endpoint>/contentsafety/analyze?api-version=2024-03-10-preview | String |

Input

The parameters in the request body are defined in this table:

| Parameter | Type | Required/Optional | Description |
| --- | --- | --- | --- |
| messages | ARRAY | Required | The list of messages to be analyzed. |
| - source | ENUM | Required | The type of content. Supported values: [Prompt, Completion, System, Document]. |
| - content | OBJECT | Required | The content of the message. |
| - kind | ENUM | Required | The kind of content. Supported values: [Text, Image]. |
| - text | STRING | Optional | If kind is "text", this field must be set to a text value. |
| - image | OBJECT | Optional | If kind is "image", this field must contain a "base64" string. |
| raiModels | ARRAY | Required | The list of models used to analyze the messages. |
| - name | STRING | Required | The name of the model to be used. |
| - sources | ARRAY | Required | The source types of messages the model will analyze. |
| - version | STRING | Optional | The version of the model to be used. Defaults to "latest" if not provided. |

See the following sample request body:

{
    "messages": [
        {
            "source": "Prompt",
            "content": [
                {
                    "kind": "text",
                    "text": "I hate you."
                }
            ]
        }
    ],
    "raiModels": [
        {
            "name": "DefensivePrompt",
            "version": "latest",
            "sources": [
                "Prompt",
                "Completion"
            ]
        },
        {
            "name": "BingFate",
            "sources": [
                "Prompt",
                "Completion"
            ]
        },
        {
            "name": "BingJailbreak",
            "sources": [
                "Prompt",
                "Completion"
            ]
        },
        {
            "name": "TextMultiSeverity",
            "sources": [
                "Prompt",
                "Completion"
            ]
        },
        {
            "name": "PromptShield",
            "sources": [
                "Prompt",
                "Completion"
            ]
        },
        {
            "name": "ProtectedMaterial",
            "sources": [
                "Completion"
            ]
        },
        {
            "name": "TxtImgMultiSeverity",
            "sources": [
                "Prompt",
                "Completion"
            ]
        }
    ]
}
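
The request can be sent with any HTTP client. Below is a minimal Python sketch using the requests package and key-based authentication via the Ocp-Apim-Subscription-Key header; the endpoint and key are placeholders for your own resource, and the body is trimmed to a single model for brevity:

import requests

endpoint = "<endpoint>"  # replace with your resource endpoint
subscription_key = "<your_subscription_key>"  # replace with one of your keys

url = f"{endpoint}/contentsafety/analyze?api-version=2024-03-10-preview"
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/json",
}
body = {
    "messages": [
        {"source": "Prompt", "content": [{"kind": "text", "text": "I hate you."}]}
    ],
    "raiModels": [
        {"name": "DefensivePrompt", "version": "latest", "sources": ["Prompt", "Completion"]}
    ],
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json())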

Output

You should see the text moderation results displayed as JSON data in the console output. For example:

{
    "modelsAnalysis": [
        {
            "modelName": "DefensivePrompt",
            "version": "latest",
            "resultCode": "Ok",
            "resultCodeDetail": "",
            "modelOutput": [
                {
                    "output": {
                        "Adult": 6.759167E-05,
                        "Racy": 3.3318996E-05,
                        "Profanity": 0.8515625,
                        "Violence": 0.007297516,
                        "Normal": 0.14123535,
                        "HateSpeech": 0.06677246,
                        "SelfHarm": 0.043914795,
                        "War": 0.00019145012,
                        "Violence2": 0.0027256012,
                        "MildRacy": 0.022506714,
                        "Offensive": 0.9741211,
                        "Misinformation": 0.05114746,
                        "PublicFigure": 0.0002875328,
                        "TopPublicFigure": 5.3048134E-05,
                        "TailPublicFigure": 0.0090408325,
                        "TopNickName": 8.094311E-05,
                        "Gore": 0.00041651726,
                        "Drug": 0.0070381165,
                        "Hacking": 0.002653122,
                        "Religious": 0.00037050247,
                        "Election": 0.0016908646
                    }
                }
            ]
        },
        {
            "modelName": "BingFate",
            "version": "latest",
            "resultCode": "Ok",
            "resultCodeDetail": "",
            "modelOutput": [
                {
                    "output": {
                        "Inappropriate": 0.594257,
                        "Offensive": 0.94796216,
                        "SuicideHelp": 0.062445287
                    }
                }
            ]
        },
        {
            "modelName": "BingJailbreak",
            "version": "latest",
            "resultCode": "Ok",
            "resultCodeDetail": "",
            "modelOutput": [
                {
                    "output": {
                        "BingJailbreak": 0.07185127
                    }
                }
            ]
        },
        {
            "modelName": "TextMultiSeverity",
            "version": "latest",
            "resultCode": "Ok",
            "resultCodeDetail": "",
            "modelOutput": [
                {
                    "output": {
                        "MultiSeverity_HateSpeechMature": 0.000559291,
                        "MultiSeverity_ViolenceLow": 0.014842416,
                        "MultiSeverity_SelfHarmNotable": 0.018905446,
                        "MultiSeverity_HateSpeechLow": 0.9583042,
                        "MultiSeverity_SelfHarmDangerous": 5.8505316E-07,
                        "MultiSeverity_HateSpeechScore": 2,
                        "MultiSeverity_SexualOvert": 0.00092538487,
                        "MultiSeverity_SelfHarmLow": 0.023599293,
                        "MultiSeverity_SexualDangerous": 1.43975585E-05,
                        "MultiSeverity_SelfHarmScore": 0,
                        "MultiSeverity_HateSpeechQuestionable": 0.008251599,
                        "MultiSeverity_SexualQuestionable": 0.0011559009,
                        "MultiSeverity_SelfHarmQuestionable": 0.01450358,
                        "MultiSeverity_SexualNotable": 0.0066153924,
                        "MultiSeverity_SelfHarmOvert": 0.001455063,
                        "MultiSeverity_HateSpeechDangerous": 8.0594464E-07,
                        "MultiSeverity_SelfHarmExplicit": 6.402006E-05,
                        "MultiSeverity_ViolenceOvert": 0.0007793661,
                        "MultiSeverity_ViolenceExplicit": 1.7097478E-05,
                        "MultiSeverity_SexualExplicit": 0.00032503586,
                        "MultiSeverity_ViolenceQuestionable": 0.003692853,
                        "MultiSeverity_SexualMature": 0.00032503586,
                        "MultiSeverity_ViolenceDangerous": 8.939696E-06,
                        "MultiSeverity_SexualScore": 0,
                        "MultiSeverity_HateSpeechNotable": 0.91383046,
                        "MultiSeverity_HateSpeechExplicit": 0.000559291,
                        "MultiSeverity_ViolenceNotable": 0.013795364,
                        "MultiSeverity_SelfHarmMature": 0.0003859661,
                        "MultiSeverity_SexualLow": 0.036425155,
                        "MultiSeverity_HateSpeechOvert": 0.002673006,
                        "MultiSeverity_ViolenceScore": 0,
                        "MultiSeverity_ViolenceMature": 0.00053161324
                    }
                }
            ]
        },
        {
            "modelName": "PromptShield",
            "version": "latest",
            "resultCode": "Ok",
            "resultCodeDetail": "",
            "modelOutput": [
                {
                    "output": {
                        "PromptInjection_CrossDomain_Score": 0.00026738262,
                        "PromptInjection_Jailbreak_Score": 0.00013982208,
                        "PromptInjection_CrossDomain_Value": false,
                        "PromptInjection_Jailbreak_Value": false
                    }
                }
            ]
        },
        {
            "modelName": "TxtImgMultiSeverity",
            "version": "latest",
            "resultCode": "NoValidInput",
            "resultCodeDetail": "No valid input for this model.",
            "modelOutput": []
        },
        {
            "modelName": "ProtectedMaterial",
            "version": "latest",
            "resultCode": "NoValidInput",
            "resultCodeDetail": "No valid input for this model.",
            "modelOutput": []
        }
    ]
}
| Field Name | Type | Description |
| --- | --- | --- |
| modelsAnalysis | ARRAY | The list of per-model analysis results. |
| modelName | STRING | The name of the model (e.g., "DefensivePrompt"). |
| version | STRING | The version of the model used (default is "latest"). |
| resultCode | STRING | The result of the model analysis (e.g., "Ok", "NoValidInput"). |
| resultCodeDetail | STRING | Detailed description of the result code. |
| output_key | STRING | The key of an output category (e.g., "Adult", "HateSpeech"). |
| output_value | FLOAT | The score associated with the output key. |
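
The scores can be consumed by walking the modelsAnalysis array. The following is a minimal sketch, assuming result holds the parsed response JSON; the 0.5 threshold is a hypothetical policy choice, not a recommended value:

def flagged_categories(result, threshold=0.5):
    """Collect (model, category, score) tuples whose score meets the threshold."""
    flagged = []
    for analysis in result["modelsAnalysis"]:
        if analysis["resultCode"] != "Ok":
            continue  # e.g., "NoValidInput" entries carry an empty modelOutput
        for item in analysis["modelOutput"]:
            for category, score in item["output"].items():
                if isinstance(score, (int, float)) and score >= threshold:
                    flagged.append((analysis["modelName"], category, score))
    return flagged

# With the sample response above, this returns entries such as
# ("DefensivePrompt", "Profanity", 0.8515625) and ("BingFate", "Offensive", 0.94796216).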

Analyze image content

  1. Replace <endpoint> with the endpoint URL associated with your resource.

  2. Replace <your_subscription_key> with one of the keys that come with your resource.

  3. Optionally, replace the "image" field in the body with the base64-encoded image you'd like to analyze.

The following fields must be included in the URL:

| Name | Required | Description | Type |
| --- | --- | --- | --- |
| API Version | Required | The API version to use. The current version is api-version=2024-03-10-preview. Example: <endpoint>/contentsafety/analyze?api-version=2024-03-10-preview | String |

Input

The parameters in the request body are defined in this table:

| Parameter | Type | Required/Optional | Description |
| --- | --- | --- | --- |
| messages | ARRAY | Required | The list of messages to be analyzed. |
| - source | ENUM | Required | The type of content. Supported values: [Prompt, Completion, System, Document]. |
| - content | OBJECT | Required | The content of the message. |
| - kind | ENUM | Required | The kind of content. Supported values: [Text, Image]. |
| - image | OBJECT | Optional | If kind is "image", this field must contain a "base64" string or a "blobUrl". |
| raiModels | ARRAY | Required | The list of models used to analyze the messages. |
| - name | STRING | Required | The name of the model to be used. |
| - sources | ARRAY | Required | The source types of messages the model will analyze. |
| - version | STRING | Optional | The version of the model to be used. Defaults to "latest" if not provided. |

See the following sample request body:

{
    "messages": [
        {
            "source": "Prompt",
            "content": [
                {
                    "kind": "image",
                    "image": {
                        "base64": "<base64 image>"
                    }
                }
            ]
        }
    ],
    "raiModels": [
        {
            "name": "VisionClip",
            "sources": [
                "Prompt",
                "Completion"
            ]
        },
        {
            "name": "VisionClipEmbedding",
            "sources": [
                "Prompt",
                "Completion"
            ]
        },
        {
            "name": "ImageMultiSeverity",
            "sources": [
                "Prompt",
                "Completion"
            ]
        },
        {
            "name": "Florence",
            "sources": [
                "Prompt",
                "Completion"
            ]
        },
        {
            "name": "TxtImgMultiSeverity",
            "sources": [
                "Prompt",
                "Completion"
            ]
        }
    ]
}
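
As with text, the request can be sent with any HTTP client. A minimal Python sketch follows (requests package, key-based authentication via Ocp-Apim-Subscription-Key); the file path is a hypothetical local image, and the body is trimmed to a single model for brevity:

import base64
import requests

endpoint = "<endpoint>"  # replace with your resource endpoint
subscription_key = "<your_subscription_key>"  # replace with one of your keys

# Read a local image (hypothetical path) and base64-encode it for the request body.
with open("sample.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

url = f"{endpoint}/contentsafety/analyze?api-version=2024-03-10-preview"
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/json",
}
body = {
    "messages": [
        {"source": "Prompt", "content": [{"kind": "image", "image": {"base64": image_b64}}]}
    ],
    "raiModels": [
        {"name": "VisionClip", "sources": ["Prompt", "Completion"]}
    ],
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json())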

Output

You should see the image moderation results displayed as JSON data in the console output. For example:

{
    "modelsAnalysis": [
        {
            "modelName": "VisionClip",
            "version": "latest",
            "resultCode": "Ok",
            "resultCodeDetail": "",
            "modelOutput": [
                {
                    "id": 0,
                    "output": {
                        "Normal": 0.1706800013780594,
                        "Racy": 0.02290000021457672,
                        "Adult": 0.014589999802410603,
                        "Gory": 0.04382999986410141,
                        "Violence": 0.7305899858474731,
                        "Drugs": 0.017410000786185265
                    }
                }
            ]
        },
        {
            "modelName": "VisionClipEmbedding",
            "version": "latest",
            "resultCode": "Ok",
            "resultCodeDetail": "",
            "modelOutput": [
                {
                    "id": 0,
                    "output": {
                        "Top5Politician": 0.00015999999595806003,
                        "Swastika": 0,
                        "MiddleFinger": 0.0013800000306218863,
                        "Nipple": 0.029980000108480453,
                        "Posture": 0.4560999870300293,
                        "Toilet": 0.02930999919772148,
                        "ChildRacy": 0.14774000644683838,
                        "Genital": 0.05536000058054924,
                        "DarkSkin": 0.11100000143051147,
                        "BodyPainting": 0.16402000188827515,
                        "GoryEnhance": 0.04554999992251396,
                        "Top50Celebrity": 0.00019999999494757503,
                        "UnSafe": 0,
                        "Politician": 0.00015999999595806003,
                        "Celebrity": 0.00019999999494757503
                    }
                }
            ]
        },
        {
            "modelName": "ImageMultiSeverity",
            "version": "latest",
            "resultCode": "Ok",
            "resultCodeDetail": "",
            "modelOutput": [
                {
                    "id": 0,
                    "output": {
                        "Img_MultiSeverity_HateSpeechScore": 0,
                        "Img_MultiSeverity_SelfHarmScore": 0,
                        "Img_MultiSeverity_SexualScore": 6,
                        "Img_MultiSeverity_ViolenceScore": 2
                    }
                }
            ]
        },
        {
            "modelName": "Florence",
            "version": "latest",
            "resultCode": "Ok",
            "resultCodeDetail": "",
            "modelOutput": [
                {
                    "id": 0,
                    "output": {
                        "Normal": 0.0001,
                        "Racy": 0.0051,
                        "Adult": 0.0118,
                        "Gory": 0.9782,
                        "Violence": 0.0032,
                        "Celebrity": 0.0002,
                        "Hate": 0.0002,
                        "Profanity": 0.0002,
                        "Suicide": 0.001,
                        "VisNormal": 0.0001,
                        "VisRacy": 0.0051,
                        "VisAdult": 0.0118,
                        "VisGory": 0.9782,
                        "VisViolence": 0.0032,
                        "VisCelebrity": 0.0002,
                        "VisHate": 0.0002,
                        "VisProfanity": 0.0002,
                        "VisSuicide": 0.001
                    }
                }
            ]
        },
        {
            "modelName": "TxtImgMultiSeverity",
            "version": "latest",
            "resultCode": "Ok",
            "resultCodeDetail": "",
            "modelOutput": [
                {
                    "id": 0,
                    "output": {
                        "InterleaveTxtImg_MultiSeverity_HateSpeechScore": 0,
                        "InterleaveTxtImg_MultiSeverity_SelfHarmScore": 0,
                        "InterleaveTxtImg_MultiSeverity_SexualScore": 0,
                        "InterleaveTxtImg_MultiSeverity_ViolenceScore": 4
                    }
                }
            ]
        }
    ]
}
| Field Name | Type | Description |
| --- | --- | --- |
| modelsAnalysis | ARRAY | The list of per-model analysis results. |
| modelName | STRING | The name of the model (e.g., "VisionClip"). |
| version | STRING | The version of the model used (default is "latest"). |
| resultCode | STRING | The result of the model analysis (e.g., "Ok", "NoValidInput"). |
| resultCodeDetail | STRING | Detailed description of the result code. |
| output_key | STRING | The key of an output category (e.g., "Adult", "Racy"). |
| output_value | FLOAT | The score associated with the output key. |
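
Severity-style outputs (keys ending in "Score", such as Img_MultiSeverity_SexualScore) are integer severity levels rather than probabilities. A minimal sketch of a blocking decision follows, assuming result holds the parsed response JSON; the severity cutoff is a hypothetical policy choice:

def should_block(result, max_allowed_severity=3):
    """Return True if any severity score exceeds the allowed maximum."""
    for analysis in result["modelsAnalysis"]:
        for item in analysis["modelOutput"]:
            for key, value in item["output"].items():
                if key.endswith("Score") and value > max_allowed_severity:
                    return True
    return False

# With the sample response above, Img_MultiSeverity_SexualScore = 6 triggers a block.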