Standard Protocol Mode HTTP API Reference¶
This document provides the API reference for the Standard Protocol Mode, covering the HTTP request and response payload structures and how to use this integration mode effectively.
Prerequisites¶
Create an Azure AI Content Safety resource before you begin. For more information, see Create Azure AI Content Safety.
Region Availability¶
- Central US EUAP
- East US 2 EUAP
Contact us if your business requires availability in other regions.
Authentication¶
Microsoft Entra ID (AAD token)¶
Step 1 - Get an access token. If you are testing with your own account, you can obtain the token with the Azure CLI as shown below.
az account get-access-token --resource https://cognitiveservices.azure.com --query accessToken --output tsv
Step 2 - Assign the Cognitive Services User role to your account. In the Azure portal, navigate to your Content Safety resource or Azure AI Services resource and select Access control (IAM) in the left navigation bar. Select + Add role assignment, choose the Cognitive Services User role, select the member you want to assign the role to, then review and assign. The assignment can take a few minutes to take effect.
For more details, see Authenticate requests to Azure AI services.
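If you prefer to acquire the token in code rather than via the CLI, the following is a minimal sketch assuming the azure-identity Python package is installed and your identity already holds the Cognitive Services User role:
# Sketch: acquire a Microsoft Entra ID access token in Python.
# Assumes: pip install azure-identity, and a signed-in identity (Azure CLI,
# managed identity, etc.) that DefaultAzureCredential can discover.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
# Scope = the resource URI from Step 1 plus "/.default".
token = credential.get_token("https://cognitiveservices.azure.com/.default")
print(token.token)  # use as the Bearer value of the Authorization header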
Quickstart¶
Join the preview¶
Please file an IcM ticket with us that includes the Azure subscription ID of your Azure AI Content Safety or Azure AI Services resource.
HTTP Request¶
Attention

Currently, the /analyzeWithRaiPolicy HTTP API detects only the last message. To detect all messages, use the gRPC interface or send one request per message (see the sketch after the Python example below).
Example HTTP Request¶
Endpoint¶
POST /contentsafety/analyzeWithRaiPolicy?api-version=2024-12-15-preview
Request body¶
{
"raiPolicyName": "<your policy name>",
"messages": [
{
"role": "User",
"source": "Prompt",
"contents": [
{
"kind": "Text",
"text": "I'm very concerned about the homeless in our city. I know they may have their own reasons, but I still don't want to live in such a community or even see them on the street. Can you share your thoughts?"
},
{
"kind": "Image",
"imageBase64": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mP8/wcAAwAB/epH1FUAAAAASUVORK5CYII="
}
]
}
]
}
Sample request (cURL)¶
curl --location '<your endpoint>/contentsafety/analyzeWithRaiPolicy?api-version=2024-12-15-preview' \
--header 'Ocp-Apim-Subscription-Key: <your key>' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <your token>' \
--data '{
"raiPolicyName": "<your policy name>",
"messages": [
{
"role": "User",
"source": "Prompt",
"contents": [
{
"kind": "Text",
"text": "I'\''m very concerned about the homeless in our city. I know they may have their own reasons, but I still don'\''t want to live in such a community or even see them on the street. Can you share your thoughts?"
},
{
"kind": "Image",
"imageBase64": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mP8/wcAAwAB/epH1FUAAAAASUVORK5CYII="
}
]
}
]
}'
Sample request (Python)¶
import http.client
import json

# "<your endpoint>" is the host name only (no https:// scheme), e.g.
# "<your-resource>.cognitiveservices.azure.com"
conn = http.client.HTTPSConnection("<your endpoint>")
payload = json.dumps({
"raiPolicyName": "<your policy name>",
"messages": [
{
"role": "User",
"source": "Prompt",
"contents": [
{
"kind": "Text",
"text": "I'm very concerned about the homeless in our city. I know they may have their own reasons, but I still don't want to live in such a community or even see them on the street. Can you share your thoughts?"
},
{
"kind": "Image",
"imageBase64": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mP8/wcAAwAB/epH1FUAAAAASUVORK5CYII="
}
]
}
]
})
headers = {
'Ocp-Apim-Subscription-Key': '<your key>',
'Content-Type': 'application/json',
'Authorization': 'Bearer <your token>'
}
conn.request("POST", "/contentsafety/analyzeWithRaiPolicy?api-version=2024-12-15-preview", payload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
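Because the HTTP API currently analyzes only the last message (see the Attention note above), one workaround is to send one request per message. A minimal sketch, reusing conn and headers from the Python example above; all_messages stands in for the full list of message objects you want analyzed:
# Workaround sketch: send one request per message so every message is analyzed.
# Reuses conn and headers from the example above; all_messages is assumed to be
# the complete list of message objects.
for message in all_messages:
    payload = json.dumps({
        "raiPolicyName": "<your policy name>",
        "messages": [message]  # a single message, so it is also the "last" message
    })
    conn.request("POST", "/contentsafety/analyzeWithRaiPolicy?api-version=2024-12-15-preview", payload, headers)
    res = conn.getresponse()
    print(res.read().decode("utf-8"))  # read fully before issuing the next request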
API Request Structure¶
The HTTP request body must include a messages field containing the messages to analyze. Each message is identified by its index in the array and follows the structure described in the table below.
| Field | Type | Description |
|---|---|---|
| raiPolicyName | String | Specifies the policy name to apply during content analysis, allowing customers to define custom AI policies for filtering and compliance. |
| RaiPolicyKind | String | (optional) Specifies the policy kind to apply during content analysis. If omitted, the policy is looked up by name. Possible values: RaiPolicyInline, PredefinedRaiPolicy, LegacyRaiPolicy, CustomRaiPolicy. |
| parentPolicyName | String | (optional) When RaiPolicyKind is LegacyRaiPolicy or CustomRaiPolicy, you can specify a LegacyRaiPolicy here to combine it with the current policy. |
| messages | Array of Objects | The list of messages to analyze. |
| - role | String | The role of the entity creating the message. Possible values: All, Assistant, Function, System, Tool, User. |
| - source | String | The origin of the message. Possible values: Prompt, Completion. |
| - contents | Array of Objects | A list of content objects. The contents field supports multiple content types, such as text and images, enabling comprehensive analysis. |
| -- kind | String | The type of the content. Possible values: Text, Image. |
| -- text | String | The text content (for the Text kind). |
| -- imageBase64 | String | The base64 encoding of the image resource (for the Image kind). |
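For illustration, a request body combining the optional policy fields with a single text message might look like the sketch below. The field names and possible values follow the table above; the exact casing of RaiPolicyKind and the placeholder policy names are assumptions, so verify them against your resource:
{
    "raiPolicyName": "<your custom policy name>",
    "RaiPolicyKind": "CustomRaiPolicy",
    "parentPolicyName": "<your legacy policy name>",
    "messages": [
        {
            "role": "User",
            "source": "Prompt",
            "contents": [
                { "kind": "Text", "text": "<text to analyze>" }
            ]
        }
    ]
}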
HTTP Response¶
Response Structure¶
The HTTP response from the RAI platform contains the analysis results for each message and content in the request payload.
Example HTTP Response¶
{
"taskResults": [
{
"settingId": "JailbreakBlocklist_Prompt",
"resultCode": "Ok",
"resultCodeDetail": "",
"isBlockingCriteriaMet": false,
"kind": "Blocklist",
"blocklistTaskResult": {
"name": "JailbreakBlockList",
"isDetected": false,
"contentResultDetails": [
{
"messageIndex": 1,
"contentIndex": 0,
"isDetected": false,
"isBlockingCriteriaMet": false,
"details": {}
}
]
}
},
{
"settingId": "Hate_Prompt",
"resultCode": "Ok",
"resultCodeDetail": "",
"isBlockingCriteriaMet": false,
"kind": "HarmCategory",
"harmCategoryTaskResult": {
"harmCategory": "Hate",
"severity": 3,
"riskLevel": "Low",
"harmCategoryDetails": {},
"contentResultDetails": [
{
"messageIndex": 1,
"contentIndex": 0,
"severity": 3,
"riskLevel": "Low",
"isBlockingCriteriaMet": false,
"details": {}
}
]
}
}
]
}
Fields:
| Field | Type | Description |
|---|---|---|
| taskResults | Array of Objects | A list of task results. |
| - settingId | String | Identifier for the specific setting applied to the task. |
| - resultCode | String | Result code of the task, e.g., Ok. |
| - resultCodeDetail | String | Additional details about the result code, if available. |
| - isBlockingCriteriaMet | Boolean | Indicates whether the content has reached the threshold for harmful detection. |
| - kind | String | Type of task performed, e.g., HarmCategory, Blocklist. |
| - blocklistTaskResult | Object | Details about the "Blocklist" task result. |
| -- name | String | Name of the blocklist applied. |
| -- isDetected | Boolean | Indicates if harmful content was detected by the blocklist. |
| -- contentResultDetails | Array of Objects | Additional details related to the blocklist task result. |
| --- messageIndex | Integer | Index of the message in the request payload, starting from 0. |
| --- contentIndex | Integer | Index of the content within the message, starting from 0. |
| --- isDetected | Boolean | Indicates if harmful content was detected in the specific content. |
| --- isBlockingCriteriaMet | Boolean | Indicates whether the content has reached the threshold for harmful detection. |
| --- details | Object | Additional details about the specific content analysis, if available. |
| - harmCategoryTaskResult | Object | Details about the "HarmCategory" task result. |
| -- harmCategory | String | Type of detected harm, e.g., Hate. |
| -- severity | Integer | Severity level, e.g., 3. |
| -- riskLevel | String | Risk level associated with the content, e.g., Low. |
| -- harmCategoryDetails | Object | Additional details about the harm category analysis, if available. |
| -- contentResultDetails | Array of Objects | Additional details related to the harm category task result. |
| --- messageIndex | Integer | Index of the message in the request payload, starting from 0. |
| --- contentIndex | Integer | Index of the content within the message, starting from 0. |
| --- severity | Integer | Severity level of the specific content, consistent with the overall severity scale. |
| --- riskLevel | String | Risk level of the specific content, consistent with the overall risk level scale. |
| --- isBlockingCriteriaMet | Boolean | Indicates whether the content has reached the threshold for harmful detection. |
| --- details | Object | Additional details about the specific content analysis, if available. |
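To act on a response, a caller typically walks taskResults and checks isBlockingCriteriaMet at both the task level and the per-content level. A minimal Python sketch, assuming response_json is the parsed response body (for example, json.loads(data) from the Python request example):
# Sketch: report which settings met the blocking criteria, per task and per content.
# response_json is assumed to be the parsed HTTP response body.
for task in response_json["taskResults"]:
    if task["isBlockingCriteriaMet"]:
        print(f"{task['settingId']} ({task['kind']}) met the blocking criteria")
    # Per-content details live under the kind-specific result object.
    result = task.get("blocklistTaskResult") or task.get("harmCategoryTaskResult") or {}
    for content in result.get("contentResultDetails", []):
        if content["isBlockingCriteriaMet"]:
            print(f"  message {content['messageIndex']}, content {content['contentIndex']} met the blocking criteria")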