Analyze a video for inappropriate content using the Mux Robots API.
Mux samples thumbnails from the video and scores each one for sexual and violent content. You can configure the thresholds that determine whether content gets flagged. See the Moderate API reference for the full endpoint specification.
Create a moderate job:

```shell
curl https://api.mux.com/robots/v0/jobs/moderate \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "parameters": {
      "asset_id": "YOUR_ASSET_ID",
      "thresholds": {
        "sexual": 0.7,
        "violence": 0.8
      }
    }
  }' \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
```

| Parameter | Type | Description |
|---|---|---|
| asset_id | string | Required. The Mux asset ID of the video to moderate. |
| language_code | string | Language code for transcript analysis on audio-only assets. Defaults to en. |
| thresholds | object | Score thresholds that determine whether content is flagged. |
| thresholds.sexual | number | Score threshold (0.0-1.0) for sexual content. Defaults to 0.7. Lower the value to be more strict (e.g. 0.5 to flag borderline content). |
| thresholds.violence | number | Score threshold (0.0-1.0) for violent content. Defaults to 0.8. Lower the value to be more strict. |
| sampling_interval | integer | Interval in seconds between sampled thumbnails. Minimum 5. For example, 10 samples a frame every 10 seconds. Good when you want consistent coverage regardless of video length. |
| max_samples | integer | Maximum number of thumbnails to sample. Samples are distributed evenly across the video with the first and last frames pinned. For example, 20 on a 10-minute video samples roughly every 30 seconds. Good when you want predictable cost per job. |
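To see how max_samples spaces samples across a video, here is a minimal sketch of the distribution the table describes (even spacing with the first and last frames pinned); the exact rounding the API uses is an assumption:

```python
# Sketch of how max_samples appears to distribute sample times:
# evenly spaced, with the first and last frames pinned (per the docs).
# The API's exact rounding behavior is an assumption.
def sample_times(duration_s: float, max_samples: int) -> list[float]:
    if max_samples == 1:
        return [0.0]
    step = duration_s / (max_samples - 1)
    return [round(i * step, 2) for i in range(max_samples)]

times = sample_times(600, 20)  # 10-minute video, max_samples=20
print(times[0], times[-1])     # first and last frames pinned: 0.0 and 600.0
print(times[1] - times[0])     # ~31.6 s apart, i.e. "roughly every 30 seconds"
```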
When the job completes, the outputs object contains:
| Field | Type | Description |
|---|---|---|
| thumbnail_scores | array | Per-thumbnail moderation scores, each with sexual and violence fields (0.0-1.0). Also includes a time field (seconds) for video assets; absent for transcript moderation. |
| max_scores | object | Highest scores across all thumbnails, with sexual and violence fields. |
| exceeds_threshold | boolean | true if any category's max score exceeds its configured threshold. |
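The summary fields follow directly from the per-thumbnail scores. This sketch reproduces that relationship using the field names from the table (how the API computes them internally is an assumption, but it matches the documented definitions):

```python
# Derive max_scores and exceeds_threshold from thumbnail_scores,
# following the field definitions in the outputs table.
def summarize(thumbnail_scores: list[dict], thresholds: dict) -> tuple[dict, bool]:
    max_scores = {
        "sexual": max(s["sexual"] for s in thumbnail_scores),
        "violence": max(s["violence"] for s in thumbnail_scores),
    }
    # Flagged if ANY category's max score exceeds its threshold.
    exceeds = any(max_scores[k] > thresholds[k] for k in max_scores)
    return max_scores, exceeds

scores = [
    {"time": 0.0, "sexual": 0.01, "violence": 0.02},
    {"time": 5.0, "sexual": 0.03, "violence": 0.05},
]
max_scores, exceeds = summarize(scores, {"sexual": 0.7, "violence": 0.8})
print(max_scores, exceeds)  # {'sexual': 0.03, 'violence': 0.05} False
```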
```json
{
  "data": {
    "id": "job_def456",
    "workflow": "moderate",
    "status": "completed",
    "units_consumed": 1,
    "parameters": {
      "asset_id": "YOUR_ASSET_ID",
      "thresholds": {
        "sexual": 0.7,
        "violence": 0.8
      }
    },
    "outputs": {
      "thumbnail_scores": [
        { "time": 0.0, "sexual": 0.01, "violence": 0.02 },
        { "time": 5.0, "sexual": 0.03, "violence": 0.05 }
      ],
      "max_scores": {
        "sexual": 0.03,
        "violence": 0.05
      },
      "exceeds_threshold": false
    }
  }
}
```

Moderation works by sampling thumbnail frames from your video. You can control how many frames are analyzed with sampling_interval or max_samples. More samples give better coverage but increase processing time and cost.
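A quick back-of-the-envelope comparison of the two controls for a 10-minute video (the exact frame counts are an assumption; the API may round differently):

```python
# Rough comparison of sampling_interval vs max_samples for cost planning.
# Frame-count formulas are an assumption, not documented API behavior.
def frames_with_interval(duration_s: float, interval_s: int) -> int:
    # Assumes one frame at t=0, then one per interval: scales with length.
    return int(duration_s // interval_s) + 1

def frames_with_max_samples(max_samples: int) -> int:
    return max_samples  # fixed regardless of duration: predictable cost

print(frames_with_interval(600, 10))  # 61 frames for a 10-minute video
print(frames_with_max_samples(20))    # 20 frames, whatever the length
```

In short, sampling_interval spends more on long videos for uniform coverage, while max_samples caps the per-job cost.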
If content safety is critical for your platform, lower the thresholds and increase the sample density:
```json
{
  "parameters": {
    "asset_id": "YOUR_ASSET_ID",
    "thresholds": {
      "sexual": 0.3,
      "violence": 0.4
    },
    "max_samples": 50
  }
}
```

With lower thresholds, even mildly suggestive or mildly violent content will cause exceeds_threshold to return true, giving you a signal to flag the video for human review before publishing.
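A publish gate built on that signal might look like this sketch; the routing policy here is illustrative, not part of the Mux API:

```python
# Illustrative publish gate driven by the job's outputs object.
# The routing policy is an example, not Mux-prescribed behavior.
def route_asset(outputs: dict) -> str:
    if outputs["exceeds_threshold"]:
        return "human_review"  # hold for a moderator before publishing
    return "publish"

outputs = {"max_scores": {"sexual": 0.45, "violence": 0.12},
           "exceeds_threshold": True}  # e.g. 0.45 > strict threshold 0.3
print(route_asset(outputs))  # human_review
```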