Content Moderation & Image Analysis
Deploy moderation pipelines you can rely on. Automatically classify and quarantine high-risk visual assets before they reach your platform.
Detection Capabilities
Adult Content & Nudity
Classify NSFW content into different levels of nudity and suggestiveness. Refine your filtering by using additional classes and context information.
Violence & Harm
Detect displays of physical assault and battery, physical harm, executions and other shocking violence-related imagery.
Hate & Offensive Gestures
Classify displays of hateful, discriminatory or offensive symbols, scenes and gestures into multiple subclasses.
Gore & Horrific Imagery
Detect images and videos containing gory, bloody or otherwise horrific imagery related to wounds, harm or death.
Weapons & Firearms
Classify firearms, knives and weapons into multiple subclasses from direct threats and self-harm to not-in-use.
Recreational & Medical Drugs
Detect different types of recreational drugs, whether smoked or injected, as well as medical drugs.
Gambling, Money & Banknotes
Detect scenes with displays of money and banknotes, as well as gambling activity, casino games, roulette, etc.
Alcohol & Tobacco
Detect scenes with alcoholic beverages or showing people drinking, as well as tobacco products and smoking.
People & Demographics
Detect people and faces. Determine if a scene contains a minor, even if the minor's face is hidden or invisible.
Media Attributes & Quality
Evaluate the quality of an image along with its main characteristics and attributes: type, blurriness, brightness, dominant colors.
Text & QR Codes
Detect embedded text in images. Moderate text and flag profanity, hate or personally identifiable information (PII).
Granular JSON Intelligence
Our moderation endpoints return structured JSON with per-class probability scores. Programmatically route assets based on precise confidence thresholds across dozens of overlapping classes.
- Sub-millisecond bounding box localization
- Compound risk scoring (e.g., Weapons + Minors)
- Immutable audit trails for compliance reporting
{
  "status": "success",
  "request": {
    "id": "req_8fA2pQzL9vM",
    "timestamp": 1712345678.12
  },
  "moderation": {
    "weapons": 0.982,
    "weapon_classes": {
      "firearm_in_hand": 0.915,
      "knife": 0.012
    },
    "gore": 0.003,
    "drugs": 0.001
  }
}
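A response shaped like the sample above can be routed with simple threshold logic. A minimal Python sketch, using the field names from the example; the threshold values and the helper name are illustrative assumptions, not part of the API:

```python
# Illustrative thresholds -- tune these against your own risk tolerance.
QUARANTINE = 0.90  # auto-quarantine above this confidence
REVIEW = 0.50      # send to human review above this confidence

def route_asset(response: dict) -> str:
    """Return 'quarantine', 'review' or 'approve' for one moderation response."""
    scores = response.get("moderation", {})
    # Flatten nested sub-class scores (e.g. weapon_classes) into one view.
    flat = {}
    for key, value in scores.items():
        if isinstance(value, dict):
            flat.update(value)
        else:
            flat[key] = value
    top = max(flat.values(), default=0.0)
    if top >= QUARANTINE:
        return "quarantine"
    if top >= REVIEW:
        return "review"
    return "approve"

sample = {
    "moderation": {
        "weapons": 0.982,
        "weapon_classes": {"firearm_in_hand": 0.915, "knife": 0.012},
        "gore": 0.003,
        "drugs": 0.001,
    }
}
print(route_asset(sample))  # quarantine: weapons score 0.982 exceeds 0.90
```

Keeping thresholds in one place makes it easy to tighten or relax them per class later without touching the routing logic.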
Enforce Platform Integrity
Integrate the moderation API instantly. Route problematic assets to human review or auto-quarantine based on your risk matrix.
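A risk matrix can also encode compound rules, where signals that are tolerable in isolation escalate when they co-occur (e.g. Weapons + Minors). A hedged sketch; the `minor` field name and the cut-off values are assumptions for illustration:

```python
def compound_risk(scores: dict) -> str:
    """Escalate when independently-tolerable signals co-occur (e.g. Weapons + Minors)."""
    weapons = scores.get("weapons", 0.0)
    minors = scores.get("minor", 0.0)  # assumed field name for minor-detection score
    if weapons >= 0.5 and minors >= 0.5:
        return "escalate"  # compound rule: co-occurrence outranks either signal alone
    if max(weapons, minors) >= 0.9:
        return "quarantine"
    return "approve"

print(compound_risk({"weapons": 0.6, "minor": 0.7}))  # escalate
```

The key design point is that the compound check runs before the single-class thresholds, so a moderate score in two sensitive classes is treated as more severe than a high score in one.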