Image moderation now available
I'm excited to announce that you can now moderate images with the moderation API. Setting up image moderation works similarly to text moderation: you can adjust thresholds and disable any labels you don't care about when flagging content.
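To make the setup concrete, here's a minimal sketch of adjusting thresholds and disabling labels. The package name, client, and option fields are hypothetical placeholders for illustration, not the real SDK surface:

```ts
// Hypothetical sketch: the package name, client, and option fields
// are assumptions for illustration, not the real SDK surface.
import { ModerationClient } from "moderation-client"; // hypothetical package

const client = new ModerationClient({ apiKey: process.env.MODERATION_API_KEY });

// An image is flagged when a label's score exceeds its threshold;
// disabled labels are ignored entirely.
await client.updateProjectSettings({
  image: {
    thresholds: { nudity: 0.7, gore: 0.5 }, // raise or lower per label
    disabledLabels: ["smoking", "alcohol"], // labels you don't care about
  },
});
```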
We offer 9 different labels out of the box, all available through a single model (a sketch of the response shape follows the list):
- Nudity: Exposed male or female genitalia, female nipples, sexual acts.
- Suggestive: Partial nudity, kissing.
- Gore: Blood, wounds, death.
- Violence: Graphic violence, causing harm, weapons, self-harm.
- Weapon: Weapons either in use or displayed.
- Drugs: Drugs such as pills.
- Hate: Symbols related to Nazi ideology, terrorist groups, white supremacy, and more.
- Smoking: Smoking or smoking-related content.
- Alcohol: Alcohol or alcohol-related content.
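Since every label comes from the same model, a single response carries a score for each one. The shape below is an assumption for illustration (the field names are not confirmed), but it shows how you might work with the scores in TypeScript:

```ts
// Hypothetical response shape; the field names are assumptions.
type ImageLabel =
  | "nudity" | "suggestive" | "gore" | "violence" | "weapon"
  | "drugs" | "hate" | "smoking" | "alcohol";

interface ImageModerationResult {
  flagged: boolean; // true if any enabled label crossed its threshold
  labels: { label: ImageLabel; score: number }[]; // score in [0, 1]
}

// Collect the labels that exceed a given threshold.
function exceededLabels(
  result: ImageModerationResult,
  threshold = 0.5
): ImageLabel[] {
  return result.labels
    .filter((l) => l.score >= threshold)
    .map((l) => l.label);
}
```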
If you need any other labels, please let us know.
Support in the review queue
Image moderation also works seamlessly with the content review queue. You'll see a small preview of each image and can otherwise perform actions just as you do with text.
New endpoint
Image moderation is available through a new, dedicated endpoint.
You can find the API reference here:
Or use our TypeScript package from NPM here:
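To give a feel for the request flow, here's a minimal sketch using plain fetch. The endpoint URL, request body, and response fields are assumptions for illustration; consult the API reference above for the exact contract:

```ts
// Minimal sketch using plain fetch. The endpoint URL, request body,
// and response fields are assumptions; see the API reference for the
// exact contract.
const res = await fetch("https://api.example.com/v1/moderate/image", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.MODERATION_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ url: "https://example.com/uploads/photo.jpg" }),
});

const result = await res.json();
if (result.flagged) {
  console.log("Image flagged for:", result.labels);
}
```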