Updates
Llama Guard 3 on Moderation API
Llama Guard 3, now on Moderation API, offers precise content moderation built on Llama-3.1. It's faster and more accurate than GPT-4, making it well suited to real-time use, and can be customized for nuanced moderation needs.
A changelog of the most important and interesting updates for Moderation API.
Context is crucial when handling content moderation. Something that seems innocent in one context may be hateful in another. You can already supply contextId and authorId with content, which helps you understand the context when reviewing items in the review queue. Now you can also enable
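As a rough sketch of the idea, the snippet below bundles content with its contextId and authorId before submission. The field names come from the post; the payload shape and helper function are assumptions for illustration, not the product's actual schema.

```python
import json

def build_moderation_payload(text, context_id, author_id):
    """Bundle content with its context so reviewers can see where
    and by whom it was posted. Hypothetical payload shape — only
    the contextId/authorId field names come from the changelog."""
    return {
        "value": text,
        "contextId": context_id,  # e.g. the thread or chat room ID
        "authorId": author_id,    # e.g. the posting user's ID
    }

payload = build_moderation_payload(
    "See you at the game!", "thread_42", "user_7"
)
body = json.dumps(payload)  # ready to send with your HTTP client
```

Sending the same identifiers with every item also lets the review queue group content by conversation or author.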
Update: since the creation of this post we've also added Llama Guard 3. Llama Guard 3 is now the recommended model for AI agents. Read about Llama Guard here. OpenAI have just released their latest model GPT-4o-mini. We're excited about the updated model and are already
I'm excited to announce the launch of the upgraded analytics dashboard that will provide deeper insights into user behaviour and content trends on your platform. With a privacy-first design, these analytics tools will allow you to track and understand how people are using and potentially abusing your platform,
You can now add custom label filters for review queues. This allows you to create queues like:

* Show items with the POSITIVE label to find positive user comments.
* Show items where the TOXICITY label has a score between 20% and 70% to find content where the AI is uncertain.
* Filter
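The score-range filter described above boils down to a simple predicate. This is a minimal sketch of that logic, assuming items carry a dict of label scores in the 0–1 range; the data shape is an assumption, not the dashboard's internal format.

```python
def matches_queue_filter(item, label, min_score=0.0, max_score=1.0):
    """Return True if the item's score for `label` falls inside
    [min_score, max_score] — the same idea as the queue filters
    above. Items without the label never match."""
    score = item.get("labels", {}).get(label)
    return score is not None and min_score <= score <= max_score

items = [
    {"id": 1, "labels": {"TOXICITY": 0.45}},
    {"id": 2, "labels": {"TOXICITY": 0.95}},
    {"id": 3, "labels": {"POSITIVE": 0.80}},
]

# "TOXICITY between 20% and 70%" — content the AI is uncertain about
uncertain = [i["id"] for i in items
             if matches_queue_filter(i, "TOXICITY", 0.20, 0.70)]
# → [1]
```

Leaving the bounds at their defaults turns the range filter into a plain "has this label" filter, covering the POSITIVE-label queue as well.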
I'm excited to announce that you can now moderate images with Moderation API. Setting up image moderation works similarly to text moderation. You can adjust thresholds, and disable labels that you do not care about when flagging content. We offer 9 different labels out of the box -
We are thrilled to kick off 2024 with a host of exciting new features, and we have many more in store for the year ahead.

Label Thresholds

In your moderation project, you now have the ability to adjust the sensitivity per label, providing fine-grained control over content flagging. Additionally, you
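Per-label thresholds with disabled labels amount to a small flagging rule: content is flagged if any enabled label's score meets its threshold. The sketch below illustrates that rule under assumed names (the threshold values, default of 0.5, and function are hypothetical, not the product's configuration).

```python
# Hypothetical per-label thresholds — the values are illustrative.
DEFAULT_THRESHOLDS = {"TOXICITY": 0.5, "PROFANITY": 0.5}

def is_flagged(scores, thresholds=DEFAULT_THRESHOLDS, disabled=()):
    """Flag content when any enabled label's score reaches its
    threshold. Labels listed in `disabled` are ignored entirely,
    mirroring the per-label controls described above."""
    return any(
        score >= thresholds.get(label, 0.5)  # assumed fallback
        for label, score in scores.items()
        if label not in disabled
    )
```

Raising a label's threshold makes flagging less sensitive for that label only; disabling it removes it from the decision altogether.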
We've just made 4 new classifier models available in your dashboards:

* Sexual model
* Discrimination model
* Self harm model
* Violence model
* New features:
  * Add your own options to actions. Useful if you need to specify why an action was taken, for example an item was removed because of "Spam", "Inappropriate", etc.
  * Select specific queues an action should show up in.
* Performance improvements: much better responsiveness and speed
We have been hard at work developing new features and enhancing the experience with Moderation API. Today, we are incredibly excited to announce:

1. A brand-new feature to create and train custom AI models 🛠
2. A new Sentiment Analysis model 🧠

🌟 Introducing Custom Models 🌟

Say hello to the era of
A new sentiment model just became available in your dashboards. In our evaluations the new model seems to surpass other solutions on the market at understanding underlying sentiment in more complex sentences. This is probably due to the underlying large language model with its remarkable contextual understanding. The model detects
We've released a new NSFW ("Not Suitable For Work") model for detecting NSFW or otherwise sensitive text. However, it's still in the experimental stage, so we recommend using it alongside your existing models. The model can detect and categorize UNSAFE or SENSITIVE content. It
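Using an experimental model "alongside your existing models" can mean treating it as a second opinion rather than the sole signal. The routing policy below is one possible way to do that; the UNSAFE/SENSITIVE labels come from the post, while the policy itself and the function are assumptions.

```python
def route_content(existing_flagged, nsfw_label):
    """Combine the verdict of existing models with the experimental
    NSFW model. Agreement blocks outright; disagreement goes to a
    human review queue — a cautious policy for an experimental model.
    This routing scheme is illustrative, not the product's behavior."""
    nsfw_flagged = nsfw_label in ("UNSAFE", "SENSITIVE")
    if existing_flagged and nsfw_flagged:
        return "block"          # both signals agree
    if existing_flagged or nsfw_flagged:
        return "review_queue"   # only one signal fired — let a human decide
    return "allow"
```

As confidence in the experimental model grows, the disagreement branch can be tightened toward automatic blocking.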