The Watson ML system and Mantis Brand Safety Engine work together to determine the Brand safety rating for an article, using a set of customisable Brand Safety rulesets.
When an article is analysed, a variety of semantic data is extracted from the content and correlated against thresholds defined in the rulesets.
If any individual element exceeds the Brand Safety threshold in the ruleset, this automatically triggers a “RED” rating (Content is NOT brand safe).
If none of the extracted data is matched in the ruleset, this automatically triggers a “GREEN” rating (Content IS brand safe).
Finally, if any of the extracted data is matched in the ruleset, but at less than the configured Brand Safety threshold, the article is given an “AMBER” rating (potentially brand unsafe). How this rating is used depends on publisher and advertiser preferences. Ideally the ruleset would be tuned for a particular purpose, and the Amber rating can then be used to balance very conservative brand safety settings against availability of inventory for campaigns, while still ensuring that all unsafe content can be clearly identified.
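The three-way decision above can be sketched as follows. This is a minimal illustration, not the actual Watson ML or Mantis implementation; the function and parameter names (`rate_article`, `extracted_scores`, `ruleset_thresholds`) are assumptions made for the example.

```python
from enum import Enum

class Rating(Enum):
    RED = "RED"      # content is NOT brand safe
    AMBER = "AMBER"  # content is potentially brand unsafe
    GREEN = "GREEN"  # content IS brand safe

def rate_article(extracted_scores: dict[str, float],
                 ruleset_thresholds: dict[str, float]) -> Rating:
    """Illustrative RED/AMBER/GREEN decision (names are hypothetical).

    extracted_scores: semantic element -> score from content analysis
    ruleset_thresholds: element -> Brand Safety threshold from the ruleset
    """
    # Keep only the extracted elements that appear in the ruleset.
    matched = {k: v for k, v in extracted_scores.items()
               if k in ruleset_thresholds}
    # Any element exceeding its threshold forces RED.
    if any(score > ruleset_thresholds[k] for k, score in matched.items()):
        return Rating.RED
    # Nothing in the ruleset matched at all: GREEN.
    if not matched:
        return Rating.GREEN
    # Matched, but every score is at or below its threshold: AMBER.
    return Rating.AMBER
```

For instance, with a ruleset containing only `{"violence": 0.8}`, an article scoring `{"violence": 0.9}` would rate RED, one scoring `{"violence": 0.5}` would rate AMBER, and one scoring `{"sports": 0.5}` would rate GREEN.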