Your brand is your most important asset. Like any reputation, it takes years to build yet only seconds to ruin. Hosts and publishers are responsible for ensuring they’re not sharing content that constitutes hate speech or encourages violence towards a person or group. Advertisers, similarly, must ensure their brand isn’t associated with content that clashes with its image. A tool that can detect sensitive topics is essential to protecting your brand’s or publication’s reputation.
This week (June 2022), Google lost a defamation case and was ordered to pay damages for content it hosted (AP Article Link, ABC Article Link). As with the Joe Rogan and Spotify controversy in February 2022, these instances of reputational damage are becoming more frequent and dragging more and more brands down with them.
As with cybersecurity, organisations can no longer take risks and hope to clean up effectively once the damage is done. You are obliged to be proactive and act on sensitive topics now.
Several initiatives are underway to address the complexity around sensitive topic detection, brand safety and brand suitability for contextual ads. One example is the Interactive Advertising Bureau’s (IAB) taxonomy, which has included sensitive topic categories since v2.2 (IAB v2.2 Taxonomy Announcement Link). These include:
- Adult & Explicit Sexual Content
- Arms & Ammunition
- Crime & Harmful Acts to Individuals and Society, and Human Right Violations
- Death, Injury, or Military Conflict
- Online Piracy
- Hate Speech & Acts of Aggression
- Obscenity and Profanity
- Illegal Drugs/Tobacco/E-Cigarettes/Vaping/Alcohol
- Spam or Harmful Content
- Terrorism
- Sensitive Social Issues
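For teams wiring this into their own tooling, here is a minimal sketch of how the sensitive categories above might be represented and used to screen an episode. The category names mirror the list above; the `Episode` structure, the per-category confidence scores and the 0.5 threshold are illustrative assumptions rather than part of the IAB specification or any particular vendor’s API.

```python
# Illustrative sketch only: the category names mirror the IAB v2.2 sensitive topics
# listed above, but the Episode structure, scores and threshold are hypothetical.
from dataclasses import dataclass, field

SENSITIVE_CATEGORIES = {
    "Adult & Explicit Sexual Content",
    "Arms & Ammunition",
    "Crime & Harmful Acts to Individuals and Society, and Human Right Violations",
    "Death, Injury, or Military Conflict",
    "Online Piracy",
    "Hate Speech & Acts of Aggression",
    "Obscenity and Profanity",
    "Illegal Drugs/Tobacco/E-Cigarettes/Vaping/Alcohol",
    "Spam or Harmful Content",
    "Terrorism",
    "Sensitive Social Issues",
}

@dataclass
class Episode:
    title: str
    # Category name -> confidence score (0.0-1.0) from whatever classifier you use.
    topic_scores: dict = field(default_factory=dict)

def flagged_categories(episode: Episode, threshold: float = 0.5) -> list:
    """Return the sensitive categories this episode exceeds the threshold for."""
    return sorted(
        cat for cat, score in episode.topic_scores.items()
        if cat in SENSITIVE_CATEGORIES and score >= threshold
    )

def is_brand_suitable(episode: Episode, avoid: set, threshold: float = 0.5) -> bool:
    """True if none of the advertiser's avoided categories are flagged."""
    return not (set(flagged_categories(episode, threshold)) & avoid)

# Example: an advertiser that avoids two of the categories.
episode = Episode("Ep. 42", {"Online Piracy": 0.82, "Obscenity and Profanity": 0.12})
avoid = {"Online Piracy", "Terrorism"}
print(flagged_categories(episode))        # ['Online Piracy']
print(is_brand_suitable(episode, avoid))  # False
```

In practice, the scores would come from whatever classification service you use, and each advertiser would supply its own avoid list and suitability thresholds.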
The definitions of what is included in or excluded from each of these categories are unclear, since it is not just how the words are used, but which words are used, that determines whether content is flagged.
These definitions also vary by geography and subject matter. Nor is it as simple as signing up for a text categorisation service: few deal well with spoken-word content, most don’t detail their methodology, and others are limited to IAB taxonomies prior to v2.2, which lack the sensitive topic categories.
As an advertiser, you need to be able to vet your target content for brand safety and suitability on an episode-by-episode basis to ensure a good contextual fit. This is made more difficult because many dynamic ad insertion providers don’t yet support a standard for sensitive topic flagging, let you specify which topics or areas you want to avoid, or report on the subject matter surrounding your ad impressions.
As a content producer, you want your content correctly categorised so it appeals to the largest possible group of advertisers and listeners. At the same time, you don’t want ads inserted that aren’t in line with your views.
There is still much work to be done on sensitive topic detection in the fast-growing podcast industry (Sounds Profitable Episode on Brand Suitability).
Sonnant uses machine learning to understand your spoken-word content. It uses context to extract and categorise topics more accurately, identifying and highlighting sensitive topics within minutes, so that if there’s something in the content that might be sensitive, you’ll be aware of it before your audience is:
You can see how your content is categorised:
And drill down on the details:
Even jump to that section of the content:
Sonnant integrates with Omny Studio and other distribution platforms, automatically sending through intelligently created, contextual and categorised ad markers.
Click here to upload 60 minutes of content for free and experience how Sonnant’s AI can detect sensitive content. Hear how one of Australia’s biggest radio stations and podcast producers uses Sonnant to detect sensitive content.
Click here to see how large-scale companies use Sonnant to detect sensitive content.