

Silverpush Launches AI-Powered Brand Suitability Platform – Mirrors Safe

PUBLISH DATE: 20 May 2020

Mirrors Safe uses computer vision to provide unequaled brand suitability for in-video ad placement.

Singapore, 20 May 2020: Silverpush today announced the launch of its new AI-powered brand suitability platform, Mirrors Safe. Silverpush is well known for its AI-powered contextual video advertising and real-time moment-marketing products, which enable brands to achieve unprecedented reach and user engagement.

Brand safety poses a serious risk to brands. Research shows that 80% of customers will stop buying, or buy less, from brands whose ads are placed alongside harmful or offensive content, and 70% of customers hold the brand or its agency responsible for a hurtful ad placement.

By using computer vision to identify key contexts in video, Mirrors Safe overcomes the limitations of conventional brand safety methods such as keyword-based blacklists and whitelisted channels. It accurately identifies key contexts in videos, such as celebrities, objects, brands, emotions, scenes, and activities, and filters out harmful content across a broad range of brand-unsafe categories, including terrorism, violence, nudity, hate speech, and smoking.
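To illustrate why context-based filtering outperforms keyword blacklists, consider the sketch below. The category names, blacklisted keywords, and detected contexts are all hypothetical examples, not Silverpush's actual data or implementation; the point is only that a keyword match on a title can block safe content that a context check would allow.

```python
# Hypothetical sketch: keyword blacklisting vs. context-based filtering.
# All names and data here are illustrative assumptions.

UNSAFE_CATEGORIES = {"terrorism", "violence", "nudity", "hate_speech", "smoking"}

KEYWORD_BLACKLIST = {"shoot", "attack", "smoke"}

def keyword_filter(title: str) -> bool:
    """Block the video if any blacklisted keyword appears in its title."""
    words = title.lower().split()
    return any(kw in words for kw in KEYWORD_BLACKLIST)

def context_filter(detected_contexts: set[str]) -> bool:
    """Block the video only if a detected visual context is actually unsafe."""
    return bool(detected_contexts & UNSAFE_CATEGORIES)

# A basketball tutorial: the word "shoot" triggers the keyword blacklist,
# but the visual contexts are harmless, so the context filter allows it.
title = "How to shoot a three-pointer"
contexts = {"sports", "basketball", "crowd"}
print(keyword_filter(title))    # True  (over-blocked by keywords)
print(context_filter(contexts)) # False (correctly allowed)
```

The same mechanism works in reverse: a video with an innocuous title but violent footage passes the keyword filter yet is caught by the context check.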

Mirrors Safe uses an advanced algorithm to calculate a comprehensive brand suitability score, which measures the safety and suitability of content, pages, and channels. The score takes into account five parameters:

  • Engagement: likes, dislikes, and participation that the content generates
  • Safety: exclusion through identification of key contexts, on-screen text, and audio sentiment analysis
  • Influence: organic influence that the channel/page/content creates
  • Relevance: how relevant the content is relative to its peer channel/page category
  • Momentum: consistency with which the channel/page maintains or grows its engagement
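The press release does not disclose how the five parameters are combined. As a purely illustrative sketch, one common approach is a weighted average of normalized parameter values; the weights, the 0-to-1 normalization, and the formula below are all assumptions, not Silverpush's published method.

```python
# Hypothetical sketch of a composite brand suitability score.
# The five parameter names come from the announcement; the weights,
# 0-1 normalization, and weighted-average formula are assumptions.

WEIGHTS = {
    "engagement": 0.2,
    "safety":     0.3,
    "influence":  0.15,
    "relevance":  0.2,
    "momentum":   0.15,
}

def suitability_score(params: dict[str, float]) -> float:
    """Weighted average of the five parameters, each normalized to [0, 1]."""
    if set(params) != set(WEIGHTS):
        raise ValueError("expected exactly the five parameters")
    for name, value in params.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be normalized to [0, 1]")
    return sum(WEIGHTS[name] * params[name] for name in WEIGHTS)

score = suitability_score({
    "engagement": 0.8, "safety": 1.0, "influence": 0.6,
    "relevance": 0.9, "momentum": 0.7,
})
print(round(score, 3))  # 0.835
```

Weighting safety highest reflects the product's stated emphasis, but any real scoring model would tune these weights empirically.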

Silverpush’s CRO, Kartik Mehta, said: “What sets Mirrors Safe apart is its ability to custom-define the scope of harmful contexts that are unique to every brand, helping brands move beyond mere brand safety to a truly brand-suitable environment. Existing keyword- and natural language processing (NLP)-based blanket exclusion technologies are limited here, as they often fail to understand the complex undertones and the various contexts in which words can be used.”

Silverpush used Mirrors Safe to analyze about 15 million videos across the largest video hosting and sharing platforms in the South East Asia region. The analysis found 8% to 9% of the video content to be brand unsafe, i.e., roughly 1 in 10 videos contains some type of brand-damaging content.

Silverpush also compared traditional brand safety measures with Mirrors Safe in identifying nudity and adult content in videos. The result was striking: Mirrors Safe identified 300% more unsafe videos than conventional brand safety methods.

This finding brings to light the inefficacy of traditional brand safety measures and the potential harm they can do to a brand’s image. The use of such measures has led to serious brand safety issues for some of the biggest video platforms.

“Mirrors Safe also addresses one of the most pressing brand safety challenges: content over-blocking, a result of the blanket exclusion measures offered today. Over-blocking significantly limits campaign performance and often forces marketers to switch off controls in favor of reach. Mirrors Safe’s key-context technology prevents over-blocking and filters only videos that actually feature unsafe contexts, ensuring brand safety without hampering monetization and performance,” Mehta added.

Visit www.silverpush.co/mirrors-safe/ to learn more about Mirrors Safe.