NOBL Blogger

The Ad Industry’s Solution for Toxic Content is… well, Toxic.

Updated: Feb 16, 2021

New technology is available for advertisers, but are they ready and willing to take a step forward?


Faced with increasingly polluted digital content, advertisers are attempting to protect themselves by using blocklists. This technique is intended to protect brands by ensuring that ads are not placed next to ‘toxic’ content, based on a list of keywords.


But blocklists are imprecise: they fail to sufficiently protect advertisers and create a variety of other significant problems.


Inherent deficiencies in blocklists

Blocklists target words that advertisers believe are dangerous. They operate under the assumption that the presence of a word such as “sex” indicates that an article contains salacious content, or that words such as “shooting,” “bomb,” or “dead” indicate that an article contains violence that would damage the brand.


This logic is flawed, however, because it fails to consider context. The word “dead,” which blocklist providers rank as the no. 1 term blocked by advertisers, according to adexchanger.com, poses little risk when the article is about “The Walking Dead” or “The Grateful Dead.” In fact, the word “dead” in contexts such as “dead heat,” “dead end,” “dead lift,” and so on, is innocuous.
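To see just how crude this matching is, consider a toy sketch of a context-blind keyword filter. (This is purely illustrative; the word list and function below are hypothetical, not any vendor’s actual implementation.)

    # Toy illustration of a context-blind keyword blocklist in Python.
    BLOCKLIST = {"dead", "shooting", "bomb", "sex"}

    def is_blocked(article_text):
        """Flag an article if any blocklisted word appears, regardless of context."""
        words = {w.strip('.,!?\'"').lower() for w in article_text.split()}
        return not BLOCKLIST.isdisjoint(words)

    print(is_blocked("The Grateful Dead announce a reunion tour"))     # True  (false positive)
    print(is_blocked("The race finished in a dead heat on Sunday"))    # True  (false positive)
    print(is_blocked("Officials report casualties after the attack"))  # False (false negative)

The filter cannot tell a concert announcement from a crime report; it only counts strings, which is exactly the problem.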


Even if it were effective to block articles based on words, the approach is unscalable because it is impossible to anticipate every word that may pose a danger. (For example, terrorists often use the word “bread” as code for “bomb.”)


Yet advertisers continue to use blocklists with a false sense of security, believing that they are protected. Ad networks do not monitor where ads are placed and therefore cannot notify their clients when blocklists fail. Most advertisers are unaware of their exposure and risk unless someone from the general public brings it to their attention (usually via social media, where the problem is aired for the world to see).


When their advertisements are discovered in unsavory and dubious places, organizations are finding themselves thrust into public damage control.

In September, the New York Times reported that the Los Angeles Police Department had started an inquiry after a recruitment ad appeared on Breitbart. The department said the screenshot of the ad, which circulated on Twitter, created “a negative juxtaposition to our core values.” Nobl reviewed its database of questionable content and found ads for Square and other mainstream brands on articles that are hyper-partisan and even libelous.

Advertisers who recognize this risk often compensate by creating even more extensive word lists. But adding more words only magnifies the imprecision of blocklists: they overzealously block even more content and shrink the inventory of pages available to advertisers.


This is the problem Fidelity Investments faced earlier this year, when its blocklist included words like “immigration,” “racism,” and “Trump.” The Wall Street Journal reported that it was difficult to place Fidelity’s ads on sites “because the list is so exhaustive and the terms appear in many articles.”


The power of association

Advertising enhances brands by creating a relationship between the advertiser and the content they sponsor. Recent research on digital advertising confirmed that placement next to high-quality content creates a so-called “halo effect” that includes improved recall, greater engagement, and more positive perceptions of the associated brand.


In short, when a legitimate brand is displayed next to legitimate news, the association benefits both. But the symbiotic relationship is disrupted if either the content or the ad is disreputable. And the damage can cut both ways.


This is why traditional publishers were careful about what advertising they accepted. An illegitimate ad can diminish trust in the publisher’s content.


And, of course, advertisers intuitively understand that placement next to low-quality content can contaminate a legitimate brand. But what may be less obvious is that a well-known brand presented next to fake news may lend legitimacy to the specious content and contribute to the deception of consumers.


Digging a deeper hole

Despite the issues and their consequences, advertisers continue to rely on blocklists, according to Lucinda Southern of Digiday.


The short-term effect is a significant reduction in inventory. One recent report stated that publishers have seen unwarranted blocks affecting anywhere from 30% to 90% of their inventory.


The continued use of blocklists will eventually influence how reporters write and which stories news organizations cover. Publishers will avoid words that advertisers avoid. They will either rely on euphemisms and indirect language or avoid the topics altogether, creating vast news deserts.


For example, how could publishers financially support coverage of the Supreme Court, which often hears controversial cases on abortion, gun laws, and the death penalty? Each of these topics contains words that would preclude the support of advertising revenue.

Over the long term, the continued use of blocklists will threaten democracy. Blocklists create a chilling effect on information that is critical for an “enlightened citizenry,” as Thomas Jefferson described it, and “for the proper functioning of a republic.”


Solutions

It’s fair to say that the sooner advertisers abandon blocklists and adopt more modern and effective methods, the better. Fortunately, there are alternatives to blocklists, built on 21st-century technologies, that protect advertisers’ brands from toxic content and make the online content ecosystem better for everyone.


For example, Nobl’s solution uses a combination of machine learning and natural language processing to evaluate context and quality. This approach is more precise and doesn’t create unintended consequences for publishers or consumers.


In fact, by using Nobl, advertisers are able to identify more suitable inventory across more sites while avoiding questionable content, even when that content is on a reputable site.


Nobl’s technology evaluates the text of an article and, based on the linguistic patterns and characteristics, is able to classify and score the toxicity of an article.
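Nobl’s model itself is proprietary, but the general technique can be sketched with open-source tools. The snippet below uses a publicly available toxicity classifier (unitary/toxic-bert, via the Hugging Face transformers library) to score whole passages in context rather than matching isolated words; it illustrates the approach, not Nobl’s actual code.

    # Illustrative context-aware toxicity scoring with an open-source model.
    # Nobl's production system is proprietary; this only sketches the idea.
    from transformers import pipeline

    scorer = pipeline("text-classification", model="unitary/toxic-bert")

    articles = [
        "The Grateful Dead announced a reunion tour this weekend.",
        "Those people are vermin and deserve whatever happens to them.",
    ]

    for text in articles:
        result = scorer(text)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
        print(f"{result['score']:.2f}  {text}")

The first article should score near zero despite containing “Dead,” while the second should score high despite containing no obvious blocklist keyword; that is precisely the distinction a word list cannot make.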


In October, Daily Mirror publisher Reach said that 40% of traffic across its news, sports, technology, and celebrity-related articles was actually brand-safe but wasn’t being monetized. To counter this, it developed Mantis, a tool that uses IBM Watson’s machine learning and natural language processing to identify articles that are safe but have been blocked. Publishers like The Telegraph, as well as agencies, are talking with Reach about how they can use Mantis.
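Reach has not published Mantis’s internals, but IBM’s public Watson SDK gives a sense of what such a pipeline might look like. In the hypothetical sketch below, the API key, the service URL region, and the unsafe-category list are all placeholders; only the SDK calls themselves are real.

    # Hypothetical Mantis-style safety check built on IBM Watson NLU.
    from ibm_watson import NaturalLanguageUnderstandingV1
    from ibm_watson.natural_language_understanding_v1 import Features, CategoriesOptions
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    nlu = NaturalLanguageUnderstandingV1(
        version="2021-08-01",
        authenticator=IAMAuthenticator("YOUR_API_KEY"),  # placeholder credential
    )
    nlu.set_service_url(
        "https://api.us-south.natural-language-understanding.watson.cloud.ibm.com"
    )

    # Illustrative deny set; a real tool would tune this against Watson's taxonomy.
    UNSAFE_CATEGORIES = {"/society/crime", "/news/disaster and accident"}

    def is_brand_safe(article_text):
        """Categorize an article and clear it if no category is on the deny set."""
        result = nlu.analyze(
            text=article_text,
            features=Features(categories=CategoriesOptions(limit=3)),
        ).get_result()
        labels = {c["label"] for c in result.get("categories", [])}
        return labels.isdisjoint(UNSAFE_CATEGORIES)

The key difference from a blocklist is that the classification applies to the whole article, so a sports story that happens to contain the word “shooting” can still be cleared.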

But publisher-side tools will only go so far if agencies and advertisers don’t adopt them. Against that backdrop, it’s only a matter of time before another brand-safety blunder forces agencies to jack up the brand-safety dials.


It’s time for the online publishing and advertising world to step up and do the right thing. Increase the inventory available to advertisers by abolishing blocklists. Stop supporting toxic content that slips past blocklists undetected. Ensure news coverage is monetizable and support the legitimate content providers struggling to stay in business. Tools like Mantis and Nobl can enable all of this if ad networks, advertisers, and publishers really want to make a difference.

