
The Current Viewability Landscape in Programmatic

Industry press has reported that a significant share of online ad impressions are served but never have the opportunity to be seen. Digital media viewability is quickly becoming an important metric for judging campaign performance and the value of online advertising, since an ad that isn't seen has little value.

As this measurement gains more traction with agencies and advertisers, and the digital advertising industry moves towards transacting in viewable impressions, we must continue to ensure the quality and value of the media that we buy. Before choosing viewability as a primary campaign metric, consider the following points:

Fraudulent impressions are highly viewable

The viewable ad is becoming the new click: it is in high demand and easy for a bot to fake. An infected computer runs a hidden browser that visits fraudulent domains and interacts with ads. The browser is designed to render every ad and scroll through the page, so that each ad eventually enters the viewport and is measured as viewable.

When optimizing towards viewability, confirm that fraud is being closely monitored and blocked. To ensure that media partners are not optimizing towards viewable ad fraud, make a verifiable human action a key performance indicator of the campaign. For example, a registration form should only fire a conversion pixel after a visitor has completed a CAPTCHA.
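As one illustration of the idea above, here is a minimal sketch in Python of gating a conversion beacon behind a human check. All names here (`handle_registration`, `fire_conversion_pixel`, the token values) are hypothetical; the `verify_captcha` callable stands in for whatever CAPTCHA provider is actually in use.

```python
def fire_conversion_pixel(user_id: str) -> str:
    # Placeholder for the ad-server call that records the conversion.
    return f"conversion recorded for {user_id}"

def handle_registration(user_id: str, captcha_token: str, verify_captcha) -> str:
    """Only count the registration as a conversion after the CAPTCHA
    provider confirms a human completed the form."""
    if not verify_captcha(captcha_token):
        return "registration rejected: CAPTCHA failed"
    return fire_conversion_pixel(user_id)

# Example with a stubbed verifier (a real one would call the provider's API):
stub_verifier = lambda token: token == "valid-token"
print(handle_registration("u123", "valid-token", stub_verifier))
```

The point of the design is simply that the conversion event a campaign optimizes towards is unreachable without a human action, which a hidden bot browser cannot supply.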

Viewability performance should be measured as a function of cost, rather than a standalone percentage

Like other performance metrics, viewability performance should be a function of cost (the vCPM). Paying a lower media CPM for a lower viewability percentage in programmatic may be more economical than paying a higher CPM for a higher viewability percentage. Consider the following example:

– Media Partner A: 70% viewability at a $10 CPM (vCPM = $10.00 / 0.70 ≈ $14.29)

– Media Partner B: 30% viewability at a $3 CPM (vCPM = $3.00 / 0.30 = $10.00)

In the above example, if a media partner is evaluated on viewability rate alone, Media Partner A is the clear winner. However, if a media buyer evaluates the cost of acquiring each viewable impression, Media Partner B is more cost-effective. A buyer who spends more with Media Partner A, the higher-viewability partner, is essentially paying more per viewable impression. For every dollar spent, Media Partner B yields 43% more viewable impressions than Media Partner A.
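The arithmetic above can be checked in a few lines of Python. The vCPM formula is the standard one (media CPM divided by viewability rate); the partner figures are the ones from the example.

```python
def vcpm(cpm: float, viewability_rate: float) -> float:
    """Effective cost per 1,000 viewable impressions."""
    return cpm / viewability_rate

def viewable_per_dollar(cpm: float, viewability_rate: float) -> float:
    """Viewable impressions bought with one dollar of media spend."""
    return 1000 * viewability_rate / cpm

# Media Partner A: 70% viewability at a $10 CPM
# Media Partner B: 30% viewability at a $3 CPM
vcpm_a = vcpm(10.00, 0.70)   # ≈ $14.29 per 1,000 viewable impressions
vcpm_b = vcpm(3.00, 0.30)    # $10.00 per 1,000 viewable impressions

lift = viewable_per_dollar(3.00, 0.30) / viewable_per_dollar(10.00, 0.70) - 1
print(f"Partner B yields {lift:.0%} more viewable impressions per dollar")
# → Partner B yields 43% more viewable impressions per dollar
```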

The MRC standard for viewability doesn’t relate to brand lift

One of the key insights from a 2015 viewability study conducted by IPG Media Lab in partnership with Cadreon, Magna Global and Integral Ad Science is that display ads that are less than 50% in view (below the MRC standard) but displayed for two or more seconds show a 13% increase in ad recall (Graph 1). The study also showed that ads above the MRC standard positively impact ad recall, with longer time in view yielding a 16% increase in brand lift (Graph 2). The key driver in both cases is time in view.

[Graphs 1 and 2: ad recall and brand lift by time in view]

When determining campaign metrics, it’s important to consider that the MRC standard is not a perfect threshold for brand lift. Examining business goals along with viewability is key to achieving the best results.

Additional viewability considerations

Protecting against fraud, measuring the value of viewable ads relative to cost, and considering an ad’s effect on brand lift are key considerations when measuring viewability. It’s also critical to consider the inconsistency of viewability measurement across providers. To date, the MRC has accredited about 15 technologies that measure viewability, all of which can produce vastly different results when measuring the same impression. While up to 10% variation is considered standard and accepted by the IAB, the variance between two viewability measurement technologies can be up to 30%.

Additionally, viewability is not measurable across all publishers, ad types and browser types. Some highly reputable publishers, such as Facebook, Hulu and YouTube, are unmeasurable by any technology. Due to technical limitations with VAST ad delivery, only 55% of publishers are measurable for video ads. Further nuances exist in mobile, across both web and app environments.

Low measurability can lead to a distorted picture of performance. Evaluating two partners on viewable rate alone, without accounting for measurability differences, does not allow for a definitive understanding of performance.
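To make that distortion concrete, here is a small sketch with invented numbers (both partners and all figures below are hypothetical, not from the text): a partner whose impressions are only half measurable can report a higher viewable rate while delivering far fewer confirmed viewable impressions.

```python
def measured_viewable_rate(viewable: int, measured: int) -> float:
    """Viewable rate as typically reported: viewable / measured impressions."""
    return viewable / measured

def confirmed_viewable_share(viewable: int, served: int) -> float:
    """Share of all served impressions that were confirmed viewable."""
    return viewable / served

SERVED = 1_000_000  # impressions served by each hypothetical partner

# Partner X: low measurability, high reported viewable rate
x_measured, x_viewable = 500_000, 300_000
# Partner Y: high measurability, lower reported viewable rate
y_measured, y_viewable = 900_000, 495_000

print(f"X: {measured_viewable_rate(x_viewable, x_measured):.0%} reported, "
      f"{confirmed_viewable_share(x_viewable, SERVED):.1%} of served confirmed viewable")
print(f"Y: {measured_viewable_rate(y_viewable, y_measured):.0%} reported, "
      f"{confirmed_viewable_share(y_viewable, SERVED):.1%} of served confirmed viewable")
```

Partner X wins on reported viewable rate (60% vs. 55%), yet Partner Y delivers far more confirmed viewable impressions per impression served (49.5% vs. 30%), which is why measurability must be read alongside the rate.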

Optimizing towards higher viewability rates can also pull a campaign away from true ad engagement. Smaller ad sizes tend to be more viewable than larger ones, since a larger portion of the ad is likely to be in the viewport at any given time. Optimizing towards higher viewability may therefore push agencies towards smaller ad units and away from content-rich ad executions that are less viewable but may drive more brand engagement and brand lift.

In response to growing pressure to become more viewable, some publishers and ad networks have created highly viewable ad units that provide an undesirable ad experience for consumers. Glider units, for example, are auto-play video units that stay in the viewport as the website visitor scrolls through the page content. While these units are highly viewable and have high completion rates, most advertisers would agree that they do not want to associate their brands with such disruptive ad experiences. Measuring viewability to the exclusion of other performance metrics does not allow for a holistic picture of campaign performance.


As more advertisers use viewability as a yardstick to measure ad effectiveness, we must consider that this arena is still new and nuanced. Keep the following points in mind when measuring viewability:

  1. Viewability rate to the exclusion of all other metrics is not an indicator of campaign performance
  2. Optimizing to viewability alone invites fraud
  3. Viewability should be measured relative to cost (vCPM)
  4. The MRC viewability standard doesn’t relate to brand lift
  5. Measurability, media type, and optimization inconsistencies should be considered