
Measuring Success During a Video Consumption Sea Change

John Wanamaker famously said, “Half of the money I spend on advertising is wasted. The problem is, I don’t know which half.” In the 90-plus years since those words were uttered, the marketing industry has attempted to solve this problem through science and data. The digital ecosystem, with the deterministic data it affords, has taken incredible strides in identifying the half that works and the half that doesn’t; only now, however, is television and video inventory moving in the same direction.

Building accurate video attribution technology is a difficult endeavor, presenting significant challenges across multiple marketing channels. The primary issue with video is understanding who watched an ad when the medium provides no click-through continuity; linear TV is the classic case, because viewership data at scale is not yet readily available to the marketer. Designing algorithms that deliver digital-like results for these media is a core investment for high-performing marketing agencies, and it is why tech teams in the video space dedicate so much effort to the research and development of attribution technologies.

Attribution design that handles many levels of granularity is essential for campaign decision-making. It’s impossible to analyze the best station/channel mix when looking only at weekly-level lift for a campaign; likewise, campaign-level budget decisions shouldn’t be made from a model that runs its analysis minute by minute.
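
To make this concrete, here is a minimal sketch in Python with pandas that rolls a hypothetical airing-level attribution table up to the two granularities described above; every column name and figure is illustrative only.

```python
import pandas as pd

# Hypothetical airing-level attribution output: one row per TV airing, with the
# response (attributed web sessions) estimated for its post-airing window.
airings = pd.DataFrame({
    "air_time": pd.to_datetime([
        "2023-03-06 19:02", "2023-03-06 19:47", "2023-03-08 21:15", "2023-03-10 20:30",
    ]),
    "station": ["Station A", "Station A", "Station B", "Station B"],
    "impressions": [120_000, 95_000, 150_000, 80_000],
    "attributed_sessions": [310, 240, 420, 150],
})

# Station-level roll-up: the granularity for deciding the station/channel mix.
by_station = airings.groupby("station").agg(
    impressions=("impressions", "sum"),
    attributed_sessions=("attributed_sessions", "sum"),
)
by_station["sessions_per_1k_impressions"] = (
    1_000 * by_station["attributed_sessions"] / by_station["impressions"]
)

# Weekly, campaign-level roll-up: the granularity for budget decisions.
by_week = airings.resample("W", on="air_time")["attributed_sessions"].sum()

print(by_station)
print(by_week)
```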

For user-level video attribution, the most fundamental problem to solve is simulating click-through when it doesn’t exist. Any social media platform ad click will have a user and all their associated metadata attributed to it. When a consumer views a video ad and simply picks up their phone to call, that connection is lost entirely.

There are several layers to a good video-to-web attribution system design when first-party viewership data is unattainable. Firstly, we need to understand what existing web traffic looks like prior to introducing video media. This sets up a foundational baseline from which to work. Secondly, we need to understand how the introduction of video drove campaign-level lift.
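
As an illustration of the baseline step, the sketch below assumes a hypothetical minute-level traffic file (sessions_by_minute.csv with “minute” and “sessions” columns, and invented dates) and estimates expected traffic by day of week and minute of day from the pre-campaign period; observed minus expected during the flight then approximates campaign-level lift.

```python
import pandas as pd

# Hypothetical input: minute-level web sessions with columns "minute" (timestamp)
# and "sessions" (count), covering a pre-campaign window and the campaign flight.
traffic = (
    pd.read_csv("sessions_by_minute.csv", parse_dates=["minute"])
    .set_index("minute")
    .sort_index()
)
traffic["dow"] = traffic.index.dayofweek
traffic["minute_of_day"] = traffic.index.hour * 60 + traffic.index.minute

pre = traffic.loc[:"2023-02-28"]      # before any video media aired (illustrative date)
flight = traffic.loc["2023-03-01":]   # the campaign flight

# Baseline: expected sessions for each (day-of-week, minute-of-day) cell,
# estimated from the pre-campaign period only.
baseline = pre.groupby(["dow", "minute_of_day"])["sessions"].mean().rename("expected")

# Observed minus expected during the flight approximates campaign-level lift.
scored = flight.merge(baseline, left_on=["dow", "minute_of_day"], right_index=True, how="left")
lift = scored["sessions"] - scored["expected"]
print("Estimated campaign-level lift (sessions):", lift.sum())
```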

Testing for statistically significant increases in traffic is a good approach here. The system then needs to measure instantaneous response to media; for example, a spike in web traffic following a TV ad on a high-impression station. If we can isolate a spike in traffic, we can then identify the users who drove it.
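
One simple way to formalize that spike test, sketched below on invented numbers, is a one-sided z-test comparing the post-airing window against session counts from comparable non-airing windows that stand in for the baseline distribution.

```python
import numpy as np
from scipy import stats

def airing_lift(sessions_in_window, baseline_samples, alpha=0.05):
    """Test whether traffic in the minutes after an airing exceeds baseline.

    sessions_in_window: total sessions in the response window (e.g., the five
        minutes starting at the airing time).
    baseline_samples: session counts from comparable windows on non-airing
        days, used as the baseline distribution.
    """
    mu = np.mean(baseline_samples)
    sigma = np.std(baseline_samples, ddof=1)
    z = (sessions_in_window - mu) / sigma
    p_value = 1 - stats.norm.cdf(z)  # one-sided: is the spike unusually high?
    return {
        "excess_sessions": sessions_in_window - mu,
        "z": z,
        "p_value": p_value,
        "significant": p_value < alpha,
    }

# Illustrative numbers: 64 sessions in the five minutes after an airing, against
# comparable non-airing windows that averaged roughly 30 sessions.
print(airing_lift(64, baseline_samples=[28, 31, 35, 27, 30, 33, 29, 32]))
```

A Poisson test or a bootstrap over non-airing windows would serve the same purpose; the point is simply to separate genuine response from ordinary traffic noise.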

At the user level, things get more complicated. The main issue is inferring which users were baseline traffic and which users drove the spike following the ad. This is where machine learning comes in. A simple approach might be to prioritize mobile traffic over all others, but what if a campaign’s digital ads are targeting mobile specifically?

Machine learning allows the ingestion of perhaps hundreds of attributes about a user to produce an outcome, such as the probability that a given visit was driven by an ad, that would otherwise be far too complex to calculate. The advent of machine learning in forms accessible to small businesses and agencies has been a critical boost for attribution tech.
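
A minimal sketch of that idea follows, assuming a hypothetical session-level table (scored_sessions.csv) with illustrative column names: a classifier is trained to separate sessions that landed in post-airing response windows from ordinary baseline sessions, and its predicted probability then scores how ad-driven each user within a spike looks.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical session-level table. Each row is a web session with user
# attributes (device, referrer, geo, time of day, ...) and a 0/1 flag for
# whether the session landed inside a post-airing response window.
sessions = pd.read_csv("scored_sessions.csv")
features = pd.get_dummies(
    sessions[["device_type", "referrer_type", "region", "hour_of_day", "new_visitor"]]
)
in_window = sessions["in_response_window"]

X_train, X_test, y_train, y_test = train_test_split(
    features, in_window, test_size=0.2, random_state=42
)

# The model learns what distinguishes response-window sessions from ordinary
# baseline sessions across many attributes at once.
model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# For sessions inside a spike, the predicted probability becomes a score for
# how likely each individual user was driven there by the ad rather than by
# baseline behavior.
sessions["ad_driven_score"] = model.predict_proba(features)[:, 1]
```

In practice the attribute set would be far richer (the hundreds of attributes mentioned above), but the structure of the problem is the same.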

Interpreting results from these systems is made more difficult by multi-touch marketing dynamics. Not only do most marketing campaigns run on multiple distinct platforms, but in the video space the same ad may run across linear TV, connected TV (CTV), and video-on-demand (VOD). Video attribution cannot exist in a single-platform silo; it must also incorporate the feedback mechanisms between these platforms. Furthermore, balance is critical: it’s altogether too easy to micromanage based on the overwhelming amount of data these platforms generate.

It is essential for marketers and agencies to have the intelligence and means to use advanced datasets to make attribution decisions while, at the same time, putting a scientific process in place to determine success when deterministic datasets are not available. The net result is greater insight into which placements are in fact working and which are not, even in the absence of deterministic data, something Mr. Wanamaker would certainly have appreciated.

Anthony Baum is head of data science and analytics at Chief Media. He can be reached at (518) 894-8116 or via email at [email protected].