When you have multiple ad variants for a campaign, Brainsight helps you predict which variant will perform best visually. Currently, our benchmarked advertising templates cover Display Ads, Social Ads and (Digital) Out-of-Home. Brainsight lets you pretest your designs so you can optimize them before launching your campaign. The performance prediction is based on a combination of a benchmark (our database of 10,000+ competing ads from all over the world), a set of templates that simulate the in-situ ad context, and our attention prediction algorithms. These predictions have a 94% accuracy rate compared to live measurements (i.e. eye tracking).
The design templates are images of situations where your ad might appear: for social posts this is, for example, a LinkedIn feed; for display ads it is a set of landing pages; and for (Digital) Out-of-Home it is a busy shopping area or train station. Per category, dozens of situations have been selected, taking into account circumstances like dark/light mode for digital/online ads, or daylight versus nighttime for outdoor placements. See the example below:
Brainsight has fully automated the ad placement and resizing. This is how it works. First, all your uploaded ads are automatically placed in templates like the one above. Next, our engine does all the work:
A) It creates a heatmap that predicts viewers' attention on the ad, including the distractions from the design template.
B) It automatically marks your ad (drawing a box around it) and calculates how much of the viewers' attention within the template is drawn to your ad. In the example below, the ad draws 22% of the viewer's attention.
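Conceptually, this relative attention share is the fraction of the template's total predicted attention that falls inside the ad's bounding box. The sketch below illustrates that idea only; the function name, box format and toy heatmap are our own assumptions, not Brainsight's actual implementation.

```python
import numpy as np

def attention_share(heatmap: np.ndarray, box: tuple) -> float:
    """Fraction of the template's total predicted attention that
    falls inside the ad's bounding box (illustrative only).

    heatmap: 2-D array of predicted attention for the full template.
    box: (top, left, bottom, right) pixel coordinates of the ad.
    """
    top, left, bottom, right = box
    inside = heatmap[top:bottom, left:right].sum()
    total = heatmap.sum()
    return float(inside / total) if total > 0 else 0.0

# Toy 4x4 heatmap; the ad occupies the top-left 2x2 quadrant.
hm = np.array([
    [2.0, 2.0, 1.0, 1.0],
    [2.0, 2.0, 1.0, 1.0],
    [1.0, 1.0, 1.0, 1.0],
    [1.0, 1.0, 1.0, 1.0],
])
share = attention_share(hm, (0, 0, 2, 2))  # 8 / 20 = 0.4
```

In practice the heatmap comes from the attention prediction model and the box from the automatic ad marking; the division step is the same.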
But what does that number tell you? To give it weight, you need a baseline. This is where our benchmarks come in.
C) For each template and template category, we have pretested and analysed 10,000+ ads, calculating their relative attention scores. This results in a dataset with a distribution in which we can plot high- and low-scoring ads; see the graph below. Brainsight looks up your ad's score (in this example, 22%) and returns its percentile in the benchmark. In our example, 22% relative attention turns out to be better than 65% of the benchmark. To make that easier to read, we convert the final Attention score to a number between 0 and 100: when your ad scores 65, it is better than 65% of the benchmarked, competing ads, which in this case is a good score.
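The percentile step can be sketched in a few lines: rank the benchmark shares and count how many your ad beats. The function name and the 20-value toy benchmark below are illustrative assumptions, not the real 10,000+ dataset.

```python
from bisect import bisect_left

def attention_score(ad_share: float, benchmark_shares: list) -> int:
    """Convert a relative-attention share into a 0-100 score:
    the percentage of benchmarked ads it beats (illustrative)."""
    ranked = sorted(benchmark_shares)
    beaten = bisect_left(ranked, ad_share)  # ads with a strictly lower share
    return round(100 * beaten / len(ranked))

# Toy benchmark of 20 competing-ad shares; 13 of them sit below 0.22.
benchmark = [0.05, 0.07, 0.08, 0.09, 0.10, 0.11, 0.12, 0.13, 0.14, 0.15,
             0.17, 0.19, 0.21, 0.23, 0.25, 0.27, 0.30, 0.33, 0.36, 0.40]
score = attention_score(0.22, benchmark)  # 13/20 beaten -> 65
```

This mirrors the example in the article: a 22% relative attention share lands at the 65th percentile, giving an Attention score of 65.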
Choose one of the advertising categories and select the type of ad(s) whose impact you want to predict. In our example we'll pick "Display Ads impact prediction".
We have uploaded 4 ad variants for our example. The report automatically shows which ad will be most salient (attention-grabbing) once published. As explained above, an Attention score of 65 is good; above 80 it's excellent, meaning your ad is more salient than 80% of the benchmark. Congrats if you reach those scores!
The benchmark report also generates a Clarity score per ad. The Clarity score is explained in a different article, but in essence it is a score between 0 and 100 that tells you how clear (or cluttered) your ad is: does it offer clear focus points for viewers during the first 3-5 seconds of exposure? Keep in mind that a high Clarity score has no value when the Attention score of the ad is poor (a great-quality ad that is not seen is not a great ad). Ensure the two are balanced.
Therefore, look for the right combination. A helpful feature here is the Benchmark Chart, shown below. Look for the ads with both a good Attention and a good Clarity score. If you are still in the design phase, use the insights to pick one or two winning ads and optimize them before having them compete with each other in a second benchmark test.
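One simple way to operationalise "look for the right combination" is to shortlist only variants whose Attention and Clarity scores both clear a bar, ranked by the weaker of the two. The thresholds, function name and variant scores below are assumptions for illustration, not a Brainsight feature.

```python
def shortlist(ads: dict, min_attention: int = 65, min_clarity: int = 65) -> list:
    """Keep ads whose Attention AND Clarity both clear a threshold,
    ordered by the weaker of the two scores (most balanced first).

    ads maps a variant name to an (attention, clarity) pair, both 0-100.
    """
    keep = [(name, min(a, c)) for name, (a, c) in ads.items()
            if a >= min_attention and c >= min_clarity]
    return [name for name, _ in sorted(keep, key=lambda t: -t[1])]

variants = {
    "A": (82, 40),  # very salient, but cluttered
    "B": (70, 75),  # balanced
    "C": (55, 90),  # clear, but easily overlooked
    "D": (68, 68),
}
winners = shortlist(variants)  # -> ["B", "D"]
```

Variants A and C each excel on one axis but fail the other, so only B and D survive, which is exactly the imbalance the Benchmark Chart helps you spot visually.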