Realeyes is on a mission to improve consumer data quality. We believe better data will fix the flawed decisions driven by the 40%+ of fake data that has historically been too difficult to identify in consumer insights market research, an issue affecting nearly $1 trillion in corporate business decisions.
That’s why we signed the Consumer Data Quality Pledge and why we created Verify, which combines cutting-edge AI technology with lightweight facial verification to fight bots and tackle user fraud dramatically better than CAPTCHA and other solutions.
Verify keeps collected consumer data untainted by fake engagement from bots, click farms, and professional respondents masquerading as other personas, providing advertisers with an accurate, high-quality sample every time.
We are constantly exploring expanded use cases for Verify and recently demonstrated some impressive results combining it with Realeyes’ PreView ad testing solution. PreView provides AI-driven insights into attention and emotional engagement that guide creative adjustments during the production process.
Our hypothesis was that when sample quality is low, any business decision based on the results of the market research will be wrong.
We tested four ads from a home furnishing provider and three from a financial services firm to identify which ads scored highest in brand favorability (how the person would describe their overall attitude toward the brand) and usage intent (how likely the person was to make a purchase).
Before viewing the ads, respondents went through Verify’s facial verification, which differentiates high-quality respondents from fraud such as bots, duplicate viewers, and misrepresented demographic information.
Verify is designed to proactively filter out fraud in real time, before it reaches the survey environment, but for this test we let the fraudulent traffic through to see how it would affect the PreView results. We then compared the survey results from the full sample against those from only the respondents Verify confirmed to be unique, quality users fitting the target demographics.
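To make that comparison concrete, here is a minimal sketch of the analysis in Python. The record structure, field names, and scores are hypothetical illustrations, not the study’s actual data; it simply scores each ad on the full sample and again on the verified-only subset, as described above.

```python
# Hypothetical sketch: score each ad on the full sample, then again on
# only the respondents Verify flagged as unique, in-demographic users.
from statistics import mean

responses = [
    # One record per respondent; fields and values are illustrative.
    {"ad": "A", "favorability": 6.0, "verified": False},
    {"ad": "A", "favorability": 3.5, "verified": True},
    {"ad": "B", "favorability": 4.0, "verified": True},
]

def ad_score(records, ad, verified_only=False):
    """Mean favorability for one ad, optionally restricted to verified users."""
    scores = [r["favorability"] for r in records
              if r["ad"] == ad and (r["verified"] or not verified_only)]
    return mean(scores) if scores else None

for ad in sorted({r["ad"] for r in responses}):
    print(ad, ad_score(responses, ad), ad_score(responses, ad, verified_only=True))
```

The key design point is that the full-sample and verified-only rankings come from the same respondents, so any difference between them is attributable purely to the fraudulent traffic.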
The results clearly demonstrated that panel quality does affect the insights and outcomes of consumer insights market research: when the data was filtered to quality users (excluding bots, duplicates, and demographic fraudsters), the brands would have chosen a different ad to run.
For instance, fraudulent or unacceptable-quality users gave one ad a brand favorability score of +6.1%, whereas quality users scored it +3.2%, blending to an overall score of +4.3%. With bots and fraudulent users included, the company would have selected this as its top-performing ad. With those false results removed, it was actually the third-worst performing ad out of the four.
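As a sanity check, if the overall score is a simple sample-size-weighted average of the two cohorts (an assumption on our part, not something stated in the study), the quoted figures imply what share of the sample was fraudulent:

```python
# Back-of-the-envelope check, assuming the overall score is a
# sample-size-weighted blend of the two cohorts (our assumption).
fraud_score = 6.1    # brand favorability among fraudulent/low-quality users (%)
quality_score = 3.2  # brand favorability among verified quality users (%)
overall_score = 4.3  # blended score reported for the full sample (%)

# Solve overall = w * fraud + (1 - w) * quality for w, the fraud share.
fraud_share = (overall_score - quality_score) / (fraud_score - quality_score)
print(f"Implied fraudulent share of sample: {fraud_share:.0%}")  # ~38%
```

Under that assumption, the quoted blend implies roughly 38% of the sample was fraudulent, which lines up with the 40%+ fake-data figure cited at the top of this piece.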
When fraudulent users were included, the same ad also scored highest in usage intent, which refers to how likely the recipient is to use the company’s services. It finished last when only acceptable users were scored.
What does that mean in practice? Perhaps a company should run the CTA featuring the female hero and the red car, but flawed data tells them to use a male facing a sunset instead.
The results demonstrated just how important Verify is to the ad-testing landscape. Failing to eliminate fraudulent users means a company will invest resources in underperforming ads while never putting any spend behind the ads that scored best with real consumers.
This was just a limited study of a particular campaign, and as we expand our research, we expect to find similar results. Including fraudulent users may not change the outcome of every campaign, but is that a risk you want to take? Or is it a better strategy to know for sure that the answers you’re getting are legitimate and authentic, so you can confidently allocate spend to the creative you know scored best with actual humans?