New bot exploit shows ads.txt is good - but not enough
In late 2018, DoubleVerify’s Fraud Lab identified a new bot network designed to generate a high volume of non-human traffic across hundreds of websites. The bots scraped content from legitimate - and sometimes premium - websites and generated false sites with URLs that looked like the real thing. These fake sites had content stolen from legit sites with fraudulent ads blanketing the pages.
What’s important about this particular exploit is that it was specifically designed to take advantage of the ads.txt framework in order to commit fraud that would not trigger ads.txt’s alarms.
This is bad news for premium content publishers. This is “Oh, crap, I need a belt AND suspenders” bad news.
Since its approval by the Interactive Advertising Bureau, or IAB, ads.txt - short for Authorized Digital Sellers - has been widely (though not universally) adopted by digital publishers. It’s basically a text file that sits on the publisher’s servers and lists every company authorized to sell that publisher’s ad inventory. Not on the list? Not authorized to sell. That’s how it’s supposed to work, at least.
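To make the mechanism concrete, here is a minimal sketch of how an ads.txt check works. The file format follows the IAB spec - one authorized seller per line, comma-separated - but the sample domains, account IDs and helper functions below are illustrative, not drawn from any real publisher or library.

```python
# Hypothetical ads.txt parser and authorization check (illustrative only).
# Per the IAB spec, each line names one authorized seller:
#   <ad system domain>, <publisher account ID>, <DIRECT|RESELLER>[, <cert authority ID>]

SAMPLE_ADS_TXT = """\
# ads.txt for example-publisher.com (fictional sample data)
google.com, pub-1234567890123456, DIRECT, f08c47fec0942fa0
appnexus.com, 7890, RESELLER
"""

def parse_ads_txt(text):
    """Return a set of (domain, account_id, relationship) tuples."""
    entries = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            entries.add((fields[0].lower(), fields[1], fields[2].upper()))
    return entries

def is_authorized(entries, domain, account_id):
    """Not on the list? Not authorized to sell this publisher's inventory."""
    return any(d == domain.lower() and a == account_id for d, a, _ in entries)

entries = parse_ads_txt(SAMPLE_ADS_TXT)
print(is_authorized(entries, "google.com", "pub-1234567890123456"))  # True
print(is_authorized(entries, "shady-exchange.com", "999"))           # False
```

Note that the check only answers "is this seller on the list?" - it says nothing about whether the page requesting the ad is the real site or a scraped copy, which is exactly the gap the exploit described above walks through.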
This gives brands and buying agencies some assurance that they are buying real inventory seen by human beings. On top of third-party verification software, viewability monitors and similar tools, publishers have been in a constant slog to prove that their ads are not being “viewed” by click farms and bots.
The exploit identified by DoubleVerify shows the weakness of an ads.txt-only approach to combating ad fraud on premium websites. It also shows the inventiveness and cunning of bad actors in the murky underbelly of digital advertising, which is fueled by the age-old truism that any business ecosystem will always have people who would rather filch and pilfer than work.
The lesson to publishers: When fighting ad fraud, don’t bring a knife to a gun fight. Bring a knife, a gun, some brass knuckles, a collapsible Bo staff and your best choke hold.
Because the bad guys will be.