Jason Calacanis’ spectacular dissection of Comscore’s recent report on the blog audience has highlighted for me the real difficulty of comparing panel-based data with website metrics. We see this here time and again: panel-based reports almost seem to be describing a different world from the one we’re seeing in our usage reports.
It seems to me that panels are good at providing the “soft” stuff that advertising agencies love: specifically, audience demographics and data like mean household income. This is the meat and drink of the advertising world, and the cold page view and unique user figures that webheads like me thrive on are an empty vessel without it when talking to a potential advertiser.
Where panels seem to go horribly awry is in anticipating the unique usage patterns of major websites. A huge amount of user activity on big websites is distributed among users who may visit the site only once in any given month, and almost none of this activity gets recorded by a panel survey. It’s as if the panel is describing one type of audience while completely ignoring the other. I think this is a major problem for panels: how do they describe the audience that visits my website via Google to read one article on an obscure subject and then goes away again? Because the long tail dictates that all those single-page visits add up to a great big bucket of viable ad inventory.
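To make the long-tail point concrete, here’s a toy simulation (entirely made-up numbers, nothing to do with Comscore’s actual methodology): visit counts drawn from a rough power law, where most users turn up once and a handful visit constantly. Even though one-time visitors individually contribute almost nothing, collectively they account for a big slice of total pageviews.

```python
# Toy sketch: a hypothetical site with 100,000 monthly visitors whose
# visit counts follow a rough power law. All figures are invented for
# illustration only.
import random

random.seed(42)

# Draw a visit count per user from a heavy-tailed distribution,
# capped at 50 visits/month. (1 - random()) is in (0, 1], so no
# division by zero.
visits = [min(50, int(1 / (1 - random.random()) ** 0.7))
          for _ in range(100_000)]

total_pageviews = sum(visits)
one_timers = sum(1 for v in visits if v == 1)
one_timer_views = sum(v for v in visits if v == 1)

print(f"Single-visit users: {one_timers / len(visits):.0%} of the audience")
print(f"...contributing {one_timer_views / total_pageviews:.0%} "
      f"of all pageviews")
```

Under these assumptions, well over half the audience visits exactly once, and those visits together form a substantial chunk of the inventory. A panel of a few thousand people will barely sample any given site’s one-off drive-by visitors, which is exactly the traffic the server logs are full of.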
So, on the one hand, it’s great to see the kind of demographic research that Comscore has done here (and, to be fair, this is the research Nick Denton highlights in his post on the subject, appropriately titled “Blog readers are sexy”, which is exactly the kind of assertion panel-based research gives you permission to make). They should just have been a little more circumspect in their use of web metrics. I mean, it doesn’t take a rocket scientist to realise that, in the real world, Gawker doesn’t really have a bigger audience than Slashdot, does it?