It is no surprise that radio's newest debutante has been found to be something of a hooker.
This week it was revealed that listenership figures provided by NetDynamix to online radio stations such as 2oceansvibe were incorrect.
It was only a matter of time before online radio stats were questioned, and it was pretty much predictable that this latest addition to the industry would turn out to be less than the innocent young virgin it was made out to be.
The past 18 years have seen the radio environment become competitive to the point of bloodletting. Competitive to the point where station managers and media owners will try to push home every possible advantage. I am not saying that they all cook the listenership books intentionally, but they do tend to resort to smoke-and-mirrors tactics, trying to turn the tiniest of advantages into something of awe-inspiring proportions.
Margin of error
The problem is that radio listenership data collection has never been an exact science. It wasn't all that long ago that the SA Advertising Research Foundation admitted quite openly that its RAMS radio numbers were subject to a margin of error of about 45 per cent.
Admittedly, that margin of error has since narrowed as sample sizes have increased, but even with the best will in the world, listenership figures are still nowhere near reliable enough to be used as a bible. They serve, rather, just to illustrate trends.
This applies particularly to community radio stations, where the data is completely nonsensical. Again, SAARF simply cannot afford to research community radio properly; it just hasn't got the resources.
But that doesn't stop community figures from being published: figures that ultimately show very poor listenership, mainly because the sample involved falls way outside a particular community station's footprint.
Frankly, my advice to advertisers with regard to radio is simply to measure the impact of the advertising and make future decisions on the success or failure of radio advertising campaigns rather than rely entirely on listenership figures.
And now that online radio has made its debut, there were those who expected far more accurate measurement.
Trouble is, although it is possible, through a simple iPhone app for example, to tell at a glance precisely how many visitors a website has in real time, it's a lot more difficult to measure an audience retrospectively.
Hopefully, the whistle-blower in the latest online radio data scandal, IT specialist Shaun Dewberry, will be successful in his attempt to develop a far more accurate measurement technique for online radio.
It is really sad that the online environment, which has become so trustworthy in terms of measurement accuracy, has now been tarnished by some cavalier use of statistics.
It is equally sad that so many media buyers continue to rely on readership, viewership and listenership figures as though they were bibles, without realising that researching human preferences by asking the sort of direct questions media researchers have no option but to ask inevitably results in a significant lie factor.
I am convinced that case histories, track records and the success or failure stories of advertisers across all types of media are a far better means of marketing measurement. Using listenership, viewership or readership statistics should be nothing more than a plan B at best.