“It was only one or two meters!!”
Whether you’ve experienced poring over data in Media Monitors or visiting Arbitron’s offices in Columbia, Maryland back in the day to hand-sift through diaries (or like me, both), the frustration of radio ratings data inconsistencies is a tale as old as the measurement itself.
Generally, ratings tend to tell the story well when viewed over longer periods of time. But when you rely on quarterly, monthly, or even weekly ratings to make decisions that affect your programming, hold on buckaroo. You’re often in for a wild ride.
For me, the irrational rationalizing was the worst part of being a program director on ratings day. OK, maybe the worst part was the 30 minutes leading up to the release of the numbers, when my stomach was in knots. But a close second was the aftermath on bad book days, with the General Manager, Sales Manager, and maybe some other members of the programming team in my office offering theories.
“Well, you know, the morning show took the week off and so-and-so was in there by herself.”
“We ran the heck out of that contest. I guess it didn’t work.”
“Why didn’t those commercial-free sweeps connect?”
“The competition started playing one more current an hour.”
Guess, guess, guess, guess.
If you’re only looking at a ratings number, you simply don’t know what caused the drop.
It’s no different when you rationalize why you went up. There are exceptions, of course. A truly major change like a format flip or a morning show departure can cause short-term ratings fluctuations, but minor things typically don’t.
At its core, perceptual research is so valuable because of the sample size. When you’re talking with hundreds of people in your market who listen to your station or your competitors, you can both rely on the data and feel comfortable making programming decisions based on it.
And because data is impartial, the answers you think you’ll get aren’t always the ones you do. This is especially true of the question in the title of this blog post: “Why did my radio station’s ratings go down?”
Attempting to answer that question shouldn’t be the only time you conduct perceptual research, but it certainly is a reason to. Having been involved in many studies where that question was asked, I can tell you the answer is not always that something is terribly wrong.
Sometimes, radio stations going through ratings struggles look remarkably healthy from a perceptual perspective. It could be a ratings sampling issue. And although it may not be much consolation to the sales department in the short term, it is helpful to know that the ratings declines in those instances are usually temporary. It is not unusual to see them bounce back. And thankfully, studies in instances like these offer programmers confidence to not go fixing things that aren’t broken, an instinct that may be opposite to the feeling they get when looking at a bad book.
And yes, sometimes things are in fact broken. Or the competition owns all the images you’re trying to win. Or not enough potential listeners know your station exists. Or a myriad of other issues we can identify. They’re all solvable challenges, sometimes painful, sometimes not. But at least you know the why behind the what and you can take the appropriate action to make positive change.
Guessing may work well for this guy:
But it’s not so fun when you’re in charge of a radio station.