
Why Did My Radio Station’s Ratings Go Down?

“It was only one or two meters!!”

Whether you’ve experienced poring over data in Media Monitors or visiting Arbitron’s offices in Columbia, Maryland back in the day to hand-sift through diaries (or like me, both), the frustration of radio ratings data inconsistencies is a tale as old as the measurement itself.

Generally, ratings tend to tell the story well when viewed over longer periods of time. But when you rely on quarterly, monthly, or even weekly ratings to make decisions that affect your programming, hold on, buckaroo. You’re often in for a wild ride.

For me, the irrational rationalizing was the worst part as a program director on ratings day. OK, maybe it was the 30 minutes leading up to the release of the numbers when my stomach turned in knots. But it was the moments afterwards on bad book days, with the General Manager, Sales Manager, and maybe some other members of the programming team in my office offering theories.

“Well, you know, the morning show took the week off and so-and-so was in there by herself.”

“We ran the heck out of that contest. I guess it didn’t work.”

“Why didn’t those commercial-free sweeps connect?”

“The competition started playing one more current an hour.”

Guess, guess, guess, guess.

If you’re only looking at a ratings number, you simply don’t know what caused the drop.

It’s no different when you rationalize why you went up. There are exceptions, of course. A truly major change like a format flip or morning show departure can cause short-term ratings flux, but minor things typically don’t.

At its core, perceptual research is so valuable because of its sample size. When you’re talking with hundreds of listeners in your market who listen to your station or your competitors, you can both rely on the data and feel comfortable making programming decisions based on it.

And because data is impartial, the answers you think you’ll get aren’t always the ones you do. This is especially true when it comes to the question in the title of this blog: “Why did my radio station’s ratings go down?”

Attempting to answer that question shouldn’t be the only time you conduct perceptual research, but it certainly is a reason to. Having been involved in many studies where that question was asked, I can tell you the answer is not always that something is terribly wrong.

Sometimes, radio stations going through ratings struggles look remarkably healthy from a perceptual perspective. It could be a ratings sampling issue. And although it may not be much consolation to the sales department in the short term, it is helpful to know that ratings declines in those instances are usually temporary. It is not unusual to see them bounce back. And thankfully, studies in instances like these give programmers the confidence not to fix things that aren’t broken, the opposite of the instinct a bad book tends to trigger.

And yes, sometimes things are in fact broken. Or the competition owns all the images you’re trying to win. Or not enough potential listeners know your station exists. Or a myriad of other issues we can identify. They’re all solvable challenges, sometimes painful, sometimes not. But at least you know the why behind the what and you can take the appropriate action to make positive change.

Guessing may work well for this guy:


Source: Syracuse.com

But it’s not so fun when you’re in charge of a radio station.

How to Move the Ratings Needle

Tuesdays With Coleman

Michael O’Shea is the President and General Manager of Sonoma Media Group in Santa Rosa, California. Many years ago, he told me something about how radio stations attempt to impact ratings that has stuck with me to this day. I’ll paraphrase a bit.

There are two numbers in the ratings share of every station: the number to the left of the decimal (as in the 4 in a 4.3 share) and the number to the right of the decimal (as in the 3 in a 4.3 share). The number to the right is impacted by the things radio stations spend the vast majority of their time on. Tweaking the music. Adding or removing a talk break. Giving away concert tickets. These are the tactical things that may take a station from a 4.3 to a 4.5 or maybe a 4.7.

What moves the number to the left of the decimal point? What gets your station to make big improvements in its ratings? Strengthening your brand. Major marketing. A big format debut. A morning personality crossing a threshold of impactful connection with the audience. Large, momentum-shifting, buzzworthy things. That’s how stations go from a 4.3 to a 5.3.

Recent history draws our attention to two momentum-shifting examples in politics. In 2016, Hillary Clinton had well-produced campaign ads, high-profile endorsements, and, seemingly, a victory well in hand. But it was Donald Trump’s ability to shift perception through consistent repetition that changed the momentum and the outcome of the race. He would not and could not have won by sticking to the typical things candidates do, such as issuing policy papers and crafting careful messages to appeal to voters in the middle.

More recently, few expected Joe Biden to emerge as the 2020 Democratic candidate. Again, it wasn’t a snappy ad or one-liner at a debate that changed the game. Biden utilized a groundswell of support in South Carolina to shift perception of his electability.

Rather than just managing the minutiae, I’d like to see the radio industry focus on impacting the public conversation.

Is this more challenging than ever? Yes. Does ratings compaction, particularly in PPM markets, make impacting the number to the left of the decimal point even more difficult? Absolutely.

If I owned or managed a radio station today, I would hire a marketing specialist specifically charged with getting media coverage. I’d make it a mission that my morning show would be such market authorities on pop culture and music that other media outlets would look to it for leadership. Last Friday morning, Charlamagne Tha God from The Breakfast Club, the morning show based at Power 105.1 in New York, interviewed Joe Biden. As usual, the show posted the interview on social media (The Breakfast Club has 4.4 million YouTube subscribers) and on its podcast. Towards the end of the interview, Biden said, “If you have a problem figuring out whether you’re for me or Trump, then you ain’t black.” The comment sparked a controversy over whether Biden was taking the African American vote for granted.

Sure, The Breakfast Club has massive reach now through its many channels, but the syndicated Urban juggernaut started as a local morning show ten years ago. It did not build a following and a sphere of influence by mirroring the template of other morning shows. The Breakfast Club made interviews a core part of the show design. Guests know that, as The New York Times writes, “No one who enters the studio or, now, joins a video call with any member of the hosting trio is safe from commentary and criticism.” The Breakfast Club calls itself “The World’s Most Dangerous Morning Show.” Safe companionship may be just fine for some morning shows. But The Breakfast Club knows that even the chance something controversial and real could happen at any time is what creates lasting buzz and loyalty.

If your promotions staff spends too much time concerning itself with the prize closet or database emails, maybe it’s time to refocus. Maybe now, while there are no remotes, is as good a time as any.

The reason we track brand perception in our research is that perception is what matters. It’s what’s always mattered and always will.

Worry less about minutiae. Make big, strategic brand decisions. Control the conversation. Change perceptions. The number to the left of the decimal point will follow.

Preparing for Daily Radio Ratings

Tuesdays With Coleman

One of my favorite Facebook features is Memories, which allows me to start most days with reminders of life events I shared in years past. A few weeks ago, I woke up to reminders of a great business trip I took across Canada ten years ago.

On that trip, my colleague John Boyne and I delivered breakfast presentations on four consecutive mornings in Vancouver, Calgary, Edmonton and Toronto at the invitation of NLogic (then known as BBM Analytics), the software arm of the Canadian ratings service. Its president asked us to share our early learnings about PPM in the United States just before the audience measurement service was rolled out in his country.

I bring this up because a few weeks ago I had the opportunity to see the “next big thing” when it comes to PPM and, as a result, many of the items John and I covered in those breakfast presentations are worth revisiting.

This “next big thing” is coming this month from Media Monitors and its name says it all: Audio Overnights. Yes, it’s true, after making the leap from quarterlies to monthlies to weeklies, the radio business is about to join the world of “dailies.” This means that after constantly reminding our clients in PPM markets that “It’s only a weekly,” we’re now going to have to hold their hands through the ups and downs they will experience as they download the ratings from yesterday onto their computers.

Media Monitors

I am not going to use this week’s blog to rehash Jon Coleman’s landmark “Top Ten Things to Do as a New PD in a PPM Market” article (although, if you want to remind yourself of its teachings, I invite you to review the piece here), which encapsulated much of the material we covered in our presentations to Canadian broadcasters. Instead, I am going to focus on four of the items in Jon’s article that address the changes most stations see in their PPM performances on a short-term basis.

One of Jon’s ten “things” is the need to understand how PPM works and that it—like all research—is prone to statistical wobble. This will be especially true when we start looking at PPM data on a daily basis, as it will be possible—likely, in fact—that there will be occasions where your audience will grow from Wednesday to Thursday and the daily data will tell you the complete opposite. Thus, it is important not to fixate on individual days; what you must do instead is look for longer-term trends in daily data before you start to raise questions about a station’s performance.
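As a back-of-the-envelope illustration of that wobble, here is a minimal simulation sketch. The panel size, listening probabilities, and trial count below are all made-up numbers for illustration, not actual PPM parameters; the point is simply how often a small daily panel can report a decline even when true listening grew.

```python
import random

random.seed(0)

PANEL = 400                  # hypothetical number of in-tab panelists per day
P_WED, P_THU = 0.10, 0.11    # true listening probability actually GROWS Wed -> Thu

def measured_share(p, n):
    """One simulated day: each of n panelists listens with probability p."""
    return sum(random.random() < p for _ in range(n)) / n

# Simulate 1,000 Wednesday/Thursday pairs and count how often the
# measured data moves in the opposite direction from reality.
reversals = sum(
    measured_share(P_THU, PANEL) < measured_share(P_WED, PANEL)
    for _ in range(1000)
)
print(f"Daily data showed a decline in {reversals} of 1,000 simulated weeks")
```

In a typical run, a substantial share of simulated Thursdays come in below Wednesday despite the underlying growth, which is exactly why a single day’s direction should not drive decisions.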

Another point is that while programmers—thanks in part to tools Nielsen Audio has introduced in recent years—have a better understanding of panel dynamics than they used to, they will need to recognize that panel behavior will have a huge impact on daily data. We usually talk about panel dynamics in terms of respondents entering and leaving a panel, but with daily data, we will experience the impact of panelists dropping in and out of in-tab every day. You can already envision a scenario in which a panelist who reliably contributes quarter-hours of listening to a station experiences a life event that prevents him or her from carrying a meter on a given day—or, less dramatically, one that breaks his or her usual listening pattern—and the outsized impact that would have on the daily numbers.

Portable People Meter

Just as we discourage our clients from obsessing over weekly or even monthly PPM data, we feel this is even more important once Media Monitors delivers Audio Overnights to its customers. Avoid downloading the numbers every day and don’t make an event out of it when you do. Instead, look at several days’ worth of data at the same time and watch for patterns by aggregating it. Unless there was a major event you would expect to drive a big spike or decline in listening, don’t lose the forest for the trees by hyper-focusing on a single day’s data.
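To illustrate the kind of aggregation we have in mind, here is a minimal sketch. The daily share figures and the seven-day window are purely hypothetical; the takeaway is how trailing weekly averages smooth out day-to-day wobble.

```python
from statistics import mean

# Hypothetical 14 days of a station's daily share (illustrative numbers only)
daily = [4.1, 4.6, 3.9, 4.4, 4.2, 4.8, 4.0,
         4.3, 4.7, 4.1, 4.5, 4.4, 4.9, 4.2]

def rolling(data, window=7):
    """Trailing averages: each point summarizes a full week of dailies."""
    return [round(mean(data[i - window + 1 : i + 1]), 2)
            for i in range(window - 1, len(data))]

print(rolling(daily))
```

The raw dailies swing a full share point from best day to worst; the weekly averages vary far less, making it much easier to spot a real trend instead of reacting to one noisy day.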

Lastly, evoking one of our favorite philosophies about research, avoid confusing correlation with causation. The former is when your ratings go up or down at the same time as you or a competitor made a change and you incorrectly assume that the numbers reflect the impact of that change. It is only through other research that gives you more insight into the hows and whys of listeners’ behavior that you can connect the two with confidence.

I am not going to pass judgment on the introduction of Audio Overnights; they’re coming and we will be prepared to help our clients interpret the data. With that said, I am confident that programmers who follow the tenets of Outside Thinking and understand how consumers decide what to listen to and when will be the ones who do not obsess over daily data. They will use the tool correctly: as a guide that helps them raise the right questions about their stations, not as an answer for why their stations perform as they do.

Why Successful Radio Stations Need Research

Tuesdays With Coleman

This week, we continue with part 2 of a blog series that revisits a column I wrote for Radio & Records in 1999, found while digging through the Coleman Insights archives.

Radio and Records

Despite significant changes in the industry over the past 18 years, I’ve been struck by how little some things have changed. In that column, I discussed four scenarios in which radio managers fail to get the full benefit of conducting research on their stations. This week, we’ll put the spotlight on:

Scenario 2 – We Have Great Numbers: A program director dismisses any need for conducting research on her station by citing its performance in the latest Nielsen book.

There is no question that Nielsen represents “the bottom line.” I worked for Arbitron, which was acquired by Nielsen in 2013, for six years and experienced how significantly the company’s data impacted radio stations first hand. This experience—and my contact with the company since I left there in 1993—taught me a great deal about the quality of the information they provide, even if there is always room to improve it.

Using Nielsen to assess how your station is doing, however, is not a very good use of their data. In fact, it is downright dangerous. Listener appetites and the competitive landscape can change so quickly that what Nielsen reported in the fall book might have little bearing on the winter results. It’s also been my experience that ratings performance and the strength of your position are not perfectly correlated.

Let’s play out a hypothetical situation. Let’s say you’re in charge of an Adult Contemporary station, currently number one in your target demo. When it comes to music, you’ve got a lot of room to play with and your playlist covers various segments of the format spanning four decades. Your morning show gets decent numbers, but at-work listening is where your station really shines. With things going so well, it would be easy to say, “Why do research?”

There are a couple of things that tend to remain constant in radio. First, you probably won’t stay on top forever. Second, if you are at or near the top, a competitor will likely try to slice into your success. What if you knew, with a great deal of clarity, where your own strengths and weaknesses lie? What if you knew the strengths and weaknesses of other stations in the market? What if you knew whether the musical tastes of the market were changing? What if you knew that the awareness level and appeal of your morning show wasn’t as high as you expected? What if you discovered the broad nature of your playlist made you vulnerable to a more focused attack?

Would you rather have answers to these questions before your radio station is attacked?

A perceptual study—at Coleman Insights, it’s called a Plan Developer℠—can help you discover your vulnerabilities and shore up your fort before it is too late. Perceptual research can also reveal what your brand stands for in the marketplace. Stations with strong brands are far better equipped to withstand ratings wobbles than those without. Focusing on the next ratings book is a short-term strategy. Focusing on research that builds your brand is a long-term approach built for long-term success.

What kind of research should you do and is it giving you the results you need? Furthermore, are the results being interpreted properly as part of a larger brand strategy? Next week, we’ll focus on Scenario 3 – Confusing Tactical and Strategic Research.