Hi, Doc! Love the column. Here's a good question for ya... Let's say you work for a station in a small market. You can't afford to do music callout research, although you do conduct annual auditorium testing for recurrents and gold. I'm trying to phrase this question to make sense... For example, let's say that your station is in Upstate New York (this is hypothetical, so you can post the location). How wise would it be to look at what stations in your region are doing? If you know that Syracuse and Rochester do weekly callout, and you're in Utica, how strong are regional similarities? If "I'm a Slave 4U" by Britney tests there, will it probably test well in your city? I hope you can make some sense of my question and shed some light on this. Thanks! - Anonymous
Anon: Hi to you, and I'm happy that you enjoy the column. I think your question is well phrased, and I'll try to give you an equally well phrased answer. (By the way, nice setup telling me that your question is a "good" one. How can I argue with that?)
Before I get to the answer, I'd like to say this: If you have been reading this column for a long time, or if you know me personally, you know that I advocate research for almost every decision because the information helps decision-makers make better decisions. The goal is always to: (1) find out what the listeners want; (2) give it to them; and (3) tell them that you gave it to them. This is the 3-step approach that works in any business or personal situation, not just radio programming.
But I do realize that not all radio stations can afford to conduct research for every single question. I also know that research should be used as a guide for decision-making, not as a bible to rule everything on your radio station. So, in many cases when you can't afford research, you do the best you can to get as much information as possible and then rely on your talent, experience, and gut feeling.
OK, so you're asking me to assume that I work for a small radio station in a market that is near other markets where similarly formatted radio stations conduct callout. What would I do? Well, before I can explain what I would do, I need to explain what I already know about the situation. And that is:
I do know that it isn't wise to use another radio station's research, whether it's a perceptual study or music research. However, I also know that the differences between markets within the same region are smaller than the differences between regions of the country. That is, markets in a specific region differ less from one another than, for example, the northeast differs from the southwest. Therefore, if you're going to "steal" and look at another radio station's research, it's better to "steal" from radio stations close to you rather than across the country. (For example, if you don't live in California and use information you find from radio stations in LA, you're making a big mistake.)
I do know that some artists, and I'll use Britney here as you did, test well regardless of the market or region of the country. Yes, I know that there will be a few markets somewhere in the country that don't match the "norm," but most will. Therefore, if you're going to take a chance playing music, it's best to take a chance with the most popular artists. The odds are that your decision to play these artists will be the correct one.
I do know that relying on "national" callout or "national" playlists makes no sense. The data are nationwide averages and these averages will not relate to my radio station, except in the case of some very popular artists (as already mentioned).
OK, so what would I do?
First, I would try to convince my management to conduct callout research. I would conduct a relentless campaign and become a pain in the neck. The reasoning I would give is that the music is the "product" of the radio station and I need help from my listeners to tell me which songs I should add to the playlist. If management still doesn't cough up the money, I would tell them that I'll do the best I can (don't blame me if I fail) and then go to the next step.
(You might not be able to do this step if you don't have a statistical background.) I would conduct a factor analysis and a correlation analysis of my auditorium music data to see if I can find any underlying common element(s) among the songs that my listeners like. Once I know this, I can compare the sound/style/type of any new song that comes to my attention against that foundation. This process isn't exactly the same as asking the listeners to rate each new song, but I will have a foundation I can use to help with my decision: the listeners are helping me by "proxy" through their auditorium music ratings.
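For readers who want a concrete starting point, here is a minimal sketch of the correlation step in Python with NumPy. All of the data are made up for illustration; the idea is that songs whose listener ratings move together probably share an underlying sound or style, and those clusters become your "foundation" for judging new songs. (A full factor analysis would need additional tooling, such as scikit-learn's FactorAnalysis.)

```python
import numpy as np

# Hypothetical auditorium test data: rows = respondents, columns = songs.
# Scores on a 1-5 scale (every number here is illustrative, not real data).
ratings = np.array([
    [5, 4, 5, 2, 1],
    [4, 5, 5, 1, 2],
    [5, 5, 4, 2, 2],
    [2, 1, 2, 5, 4],
    [1, 2, 1, 4, 5],
    [2, 2, 1, 5, 5],
])

# Correlation matrix across songs: songs that listeners rate similarly
# show high positive correlations and cluster together.
corr = np.corrcoef(ratings, rowvar=False)

# In this toy data, songs 0-2 form one cluster and songs 3-4 another.
print(np.round(corr, 2))
```

A new song that "sounds like" the songs in your strongest cluster is a safer add than one that resembles nothing your listeners already rate well.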
Next, if there are a few other radio stations in my area that are similarly formatted, I would conduct a very thorough analysis of each radio station to determine its exact target audience. If I'm going to "steal" information, I want to make sure that I "steal" from a radio station that is close to mine in reference to its target audience. In other words, if I'm a CHR, I wouldn't just look at any CHR for information; I want one that is after the same target I'm after.
I would also closely monitor what the top 20 CHR radio stations in the country are adding to their playlists. (I'm not going against what I said before about using research from another region.) I would look at what they have in common in terms of new music adds just to make sure I'm being exposed to a variety of information. In other words, I can't assume that the information I "steal" from one or more CHR radio stations in my region is good information. They may be doing everything wrong, and if I just use their information without some type of verification, I would become as bad as they are. If I "steal," I want to make sure that what I'm "stealing" is good stuff.
I would do everything I could to get free information from my listeners. I would ask them questions about the music when we do a remote, or when they call the radio station, or when I'm at a social function. I wouldn't take any of this information alone as a vote for or against what I'm playing on the radio station, but I would be able to collect some feedback that may lead to some general conclusion I can follow up on with a tack-on questionnaire at an auditorium test.
So that's it. Without the luxury of having my own primary research to help me make decisions about adding currents, I would collect as much secondary research as I could. I would then put all this information together to determine if it fits with my goals for the radio station, that is, whether the information fits with what I already know my listeners want to hear.
By the way, you should have noticed that every time I used the word "steal," I put the word in quotes. I did that for a reason. I do not advocate stealing anything in any shape, manner, or form. "Stealing" in my answer means gathering information only from what is available over the airwaves: publicly available information, or what may be printed in national publications such as R & R.
Reliance on Research?
Why does it appear that some PDs treat their research reports as Holy Scripture? Just about everyone is doing market research and sampling the same pool of people, yet they are obviously coming up with different results, else they would all be playing the same tunes at the same time. Whatever happened to knowing your audience? When did the consultants take over radio? I would think a good PD would take his research and compare that with his experience in that market to decide what to do next. Yet, I keep seeing the report become the policy the next day. Is this just laziness? Is it indicative of a PD that doesn't really know the market? Is it the fear of making a mistake? - Zewd in Big D
Zewd: You raise several points in your question, and I'll do my best to address each one.
First, you ask why some PDs treat their research reports as Holy Scripture. I'm not sure if I would use the same document as a comparison, but I don't see anything wrong with a PD considering the radio station's research as a bible (small letter "b") of what the audience wants.
In addition, a good PD (and there are many good PDs) is directly involved in developing the questionnaire or other measurement for any research project. This involvement, along with the hope that the study was conducted by a competent researcher, provides the PD with specific answers or directions for and about the radio station. A good PD who is involved in the research and relies on the research data as a bible is not a lazy person. This is the sign of a PD who knows how important it is to find out what the listeners want. And this addresses your question, "Whatever happened to knowing your audience?" Research is the way to know your audience, and good PDs use it for that purpose. (By the way, I'm not assuming that ALL PDs know what they're doing, and I'm sure there are some who use the research data as a crutch.)
When did consultants take over radio? I know this is semantics, but I don't know any consultants who conduct research; they use information collected by researchers. Whatever the case, it may be that some consultants have "taken over" radio, and if that is true, then it's the fault of the radio station's management, not the consultant. Most of the consultants I know go only as far as management allows them to go. If a consultant "takes over" a radio station, then management gave the green light.
Finally, you say that, "I keep seeing the report become the policy the next day." While I don't think that all PDs are equal, if a research study was designed and conducted properly, then it should provide the information to help make policy for the next day or week or month. You may be referring to people who don't understand research and blindly follow the results. If so, I agree with you. That is not the purpose of research. The information collected in a research study is designed to help make decisions, not make the decisions alone.
In summary, I understand your points and can only say that all PDs and GMs are not alike in terms of their talents, and you can't expect every management person to behave in the same way. There are some good ones, some bad ones, and some ugly ones. Hey, I think that might be a good title for a movie... you know, "The Ones"... I mean, well, never mind.
Hi, Mr. Wimmer: Here in Colombia, South America, we are used to doing remotes with FM transmitters (or RPTs, as we call them) we have in our mobile units. But there are times, especially when we are part of a 12-station cluster in the market, that we don't have enough frequencies available for simultaneous transmissions. Obviously, we want the best sound possible. What other equipment do you recommend for remotes? Which are the ones most radio stations use in the US? Thanks a lot in advance for your help, and keep up the excellent work! - Anonymous
Anon: Welcome from Colombia, and I'll try to keep up my work on the column. I'm glad you enjoy it.
Your question is outside my area of expertise, so I sent it to Paul Douglas in Atlanta (Cox Radio Syndication). He said:
"Most radio stations use ISDN lines for broadcast quality if it's a full broadcast remote. Comrex is a company that sells a variety of different types of equipment. In your case, a simple 'matrix' box will probably work, unless music is being played from the remote location (which is hardly ever the case)."
Remote Equipment Comment
Doc: A few days ago you answered somebody from Bogota, Colombia, about how you in the States use remotes to link from one place to the radio station, and you gave him good information about this topic. Can you help me please? Thanks. - Dondon
Dondon: The topic you're referring to was using cell phones for remotes. You can read the questions and answers in the Research Doctor Archive under "Cell Phone Remote."
Based on your wealth of knowledge, what is the general attitude listeners have toward remote broadcasts from commercial clients? What are the obvious irritants, if any? Thanks. - Anonymous
Anon: Remotes are very much like contests: While a small group is very interested in remotes, the largest group is apathetic (they can take them or leave them). The only negatives I can recall over the years are when a remote (just like a contest) takes up too much of the normal programming time listeners are accustomed to hearing.
You need to check this with your listeners. However, the biggest value I have seen is with a small group of radio station groupies and the advertisers who pay for the remote.
Remotes and Playlists Revisited
You have said that remotes only appeal to groupies and the bottom line. If done in high-traffic areas, though, can't remotes increase exposure for the station and provide visibility to an audience that might not be exposed to it otherwise? I'm thinking about broadcasts at malls, amusement parks, the local teen hangouts, etc.
I read elsewhere on All Access that research has been done indicating that cume continues to go up the smaller you make your playlist. Is song burn the reason why some listeners don't stay with a radio station that has a short playlist? Isn't there a range where (for example, in CHR) a radio station would begin to see diminishing returns?
I find it hard to believe a station could generate a 50 share cume with a 7-song playlist. Thanks again. - Gene
Gene: What I said was, "Remotes are very much like contests: While a small group is very interested in remotes, the largest group is apathetic (they can take them or leave them)." I did not say that remotes are not good for exposure. Comments from listeners indicate that remotes are good for radio stations, but in most cases, the exposure is accidental because of attendance at a specific event. That is, the majority of listeners do not say that they attend events specifically to attend a radio station's remote. This does not mean that remotes are useless. They aren't. They provide much-needed exposure for the radio station.
About your short playlist questions... I don't know which study you are referring to on All Access that indicates that cume increases with a shorter playlist. Before I can make any comments about that, I would need to see the study (which would have to include a description of the methodology, or how the study was conducted). If you can get it to me, I'd be happy to review it.
And this answers your question about diminishing returns. I haven't seen a scientific study about this. The only information I have about this relationship is comments from people who say something like, "Our Arbitron numbers went up when we trimmed the playlist." That is not a scientific study. Those are only opinions.
Remotes and Playlists Re-Revisited
It seems as if I have egg on my face twice. I indeed read too much into your comments regarding remotes. I'm glad you clarified. "Everybody" (another unqualified example) seems to be saying, "Dump remotes, they just clutter up the air and don't benefit your station." I would be willing to bet BAD remotes, that is to say dull, 10-minute-at-a-time sales pitches, certainly would harm your station! However, if you are simply doing your talk breaks from a visible location, it seems like it could do nothing but help you.
Regarding the studies on playlists, it was NOT from All Access, which explains why I couldn't find it. It is at: www.stlmedia.net/pages/rotate9.htm, STL Media, where Mike Anderson says, in part:
"It really is true. If you play the best hits over and over, your cumulative audience (the 'cume' is similar to a newspaper's circulation figure) will increase. There are even mathematical models to prove the theory. In fact, a few years back, I took the math to the extreme and, using one of the formulae, showed that the tighter the playlist the higher and faster the cume would grow.
Of course, by the time we got the total playlist down to five or fewer songs we'd theoretically acquired 100% of the total radio listening universe in the market. Absurd? Sure. But the fact remains that, for playlists, tighter is better."
I guess I had better write him to find out what these mathematical models are based on.
Thanks again for a fantastic column and service to radio folks everywhere! - Gene
Gene: You're welcome for the column. I hope you learn a lot. I do.
I read the brief commentary by Mr. Anderson. My only comment is that I need to see the formula he used for his computation. I gots ta see the formula.
Your last comment about remotes ("...seems like it could do nothing but help you.") may be correct. I'm not sure. You would have to ask your listeners. You need to find out the amount of importance your listeners attribute to remotes. If remotes are important, then the time and money are worth it. If the listeners don't think remotes are important, then you could save the time and money.
Your research and the research of others say that radio listeners get tired of the same things over and over. You have commented on this many times here.
Since we started doing callout research, we found that it can take a long time to build familiarity for some songs and a long time for the songs to burn or get lower scores. We also find that when we move some of our music from a current category to a recurrent category familiarity goes down below 90%. Have you found this to be true in your research?
If so, how can a station balance this? You either have to play the crap out of a song until you kill it, or try to extend its life by decreasing rotation and risking lower familiarity.
While I sympathize with listeners' feelings about radio's repetition, repetition seems necessary to build familiarity and passion for the music. Could it be that too many stations simply play the wrong songs too much? I await your wisdom. - Anonymous
Anon: You await my wisdom? Let me know if it arrives. I have been waiting for a long time.
Before I get to a discussion about repetition, I need to address something you mentioned in your question. You said that, "We also find that when we move some of our music from a current category to a recurrent category familiarity goes down below 90%." That comment prompted several things:
Logic suggests that a song isn't familiar one day and unfamiliar another day, so there must be something going on. Something doesn't make sense. I can understand a drop in the ability to name an artist or title if a song is played less often, but I don't understand a drop in "familiarity" with less airplay. Someone might argue that your listeners have poor memories, but I don't buy that.
I don't know how far below 90% the familiarity dropped, but my guess is that the drop is within your sampling error margin and the numbers are actually flat. You should not interpret callout results (or any other research) without considering sampling error. If you're using a sample of 100 respondents, a song that receives 90% familiarity actually falls somewhere between 80.2% and 99.8%. (Theoretically, a sample of 100 has a maximum sampling error of ±9.8% at the 95% confidence level, although your actual error is less because your screener reduces variance among the respondents, as does the type of test you're conducting: repeated measures, with respondents rating several songs on the same scale.) My guess is that your sampling error is somewhere around ±7.0%, so let's use that in an example. This means a song with a 90% familiarity actually falls somewhere between 83% and 97%. In other words, a change of only a few percentage points (positive or negative) means nothing. You need a change greater than 7% to say that familiarity is up or down.
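To make the arithmetic concrete, here is a short Python sketch of the standard formula behind those numbers, z * sqrt(p(1-p)/n) at the 95% confidence level (z = 1.96). The ±9.8 figure is the worst case, which occurs at p = 0.5.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Sampling error for a proportion p with a sample of n respondents,
    at the 95% confidence level (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case for a sample of 100: p = 0.5 gives about +/-9.8 points,
# so a reported 90% familiarity could fall anywhere from 80.2% to 99.8%.
print(round(margin_of_error(0.5, 100) * 100, 1))  # 9.8
```

Note that the error shrinks as the sample grows, but only with the square root of n: quadrupling the sample only halves the error.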
You don't mention what type of sampling you use for your callout, but a statistically different percentage may be due to the fact that you're using different respondents for each callout. Even if you replace 25% of your sample for each report, you must still expect some fluctuation in the data (that's why you need to interpret your data using sampling error.) You must expect changes in percentages because you're using different samples, and each sample has its own sampling error.
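A quick simulation can show why fresh samples always wobble. In this illustrative Python sketch, the song's "true" familiarity in the market never changes, yet five independent callout samples of 100 respondents report five different percentages; sampling fluctuation alone produces the movement.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_FAMILIARITY = 0.90  # the song's actual familiarity never changes
SAMPLE_SIZE = 100

# Draw five independent callout samples and compute familiarity in each.
results = []
for _ in range(5):
    familiar = sum(random.random() < TRUE_FAMILIARITY
                   for _ in range(SAMPLE_SIZE))
    results.append(familiar / SAMPLE_SIZE)

# Each sample reports a different percentage even though nothing changed.
print(results)
```

This is exactly the fluctuation to expect when 25% (or more) of the respondents are replaced each report: the drift is noise unless it exceeds the margin of error.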
If you said that you're using a panel (the same respondents for several reports) and the familiarity dropped, then there is something else going on. The something else would lead me to check the methodology of your study, or how your callout is conducted. Sampling error isn't the only error involved in research. There is also measurement error (including how data are recorded, entered, and analyzed) and random error (things you can't control such as respondent fatigue or misunderstanding the scale).
I have a feeling that if you converted your data into Z-scores you wouldn't find any wild fluctuations unless they are really there.
My hunch is that you really don't have a major problem, but I can't really say unless I see your data and understand your methodology. However, my guess is that your familiarity percentages fall within the margin of error associated with your sample size.
Look at your methodology, don't interpret your numbers without considering sampling error, and convert your data to Z-scores. Now, on to repetition.
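For readers unfamiliar with the conversion, here is what Z-scores look like in Python with made-up weekly familiarity numbers. A Z-score expresses each week's figure as the number of standard deviations it sits from the song's own average, which makes real movement easy to separate from ordinary wobble.

```python
import statistics

# Hypothetical weekly familiarity percentages for one song (illustrative).
scores = [88, 91, 87, 90, 89, 93]

mean = statistics.mean(scores)
stdev = statistics.stdev(scores)

# Z-score: how many standard deviations each week sits from the mean.
z_scores = [(s - mean) / stdev for s in scores]

# Values inside roughly +/-2 are ordinary sampling wobble, not a trend.
print([round(z, 2) for z in z_scores])
```

In this toy series every week stays within about ±2 standard deviations, so nothing here would justify a programming change.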
Music repetition is probably one of the most perplexing problems in radio programming because, as you say, there is a fine line between playing familiar songs and drawing complaints about repetition.
Repetition isn't an easy concept to deal with because it means different things to different listeners. It is a situation created by individual radio stations, but also (as you say) one created because listeners often tune to several radio stations and hear the same songs played (which then converts to, "My favorite radio station plays the same songs too often").
Playing familiar music is part of radio programming. However, you must look at criticisms about repetition very carefully. All too often, a research study is conducted that simply asks something like, "Which radio station, if any, repeats songs too often?" The radio stations that get named a lot are criticized. But that's probably not the best approach to take. What's important is whether listeners change to another radio station (or to off) if you repeat your songs too often. And that question usually isn't asked. The question should be something like, "Which radio station, if any, do you listen to less because it repeats songs too often?"
That's the important question. I have found that many people complain about repetition, but they don't change to another radio station. You can't simply take the complaint as a directive to change your playlist. You must find out what your listeners mean if they complain about repetition. You also must find out who is complaining. Does the complaint come only from high-TSL listeners or from all types of listeners? What ages are these complainers? And so on.
In other words, repetition may or may not be a problem. You can't merely take "they repeat their songs too often" as a complete negative. You must find out.
Repetition - 2
I have always believed that you want to limit the number of songs in a category, but in the market I'm in, an unrated small town of about 35,000, we seem to have a higher TSL than what you'd see in a larger market. So, my question is: If I increased the number of songs in a category from 100 to 135, it would increase the rotation from 29 to 53 hours. Would that be too big of a jump? - Anonymous
Anon: My guess is that if I suggested asking your listeners, you would say your radio station can't afford research. (By the way, "larger" market radio station managers say the same thing.) OK, so let's see what we can do here.
First, you say that you're in a town of 35,000 and, "we seem to have a higher TSL than what you'd see in a larger market." I don't know if that's true since I don't know where you are and I don't know your TSL, so I can't address this point in any detail.
I think we need to look at your question from a different perspective. Let's assume that you're a "civilian" in your market (you don't work for a radio station) and you listen to your favorite radio station every day (the favorite in this case is your radio station).
Every day you turn to your favorite radio station and listen for several hours, maybe even a few times a day for several hours (morning show, afternoon drive). How many times each week do you think you will hear the same song? Just pick a song as an example. In seven days, will you hear it 12 times? 7 times? 5 times? I'm not sure, but take a guess since you know your playlist.
As a civilian listener, do you think you would know if your favorite radio station added another 35 songs to a certain music category? If you didn't hear a song (the one you picked above) 12 or 7 or 5 times each week, would you be happy or sad about that? Would you look for another radio station that played songs more frequently, or would you stay with your favorite?
If your favorite did add another 35 songs, would you then perceive that they play a wider variety of music? Or would that mean that your favorite didn't play the songs you like as often as you would like to hear them?
I know I'm answering a question with another question, but if you look at your dilemma with the approach I described, you might come up with some type of answer. Keep in mind, though, that I don't like this approach. The only way to get the answer is to ask your listeners.
Why do radio people disregard this very important information (song repetition)? This is one thing that is killing our industry. Is ego more important than quality? - Anonymous
Anon: Any discussion about song repetition on radio is one that will never end and will have supporters on each side. Some say song repetition is what radio needs because people want to hear their favorites all the time. With this in mind, the playlists are cut way down. Others say that song repetition is not good because it pushes people away.
The complaints about repetition can be caused by a number of things: TSL, listening to several radio stations that play the same music, short playlists, and finally, songs that sound the same.
Without going into pages and pages here, I believe the song repetition phenomenon is not addressed for two reasons: (1) the problem is too complicated; and (2) many radio people say that radio stations with short playlists do better than those with long playlists.
Regardless of the reason for an unwillingness to talk about the subject, the fact is that I hear radio listeners say that they don't listen to radio as much as they would like to because they always hear the same songs. I can only report what listeners say, and that is what I hear.
What is the impact of playing requests? In your research, have you found that stations that emphasize playing requests perform differently in Arbitron? Is it broken down into doing well in certain Arbitron areas, like just TSL? Would a station do any better or worse if it didn't play requests? Do listeners even care? I should clarify... I don't mean do they care if their requests are played, but do they care whether a station offers the continued possibility of playing requests or not? - Anonymous
Anon: As you probably know, and I have addressed this a few times in this column, a radio station's Arbitron numbers are the culmination of the influence of dozens of variables, not just one or two. In order to determine if requests alone have an effect on a radio station's success in Arbitron, a highly controlled study would need to be conducted.
The study would have to look at two radio stations that are exactly alike in every way except that one station takes requests and the other station does not. That's the only way to do it, and that type of study is virtually impossible to conduct.
In the meantime, the best thing to do is listen to what the listeners say about requests. And this is what they say:
Requesting songs on a radio station is more important to younger listeners than to older listeners. "Younger" means people who are under 25 years old (somewhere around there). Young people are more likely to think that it's cool to call a radio station, talk to the jock, and request a song to be played. Older people really don't care much about this.
The people who are likely to request a song think itís a good idea for a radio station to ask its listeners what they want to hear.
I haven't seen any negatives in research for radio stations that take requests. But I also haven't seen that this practice alone is a reason to listen to a specific radio station.
Bottom line: A radio station that takes requests may not necessarily perform better in Arbitron, but there is no evidence to suggest that taking requests from listeners should not be part of the radio stationís operating philosophy.
Requests by Daypart
Hi Doc! In 2003, two of the radio stations I can receive here have flipped music formats. One of them did two public listener surveys whose results were also published on their web site. However, the resulting format change (as expected) didn't quite match the survey results. Most notably, they still don't play some Austrian titles that scored very highly and some Classic Rock classics such as "Born to be Wild," "Satisfaction," and "Stairway to Heaven."
Instead, they still rotate their top current songs every 4 hours, although the current songs didn't score highly in the survey at all. (But I see they are getting many requests for those outside of the survey.) Anyway, they also have segmented their program somewhat, playing some hours of slow love songs at night, and a 60s/70s/80s special on Friday and Saturday nights, also featuring classics and lesser known songs (and maybe those from the survey that I miss during their "normal" programming).
This leads me to the question: Are certain songs appropriate for certain times of day as a general understanding, or is that solely a decision of the PD or MD of the station? Specifically, if you look at listener requests asking that songs be played at a specific time of day, do the genres or types of songs listeners want to hear vary with the time of day, or are there no significant changes of preference? Thanks in advance and a Happy New Year to you! - Kurt from Austria
Kurt: Happy New Year to you too. It's nice to hear from you again. On to your question...
First, I assume that the radio station did a music test on its website, since you said it "also" published the results on the website. Assuming that's true, the folks at that radio station need to realize that such a test is no way to get information to program a radio station. Why? Because there is no way to know who answered the website survey. If they did make changes to the radio station based on the website results, then they made a mistake.
Second... you ask about dayparted songs. Yes, listeners do like to hear certain types of songs at different times of the day, but the problem is that almost everyone's preference is different from everyone else's. In other words, you may want to hear "Born to be Wild" in the morning, but someone else may think that the song is too "hard" for the morning.
Dayparting songs is a good idea, but it's virtually impossible to find a consensus among a radio station's audience. There are just too many differences. In addition, it's difficult enough to collect listeners' ratings of songs and, in my opinion, trying to play songs by daypart goes beyond the Occam's Razor approach (the simplest approach is the best).
Do you understand my point? Programming a radio station is a complex process because it involves trying to satisfy a wide variety of listeners' likes and dislikes. The folks running a radio station have enough problems already trying to find out what their listeners want to hear. With a music radio station, if you add another layer of information related to "songs by daypart," then I think it takes the programming process to a place where it doesn't belong.
That doesn't mean that I don't think it's a good idea. I think songs by daypart are fine. However, I don't think it's a good idea to try to get the information from listeners because there are just too many differences. I know this because I have tested this idea in the past. I asked respondents to identify when a song should be played during the day. There were no conclusive results for any song, and the tests were a waste of time. If there is a desire to play songs by daypart, I think an experienced PD or MD should be able to make these decisions.
One additional point that may show the difficulty of asking listeners... What would you do with a song where 50% of the listeners say it should be played in Morning Drive and 50% say it should be played only at Night? The purpose of research is to find answers for questions. The purpose of research is not to create more confusion.
All Content © 2015 - Wimmer Research All Rights Reserved