Phantom Cume

Would you explain phantom cume? - Anonymous


Anon:  Phantom cume is the portion of listeners (usually expressed as a percentage) who listen to a radio station but fail to name it in an unaided listening question or in their Arbitron diaries.


Phantom cume can’t be computed for Arbitron ratings because the sample is unknown.  However, in perceptual research, computing phantom cume takes four steps:

  1. Determine the unaided cume (the number of respondents who name the station without prompting).

  2. Determine the aided cume (the number of respondents who say “yes” when asked about the station by name).

  3. Subtract the unaided cume from the aided cume to get the additional listeners.

  4. Divide the additional listeners by the aided cume.  This is the phantom cume percentage.

For example, assume that 30 respondents name WAAA in an unaided listening question (“During a typical week, which radio stations do you usually listen to?”).  In an aided question (“During a typical week, do you usually listen to WAAA?”), assume that 50 respondents say “yes.”  WAAA picked up 20 listeners in the aided question that it didn’t get in the unaided question, and its phantom cume is 40% (20/50 = .40).
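To make the arithmetic concrete, here is a minimal sketch of the calculation in Python.  The function name and the numbers are only illustrative (they simply mirror the hypothetical WAAA example above); this is not part of any ratings software.

    # Minimal sketch of the phantom cume calculation described above.
    def phantom_cume(unaided_cume, aided_cume):
        """Return phantom cume as a percentage.

        unaided_cume: respondents who named the station without prompting
        aided_cume:   respondents who said "yes" when asked about the station by name
        """
        additional_listeners = aided_cume - unaided_cume        # step 3
        return 100.0 * additional_listeners / aided_cume        # step 4

    print(phantom_cume(unaided_cume=30, aided_cume=50))  # 40.0, matching the WAAA example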


Phantom cume comes down to “top of mind” awareness, and when it comes to research and Arbitron ratings, it means lost credit for listening: earned credit that isn’t counted.


Phantom cume can never be eliminated, but it can be reduced by call letter/moniker mentions on the air, and external advertising and promotion.  Radio stations that don’t pay attention to their phantom cume usually have percentages in the range of 20-40%.  That’s 20-40% lost listening credit.  Radio stations that pay attention to phantom cume are usually in the single digit percentages.  (Anything under 10% is great.)


Phantom cume will no longer be a concern in Arbitron when Portable People Meters replace diaries.  However, it will still be a factor in perceptual research.


One final thing…the concept underlying phantom cume (top of mind awareness) is not unique to radio.  It exists in almost everything, but another example in research may help.  Assume that you receive a call at home from a research company conducting a study about chocolate candy bars.  You’re asked to name which candy bars, if any, you ate in the past week.  You name them, and later in the survey, you’re read a list of several candy bars and asked if you ate any of them during the past week.  One of the candy bars is Snickers.  You didn’t mention that in the unaided question, but said, “yes” in the aided question.  There ya go—phantom candy bar eating.


In the past 20 years or so, I have found that most radio broadcasters don’t know much (if anything) about phantom cume.  But this isn’t surprising, since managers in all types of businesses don’t understand the concept.  Most people think that “everyone knows about us,” and nothing could be further from the truth.


You can also encounter phantom relationships, phantom restaurant attendance, phantom car maintenance, and so on.

Pictures/Photos - Searching on the Internet

Doc:  I need help in finding photographs on the Internet.  When Rodney Dangerfield died, I went to the Internet to find pictures of him.  Although I found some, the search I did didn’t seem very good at finding things.  Is there an easy way to find photographs on the Internet? - Ray


Ray:  The best way I know to search for pictures is to use the “Images” option on Google.  (Click on “Images” when you get to Google.)


For example, if you enter “Rodney Dangerfield” (use the quote marks) in the search area, this is what you will find.  As with all searches, there will be a few irrelevant references, but most will be OK.


Once you see the ability of this search option, you’ll probably search for all sorts of things—use your imagination to find photos of almost any person, place, or thing.  Here are a few examples (I don’t guarantee that all of the photos found will be suitable for young people, so I apologize in advance):


Vulpes Fulva

Radio Stations

Britney Spears

George Bush

John Kerry

Wilson Pickett

Mickey Mouse

Jimi Hendrix

Ouagadougou, the capital of Burkina Faso


Get the idea?  You can find photos for just about anything—and probably many pictures you don’t want to see.

Pierre (South Dakota)

What is the only U.S. state capital whose name doesn’t share any letters of the alphabet with the name of its state? – JW

JW: You must watch Jeopardy!  Your question was used as the Final Jeopardy! question on the show several years ago. The answer is Pierre, South Dakota (pronounced peer).

Pinch (as in Ocean's Eleven)

In the movie “Ocean’s Eleven,” they used a machine called a “pinch” to black out Las Vegas in order to rob a casino bank vault.  Couldn’t the U.S. military use this device in Iraq to knock out the radio and TV signals? - Anonymous


Anon:  Well, the pinch may have worked in the movie, but the problem is that the device was a theatrical prop.  The pinch, as shown in Ocean’s Eleven, was the product of someone’s creative imagination, not a real machine.


For more information, read this article on ABC News: Pinch Me.

Pirate Radio

What do you feel the chances are that a pirate radio station in the Midwest can make it on the air 24/7 without getting shut down by the FCC?  How long do you think they could keep it going, assuming they abide by all regulations (well, except for being licensed, of course)? - Fred


Fred:  First, there are no regulations for pirate radio stations.  They aren’t legal, so there aren’t any.


Second, I don’t know what the chances are for success.  If you check Google, you’ll find many articles about the FCC shutting down pirate radio stations.  However…


Third, I think the chances of success relate to: (1) How much the pirate station interferes with licensed radio stations; and (2) How many complaints the FCC receives about the programming on the station.  In other words, if you don’t interfere with other radio stations in the area and aren’t broadcasting some type of highly objectionable programming, a pirate station might be able to survive for a while.  (I’m not recommending that you or anyone else get involved in pirate radio.)

Pitching Up Music

Hey, Doc, love the column! We are in a dilemma. Our competition pitches up its music. We’ve asked the ladies in the office (who are in our demo) if they notice a difference between us (not pitched) and the other guys. They say they cannot, but everyone in programming can. Are listeners attuned to songs sounding fast on one station and slow on another? We’re seriously considering an increase in our speed. We wouldn’t go quite as fast as Station X does, but it would be enough to give the records a "kick." To make a long story short, is there any sort of history as to why stations started speeding up their music? In general, do audiences notice? What would you recommend for us? We’d love to do a study, but the GM won’t spend the cash. Thanks! - Anonymous


Anon: Hey Anon, thanks for the comment about the column!


I’m not sure exactly when music pitching started, but I recall that it was done for two reasons: (1) to cram more stuff into an hour; and (2) because a PD somewhere thought it was cool. ("It seems like if we speed up the music, listeners will like it more.") Neither of these reasons is valid in my book.
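As a side note, a little arithmetic shows how small the "cram in more stuff" gain actually is.  The sketch below is only an illustration; the song length, the 3% figure (the amount mentioned in the next question), and the songs-per-hour count are all assumptions.

    # Rough arithmetic for the "cram more stuff into an hour" rationale.
    song_seconds = 210        # a typical 3:30 song (assumed)
    pitch_increase = 0.03     # 3% faster playback (assumed)
    songs_per_hour = 12       # assumed rotation

    new_length = song_seconds / (1 + pitch_increase)     # about 203.9 seconds
    saved_per_song = song_seconds - new_length           # about 6.1 seconds
    saved_per_hour = saved_per_song * songs_per_hour     # about 73 seconds

    print(f"Saved per song: {saved_per_song:.1f} s; per hour: {saved_per_hour:.0f} s")

In other words, pitching up an entire hour of music at 3% buys roughly one extra minute per hour.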


You say that your GM won’t give you the money for a research study to test the idea with your listeners. However, your own small test (not scientific, by the way) should give you an indication of what your listeners (who are non-programming people) might say. The ladies in your office do not notice a difference between your radio station and the competitor, and I can tell you from testing this in the past that listeners don’t either. Nor do they care. Nor do they think pitching makes a radio station sound better or more entertaining. Nor do they attribute any importance to the procedure. The importance is only in the minds of the programming people.


In other words, without any support for the idea, programming people developed music pitching. Music pitching was not developed because of listener demand.


Finally, music pitching tampers with the artists’ work. If he/she/they wanted their music to sound as if it were run on an anxiety-filled CD player, they would record it that way. My recommendation? Leave your music alone. In fact, use it to your advantage and do a promo that says something like, "We don’t mess with your favorite songs."

Pitching Up Music - 2

Your column is super—keep up the good work. I don't want to provide too many specifics about this question, as my competitors probably read your excellent work. Do you have any research data on ‘pitching up’ songs? As you are probably aware, professional CD players allow the user to increase the speed or ‘pitch’ of music. My station just recently decided to take the plunge and we are now pitching. Is this a good idea? Does the audience notice? One of our competitors pitches at 3% and it totally distorts the music. We don't think it's too noticeable and the sales staff couldn't tell at all. Thanks for your help. - Anonymous


Anon: Thanks for the comments about the column. The information I have seen suggests that most listeners aren’t aware of pitching—even at 3%. You notice it because you’re in the business and you listen to the radio more closely and on good equipment. The average listener often has background distractions and may be listening on a cheap radio.


As usual, however, I suggest that you ask your listeners.

Plagiarism Problems

Dr. Wimmer:  I'm an Associate Professor at a university in the United States and teach three undergraduate mass media courses.  I'm writing to you to alert you to a problem that you may wish to pass along to other college professors (I know your "Research Doctor" column is required reading at several colleges and universities, including in my own classes.)


Here is the problem:  In all of my courses, I require my students to write some type of class paper as part of the requirements for the class.  In the past month or so, I became curious about some of the papers submitted by a few students.  The topics and quality of writing just didn't seem to "fit" their educational level, so I did a little investigation.


I know you have "The Research Doctor Archive," so I did a search for a few of the topics discussed in class papers submitted to me.  For example, in one situation I searched for "Research Doctor" Heisenberg Indeterminacy and found that many of the comments in my student's paper were taken directly from your answer in your Archive.  I also found the same thing with a few other students' papers and, unfortunately, have had to take steps to address the problem.  (Incidentally, I'd like to commend you for your work on The Research Doctor Archive—it's virtually a mass media encyclopedia.)


As I said, I hope you will post this on your column so it may help other professors.  Here is my suggestion to anyone who teaches mass media courses and requires students to submit written papers: If you think a student's information may not be original, start a search with this approach—put "Research Doctor" in quotes as shown, followed by the topic you wish to search, such as "Research Doctor" radio ratings.  (I am amazed that I can include almost any mass media word or phrase after "Research Doctor" and find something in your Archive.)


Thanks for your time. - Anonymous


Doctor:  As you requested, I omitted your name and university.  I agree that the information isn't necessary in reference to your comments.  I have a few comments of my own:

  1. I'm glad you enjoy the column and the Archive.  Thanks.

  2. It's interesting to me that your comments arrived on the same day as another person's question about The Research Doctor Archive, in which I included a comment about using copyrighted materials.

  3. I have taught college for many years, and I'm always saddened when I encounter a student who feels that it is necessary to cheat to get by.

  4. I am also saddened that students think professors aren't smart enough to search the Internet (and other sources) to find information.

  5. I'm going to discuss this problem with Joel Denver and see if we can include a statement at the top of the column about using Research Doctor materials for personal use and the need to provide proper credit.

In the meantime, I'll say this: The information contained in The Research Doctor column is copyrighted by Joel Denver.  The information contained in The Research Doctor Archive is copyrighted by Roger Wimmer.  This means there are limitations regarding the use of the materials.


If you plan to use any of the materials from the Research Doctor column or The Research Doctor Archive, I suggest that you read the information about "fair use" and copyrighted materials from the U.S. Copyright Office — click here.  However, I'll emphasize two points from that summary:

  1. The distinction between "fair use" and infringement may be unclear and not easily defined. There is no specific number of words, lines, or notes that may safely be taken without permission. Acknowledging the source of the copyrighted material does not substitute for obtaining permission.

  2. The safest course is always to get permission from the copyright owner before using copyrighted material.  The Copyright Office cannot give this permission.

Plastic Water Bottles

I heard a story that reusing plastic water bottles may cause cancer.  Is that true? - James


James:  Not true.  This is an example of how the media pick up on false information and spread it all over the place.  There is a great explanation of the plastic water bottle “crisis”…click here to read it: One Word – “Plastics”.

Playing New Music

I pose this to your radio peeps and record label peeps based on an earlier question:


There have been stations in our city getting C&Ds (Cease and Desist Orders) from labels for breaking music from Nirvana, the White Stripes, etc.  One of our record label reps told our Music Director that another station in town (an Alternative format) is arbitrarily picking songs off artists’ CDs to play in regular rotation.  For example, a No Doubt song that hasn’t been worked to radio yet.


While this is risky because it puts untested music on the air, it is kind of cool from the perspective of breaking music no one else is playing.  Since I probably know what radio people think of this, what do your record rep people think of this practice, and what can they do about it? - Anonymous


Anon:  While radio people may think that playing music from new CDs may be cool, the record people have agreements, contracts, and plans for the release of their music, and don’t like the idea of radio stations jumping the gun (so to speak).  Their only recourse is to file cease and desist orders.  (That’s the public reaction.  I do think that record companies expect some radio stations to break the rules and hope that the attention boosts sales of the new CD.  But that’s just a guess.)

Play It, Say It - A Research Perspective

Doc:  I'm new to radio, so please bear with me if I ask something that is too simple or something that I should already know.  In the June 2 "Net News" posting here, there was a story about Play It, Say It, where Dan Mason, President/CEO of CBS Radio, explained a new company policy for its radio stations: they would now start to announce the artists and titles of the songs the stations play.  My question is:  Is there research that supports this decision, or is it just something that CBS developed?  Thanks in advance. - RK

RK:  First, you'll notice that I edited your question a little.  I don't think I changed the meaning of your question, but let me know if I did.  Second, you should never hesitate to ask a question about something you don't understand or something where you just need more information.  That's how we all learn.  OK, on to your question . . .


You would like a research perspective on the "Play It, Say It" campaign, eh?  No problem, but you better get a 6-pack of your favorite drink because this isn't going to be a short answer.


When I conducted my first professional music radio research study in 1976, I included a few open-ended questions in the questionnaire.  One question was, "What do you like MOST about the radio station you choose to listen to most often?"  A second question was, "What do you like LEAST about the radio station you choose to listen to most often?"  A third question was for respondents who no longer listened to the client radio station or who never listened to the radio station, and asked, "Why [don't you OR do you no longer] listen to Station WAAA?"


There was, as you might expect, a list of answers from the respondents, but there was a common thread among the three questions: they all included a variation of "tells/says the artists and titles of the songs they play."  Like most?  "They always tell the artists and titles of songs they play."  Like least?  "They never tell the artists and titles of songs they play."  No longer listen/Don't listen?  "They never tell the artists and titles of songs they play."  Hmmm.


Now, it doesn't take an Einstein-type person to figure out that telling the artists and titles of songs a radio station plays had some significance among music radio listeners, so I included the same three questions in the next few studies I conducted.  Guess what?  The artists/titles response showed up again.


At that time, I decided to develop a list of Programming Elements where respondents would rate the importance of each element on a 1-10 scale, where the higher the number, the more important the element was to them when they listened to the radio.  Over a few years, I refined the list to 10-15 basic Elements and always included the importance of "Tells the artists and titles of songs they play" for all music radio research studies.  And here is what I have learned during the past 35 years about the importance of the artists/titles Element . . .
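To show what that ranking looks like mechanically, here is a minimal sketch in Python.  The element names and the ratings are invented for illustration only; they are not data from the studies described here.

    # Rank Programming Elements by mean importance (1-10 scale).
    from statistics import mean

    ratings = {  # element -> hypothetical importance ratings from respondents
        "Tells artists and titles of songs": [9, 10, 8, 9, 10],
        "Plays the most music":              [8, 9, 9, 7, 8],
        "Funny morning show":                [6, 7, 5, 8, 6],
        "Lots of contests":                  [3, 4, 2, 5, 3],
    }

    ranked = sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True)
    for rank, (element, scores) in enumerate(ranked, start=1):
        print(f"{rank}. {element}: {mean(scores):.1f}")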


Without exception . . . I'll say that again . . . without exception, the "Telling artists/titles" Programming Element is always in the Top 5 Elements rated by listeners, and is usually in the Top 3 Elements.  This is true for males, females, all age cells, all formats, all areas of the country, and all market sizes.  This has been documented in every music radio research study I have conducted that tested Programming Elements since 1976, and it was true in the study I just completed about one month ago.  By the way, I have also found the same information in the few thousand focus groups I have conducted for music radio stations.


Now, I'm not sure how many research studies and focus groups I have conducted (or reviewed) since 1976 that included testing the Importance of Programming Elements, but I know it's a few thousand (combined).  I think that after that many validations of the same information I can say without any hesitation (or exception) that "Telling artists and titles" is one of the most important Programming Elements for a music radio station.


OK, so what?  Well, the "so what" is that if this information has been available (and repeatedly validated) since at least 1976 (and I know I'm not the only researcher who has found this information), then why would any music radio station NOT tell the artists and titles of songs they play?  Good question.


As I have been saying since 1976, radio is basically a simple business, not unlike any other consumer business, and success is usually guaranteed when decision makers follow a three-step operating philosophy:  (1) Find out what the listeners want; (2) Give it to them; and (3) Tell them that you gave it to them.  That's it.  If decision makers find out what listeners (consumers) want and give it to them in a creative way, failure is not an option.


So why have so many radio stations (PDs, GMs, owners, consultants, etc.) not included "Telling artists and titles" as a primary Programming Element?  Generally speaking, I have heard two reasons: (1) The decision makers say something like, "Our listeners hear the same songs almost every day.  They KNOW the artists' names and song titles;" and (2) Telling the artists and titles is "too much" talk and doesn't "fit" with the radio station's programming philosophy of reducing or eliminating unwanted talk and/or clutter.  My answer to both reasons is: hogwash, and here is why.


"Our listeners hear the same songs almost every day.  They KNOW the artists' names and song titles."  Yes, that is true for some listeners.  But how many listeners at any given time have forgotten the artists and titles?  How many listeners tune to the radio station for the first time and don't know the music?  How many listeners tune in at the end of music set and wonder which songs just played?  Not one person on the planet can answer any of those questions.  If no one can answer those questions, then why guess?  That makes no sense.


In addition, why is it so terrible to tell the artists and titles?  Music is THE product on a music radio station, so why is it so terrible to tell the listeners information about the product?  Here is a good analogy . . . The TV game show Jeopardy has been on the air for most of the time since March 30, 1964 and has aired about 8,800 episodes.  At the beginning of EVERY episode since 1964, the host (now Alex Trebek) says the same basic rule, "Your answer must be in the form of a question."  If we use the artists/titles argument some PDs, GMs, or consultants (etc.) use, we would say that repeating the rule of Jeopardy at the beginning of the show is unnecessary because EVERYONE knows the rule.  Well, the program doesn't skip the rule because the people who run the show NEVER know how many people will tune in for the first time.  So why guess and only give the rule occasionally?


And the same is true for music radio stations.  If music radio station listeners have been rating "telling artists and titles" at the top of the Programming Element Importance list for at least 35 years, then why would any PD/GM/consultant (etc.) NOT provide this information to the listeners?  Makes no sense at all.  None.  Zero.  Nada.  Zilch.


A radio station's content should not be designed to satisfy the likes of PDs, GMs, consultants, jocks, talk show hosts, or anyone else at the radio station.  A radio station's content should be designed to meet the likes of the listeners.  Case closed.


Telling the artists and titles is "too much" talk and doesn't "fit" with the radio station's programming philosophy of reducing or eliminating unwanted talk and/or clutter.  Since 1976, I have never heard one radio station listener (male, female, all age cells, all formats, all areas of the country, and all market sizes) say that telling artists and titles is "too much talk" or "clutter."  Not one.  EVER!  The "too much talk" and/or "clutter" is the perception of the decision makers, not the listeners.  If listeners did consider artist and title information as "too much talk" and/or "clutter," they wouldn't rate "telling artists and titles of songs" so highly in the Programming Elements Ratings.


The decision to incorporate "telling artists and titles" isn't a difficult one, especially with 35+ years of research to support the decision.


In summary, "Telling artists names and song titles" has been rated in the Top 5 (usually the Top 3) in every music radio station research study I have conducted or reviewed since 1976.  The listeners . . . ALL listeners . . . want the information.  Listeners do not consider telling artists and titles as too much talk and/or clutter.  With information that has been consistent for at least 35 years, why would any decision maker NOT provide artists and titles of songs?  I can't answer that question.


So, Dan Mason at CBS said,  "Effective immediately, CBS RADIO Contemporary, Rock, Urban and Country stations will increase the integration of title and artist information on new music releases in an effort to personalize, and drive sales of the product."  That sounds like a good idea and has been a good idea since at least 1976.  There may be people who are older than I am who would say that the idea has been a good one before that.


Final Note:  What's the best way to present artist and title information?  This is the only area where there are differences among listeners, and it is especially sensitive to age and format.  While some listeners like artist/title after each song, others like a backsell when all artists and titles in a music set are given at one time, and others like it when a long music set is stopped in the middle to give the first few songs of the set followed by a backsell giving the last few songs in the set.  In most cases, listeners don't like the presell approach.


Because there is no universally liked approach to providing artist and title information, each radio station should find out which approach their listeners prefer.  This is extremely important.

Playlist Controversy

Uh, oh. I believe I have opened a colossal can of worms; I have stirred a sleeping giant; I have incurred the ire of radio peeps. How? I asked for information to support the belief that a short playlist (the cause) increases Arbitron numbers (the effect).

I have found that it is heresy to question a long-held "principle" of radio programming ("String up the varmint!"). I have pushed over a sacred radio cow—probably a sleeping one. Blasphemy! I know that it is probably best to allow myths and urban legends to continue without question, but I can’t. I just can’t. Allow me to fill you in . . .

In the past several days, I answered questions relating to why radio stations have such short playlists. Since my answers appeared, I have received 37 responses that claim to document the playlist-Arbitron relationship. The only problem is that none of the responses included proof, just opinions. But opinions don’t cut the mustard when we’re discussing a significant decision about how to program a radio station. Opinions are fine, but that’s all they are.

If we were discussing a decision in the hard sciences, how many research scientists do you think would select an opinion over conclusive evidence to prove that something does or does not exist? Or that something does or does not work? Let’s say that we’re discussing a cure for AIDS (or any other disease). Do you think the medical community would accept a new drug on the basis of opinion? "Yes, I have administered this drug to several patients and there seems to be a decrease in the severity of AIDS symptoms." "Rock on, we have a cure!"

What would happen is that someone from the crowd would yell, "Show us some proof! Show us the results of a controlled experiment that rules out all other confounding and intervening variables. We want proof!"

"Uh . . . don’t have any. You’ll just have to take my word for it. I have been in the business for a long time, I have worked in many markets, and the medical people in New York think it’s good." Can you guess how long a medical researcher would last in the field by offering only opinions about the success of a drug or therapy? About 2 seconds.

But I am expected to do just that with the short playlist relationship. (I know that we’re not dealing with life and death here, but the principle of proof is the same.) I am asked to accept the idea that a short playlist is 100% positively correlated to higher Arbitron numbers. Sorry, not gonna do it . . . wouldn’t be prudent. All I ask for is, "Where’s the beef? Where is the hard evidence that verifies the link between playlist length and Arbitron?"

If there is a study somewhere, bring it on. If the proof is there, I will be happy to say, "I’m wrong." I’ll even write it on the blackboard 100 times. However, let me back up just a bit so that you know where I’m coming from. Go get a beer, an iced tea, or something.

As human beings, we learn things in four ways: Authority, Tenacity, Intuition, and Science. Let me explain:

Authority. We learn from people whom we consider to be leaders. These people include teachers, parents, bosses, mentors, or anyone else we perceive as valid, credible, believable, and trustworthy. For example, we might believe that a short playlist equates to higher Arbitron numbers because a consultant said so, or a well-known PD said so, or anyone else you admire and trust said so.

Tenacity. We learn things because "that’s the way it has always been." We believe that a short playlist means higher ratings because the relationship has always been accepted. It is true because it has always been true. Tenacity doesn’t mean it’s correct. For example, consider this . . . for decades it had always been true that ulcers are caused by excess acid or stress or a combination of the two. However, we now know this "fact" is not true. A scientist in Australia didn’t "buy" the fact and proved that bacteria cause ulcers and they are cured with antibiotics.

Intuition. We learn things because something "seems to be" right or wrong. We say that people like to hear their favorite songs over and over again. Therefore, it seems like they could hear the favorites more often with a short playlist.

Science. Learning from the scientific method involves testing, not merely accepting things. Scientific knowledge develops from systematic, controlled experiments that are designed to rule out all competing intervening or confounding variables. The scientific method verifies or refutes a cause-and-effect relationship—it doesn’t accept opinions. In addition, the scientific method is self-correcting, encourages independent verification, and always accepts the idea that what is accepted as true now may be found to be false later on when additional experiments are conducted.

We usually don’t conduct tests with information we learn from authority, tenacity, or intuition. But sometimes we do. In addition, we all have been testing things since we were born. Do you remember the first time an adult (usually a parent or guardian) said, "Don’t touch the stove . . . it’s hot and you will burn yourself"? How long after you heard that information did you touch the stove to prove that you would get burned? My guess is a few minutes or maybe you waited until the next time the stove was turned on.

We test things every day and will do so until we take our last breath. What is the best way to get to the office? How fast should I walk across the street so a car doesn’t hit me? Where is the best place to eat dinner? How many beers can I drink before I get sick? Well, forget that one.

Anyway, so why is it that we accept some authority, tenacity, or intuition as the truth, but not everything? Why did we test a hot stove, but not test if a short playlist equates to higher Arbitron ratings? As Mr. Spock would say, "That don’t be right." (Or something close to that.)

I have been trained in the scientific method. When it comes to significant decisions, I don’t accept opinions or "it seems like" statements. I simply ask—and it’s just a simple request—for proof. In reference to the playlist/Arbitron relationship, the only thing I ask is for someone to show me the results of a carefully controlled scientific experiment that rules out all other intervening or confounding variables. I want a scientific test of the hypothesis, not opinions.

I’m not trying to be a hard-head, a butt-head, a Deadhead or a Phishhead. I’m just asking for data.

And you should expect the same thing. You should expect the same from anyone you deal with during the course of your job. Folk tales, urban legends, unfounded opinions or interpretations, and other nonsense mumbo-jumbo should not be the type of information you or your colleagues use to make decisions about a multi-million dollar property.

My basis for this long discussion, quite frankly, is that I’m tired of all the myths that run rampant in the industry—myths that are passed on from one person to another as the Rites of Passage into radio management. Radio (for most people) is not a damn hobby and we shouldn’t accept folk tales and cocktail party chitchat to replace hard, scientific information. Scientists in the hard sciences would fall over laughing with some of the information that radio people consider sacred. Where is the proof? Show me the data!

You know, if you send me a Harley-Davidson gift certificate, your Arbitron numbers will go up. That seems logical, I heard it from a friend I trust, and it has always worked in the past.

Playlist Length

First, I love your column. Thanks for taking the time to share your knowledge on not only research and radio, but on some of the zany questions that are asked too. I've seen a subject come up a few times here on the length of a playlist and how it does (or doesn't) affect a station's ratings. As a PD who has had to defend the "shorter playlist" to his air staff and, yes, even to management, I just have to say I think the semantics of the situation are keeping the real subject from being discussed. As a researcher, you know that it's not the LENGTH of the playlist, but the QUALITY. Playing fewer songs for higher ratings only works if they are the right songs (as determined by an auditorium test with a proper screener). It just so happens that most of the time utilizing these data results in a shorter playlist. A lot of DJs and other radio people who don't understand the importance of research confuse the shorter, more repetitious playlist with the success of the radio station. The reason the playlist becomes shorter, of course, is easy—all of the "junk" (I appreciate that you run a G-rated column) has been trimmed, leaving the station with a streamlined, cream-of-the-crop music library. This in turn increases the audience time spent listening, which hopefully translates into higher Arbitron ratings. - Jim


Jim: Thanks for the comments about the column. I agree with everything you say because you take the correct approach to developing your playlist—you ask the listeners to rate your music. My complaint relates to the approach where management says something like, "Your playlist will be 300 songs—that’s it." However, what happens if your music tests uncover 336 good songs? Or 371? The management "rule" stifles the product.


My complaint is setting an arbitrary playlist length based on faulty logic—that the shorter playlist will guarantee higher Arbitron numbers. The cause and effect are not established and the rule makes no sense. I have no argument if a playlist is short (whatever that means) if the playlist is based on a systematic scientific approach. I do have an argument if a playlist is short because of some pre-established random number management sets.


If you conduct scientifically sound music tests and find that you only have 97 good songs, then that’s what you should play—although if that’s all you have, the format you’re in may be too narrow. For example, I would guess that a music test of Christmas songs would produce about 25 "good" songs. This would be a good indication that Christmas songs might not be the right format for your radio station.


The number of songs in a playlist should "fall where it may," not be based on unsubstantiated "facts" or personal feelings.

Playlist—Too Many Songs?

Hi Doc:  We currently have more than 1,000 songs in rotation (AAA format).  Do you believe this is too many?  Do you know of any solid literature on the subject of music rotation, number of categories, etc.?  Thanks so much. - Greg


Greg:  Hi to you too.  I have been involved in radio research for about 30 years and have always found the discussions about the number of songs in a radio station's playlist to be somewhat amusing.  I'll explain that shortly.  On to your question . . .


I haven't seen any "solid literature" on the topic of how many songs to include in a playlist.  (I'm assuming that "solid literature" means information based on research that follows the tenets (rules) of scientific research.)  There are some articles available that discuss the question, but most appear to be based on hearsay, urban legend, myths, "it seems like" comments, and personal opinion (none of which hold water).


The problem with developing a reliable and valid formula to determine the best number of songs in a playlist is that there are so many variables that need to be considered.  I'm sure it would be possible to develop a "playlist formula," but I don't know anyone, including me, who wants to spend the time to do it.  (Although I'm sure someone might be willing to take a stab at it if someone, or some company, would provide the funds to pay for the time it would take to do so.)


However, without a specific formula to determine the number of songs to include, there are a few variables that can help compute the best possible guess.  Some of the things I would like to know include, but are not limited to . . .

  1. Your radio station's cume.

  2. Your radio station's TSL (Time Spent Listening).

  3. Your radio station's turnover.

  4. Your radio station's Cume-to-Fan Conversion.

  5. The results of your music test to see how all of the songs test.  How many are below average?  What are the "unfamiliarity" percentages?  What is the range of scores for all the songs (with specific attention paid to the standard deviation for the entire test)?

  6. What do the listeners say about the large playlist?

  7. How many listeners complain that you play too many songs they don't like?

  8. How many listeners complain that they don't hear their favorite songs often enough?

Now, in the first paragraph, I said that I found the number of songs in a playlist argument somewhat amusing.  Let me explain.  If you look carefully at the items in the list of things I'd like to know, you should see an underlying theme, or philosophy.  If you don't see it, this is what it is: Quality, not quantity.  In other words, I think the correct way to develop a playlist is to include only good songs that fit the radio station's format, not a specific number of songs.  If a PD (or whoever) identifies 336, 521, or 1,044 good songs that fit the format, then those songs should define the playlist.  Song quality, not quantity, should be the sole defining criterion for playlist development.


Your listeners will tell you how many songs to include in your playlist—why play anything they don't want to hear, or play things you (or the PD) think they want to hear?  That makes no sense.  To get an indication of how things are going, look at your cume, TSL, cume-to-fan conversion, and turnover.  If the cume is good, but TSL is low, cume-to-fan conversion is low, and turnover is high, then you may have a problem, which should be investigated further before any decisions are made.  (There may be other reasons for a low TSL, a low cume-to-fan conversion, and high turnover, so don't automatically assume that these items indicate a playlist that has too many songs.)
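For readers who want to see how those numbers fit together, here is a minimal sketch using the standard ratings definitions (turnover is cume divided by AQH persons; TSL is derived from AQH, cume, and the number of quarter-hours in the daypart).  The cume and AQH figures are made up, and cume-to-fan conversion is left out because it is a custom measure rather than a standard book number.

    # Diagnostic arithmetic for a hypothetical weekly daypart (Mon-Sun, 6a-12mid).
    cume_persons = 20_000     # different people reached during the week (assumed)
    aqh_persons = 1_000       # average persons listening in a quarter-hour (assumed)
    quarter_hours = 504       # 18 hours/day * 7 days * 4 quarter-hours

    turnover = cume_persons / aqh_persons                          # 20.0
    tsl_hours = (aqh_persons * quarter_hours) / cume_persons / 4   # 6.3 hours per week

    print(f"Turnover: {turnover:.1f}   TSL: {tsl_hours:.1f} hours/week")
    # A healthy cume paired with low TSL and high turnover is the pattern
    # described above as worth investigating further.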


The short playlist (whatever "short" means) dates back to the Top 40 approach of the 1950s and 1960s when radio programmers thought that playing a short list of popular songs meant that listeners could hear their favorite songs frequently.  While many of these radio stations were successful, there is no proof that the short playlist was the cause for the radio station's success (the effect).  For example, I can remember when the Top 40 stations first emerged in the Chicago area and they were wildly successful.  Was it due to a short playlist?  In my opinion, the answer is, "no."  Instead, I can remember that the new Top 40 stations—for the first time—played music for younger people (people under the age of 120), and differed markedly from all the other radio stations playing Big Band, Frank Sinatra, Swing, and Polish/Czech/German marching music that emphasized the art of tuba playing.


What happened was that the short playlist was assumed to be the cause of the Top 40 radio stations' success, and this assumption has carried forward to today—without any legitimate scientific research to support the idea/myth/urban legend.


In summary, and to repeat . . . the correct number of songs to include in a playlist is the number of good songs that fit the radio station's format (as defined by the radio station's management).  If someone tells you, for example, that you should only play 350 songs, ask them this—"Why?"  If the person says something like, "Radio stations with short playlists perform better in Arbitron," then say, "Prove it."  (They won't be able to.)


Conclusion:  Success in radio is a 3-step process: (1) Find out what the listeners want; (2) Give it to them; and (3) Tell them that you gave it to them.  If the songs in your 1,000+ playlist are good songs and fit your AAA format, and your cume, TSL, cume-to-fan conversion and turnover are good, then your playlist is on target.  If there are problems with one of the important variables, then it's time to take a close look at the playlist.


Another thing . . . Radio stations that use slogans such as "We have the longest playlist in town" or "We play the widest variety of music," but don't back up the slogan by playing good music, are essentially lying to the listeners, and listeners will identify the lie very quickly.  A "long" playlist often includes a bunch of garbage songs, and these songs merely force listeners to hit the radio buttons to go to another radio station.  Quality, not quantity.  Case closed.


And one final point:  If I were in the position to hire a PD, I would ask the person how many songs he/she plans to include in the radio station's playlist.  If the response was something like, "I think we should allow the listeners to determine the number of songs," I would offer the person a lifetime, no-cut contract.  (Off the soapbox.)




Roger D. Wimmer, Ph.D. - All Content ©2018 - Wimmer Research   All Rights Reserved