Oldies Playlist

What does research tell you about Oldies playlists? Is it definitive that listeners in all markets only want to hear a 300-song playlist? Are there differences between casual oldies fans and those who are more into oldies trivia and more open to a larger playlist? I can think of one oldies station that goes out of its way to carefully mix in some of the rare and scratchy cuts. I worked for an oldies station that wouldn't touch anything but the most highly tested 300. Interested in your comments. - Anonymous

Anon: From the research I have seen, the playlists for Oldies stations need to be longer than for other formats, but I'd like to address a few things.

  1. I have never heard about nor seen an objective, controlled, scientific study that proves that a playlist of 300 good songs (or any other number) is any better or worse than a playlist of any other number of good songs. The only thing that I am aware of is opinions, conjecture, and other "it seems like" statements. By the way, a ratings increase in Arbitron is not proof that a short playlist is better than a long playlist.

  2. I have seen a lot of research that shows that listeners tune away from radio stations because they get tired of hearing the same songs over and over again.

So, there are essentially two sides to this argument:


SIDE 1: Follow the Todd Storz/Gordon McLendon philosophy of playing a short list of the most popular songs. Notice that this doesn’t mean playing the entire list of popular songs. It means playing a "subset" of that list.

The idea here is that when people listen to radio, they want to hear only the hits—over and over again. I can’t even estimate the number of times I have heard people claim that they "turned a station around" because they took the playlist from 700-1000 songs to 300, or 250, or 200. The argument is that the limited playlist gives the listeners what they want to hear—the most popular songs. These people then point to the higher Arbitron ratings and say, "See . . . the shortened playlist did it!"

My problem is that I have never seen an objective, controlled, and independent scientific study that proved that. I have heard only opinions. And the opinion is usually tied to what some other radio station in some other market is doing. For example, "WXXX in Borneo plays 300 songs and they’re successful. They must know how to do things right. We did what they did and we went up in the ratings." (Have you ever noticed how many radio people think that the radio people in other markets always have the "right" answers?)

Anyway, a radio station’s ratings are the result of the interaction of many, many variables—not just the number of songs in the playlist. While a station’s ratings may go up after the playlist is reduced to, say, 300 songs, the shortened playlist may not be the sole cause of the ratings increase. The increase could have been influenced by a better overall presentation, a great marketing campaign, improved performance by the jocks, a terrible performance by the competitors in the market, or one of several dozen other variables (or some combination of them).


Or, it could be that many of the songs in the original playlist of, say, 800 songs were bad songs. It could be that cutting the playlist to 300 eliminated these bad songs. Would the effect have been the same if the playlist were cut down to 400, or 450, or 511? I don’t know. I have never seen that tested before. The usual decision is something like, "Test the library and pick the best 300-350 songs." Why that number? I don’t know. It falls into the "seems like" category.

In summary, the SIDE 1 approach is based on opinions. If there is scientific evidence somewhere that shows a direct cause-effect relationship between song list length (good songs) and Arbitron ratings, please let me know.


SIDE 2: This approach also subscribes to the philosophy of giving listeners what they want, but in particular, it takes into consideration the criticism by listeners who say that they get tired of listening to the same songs over and over again.

As you can see, both sides follow the idea of giving the listeners what they want. That’s good because it’s the correct approach to take. But there is a problem. It is true that the listeners want to hear the hits, but it is not true (based on research, not opinion) that all the listeners want to hear the same hits every day.

OK, let’s step back just a moment. In nearly every perceptual study I have ever conducted or seen, a major complaint by listeners about radio stations is that they play the same songs over and over again. This is further complicated by the fact that 2, 3, 4 or more stations in a market may be playing the same songs. This means that the listeners could conceivably be exposed to the same song several times in one day.

Now most people say that they can’t really worry about what the other radio stations are playing and it’s the "fault of the system" that a listener may be exposed to the same songs on more than one station. I understand that because it’s true. So the best thing to do is worry only about what you’re playing since you can’t control what the competitors play.

And this opens another problem. The goal of any radio station is to have a lot of listeners (Cume) who listen for a long time (TSL). The way you do this is to give the listeners what they want. But listeners don’t listen to only one music radio station. Why? The listeners say that they usually tune to another station because their favorite station plays a song they don’t like or a song that they "just" heard. What happened to the listeners who want to hear the same songs over and over again? I don’t know.

What I do know is that the radio stations that have huge playlists usually include a lot of songs that the listeners say are bad songs. So I’m not sure if the larger playlist is the problem or if the problem is that the large playlist includes a lot of crap. I tend to go with the second opinion based on what I have heard listeners say.

Oldies Playlist - Universe of Songs

Hi Doc:  How many songs do you think an Oldies radio station should have on its playlist—400-500 songs? - Stuart


Stuart:  As with any music format, I don't think there is any predetermined number of songs that should be on the playlist.  This information has to come from the listeners.  If research shows that a large percentage of listeners say that the radio station repeats songs too often, then it's time to expand the list.  However, it's important to balance this with questions like, "Plays your favorite songs often enough."


Understand?  If you ask the correct questions, a radio station's listeners will help define the number of songs on the station's playlist.

Oldies Promotions

Doc:  I sent this question earlier this week with no response.  You have any solid promotions that have done well on Oldies formatted stations?  Know where I can go to get some info? - The Great One


TGO:  Oops.  Sorry about that.  My grandmother died.  No, wait, I had a flat tire.  Uh, no, actually, the solar flare erased my hard drive.  OK, so none of those is true.  I accidentally erased your first note, and since questions arrive anonymously, I have no way to contact you, or anyone else, unless you include an email address.  My fault.  I apologize for my error.  On to your question…


Here are four searches for you.  Not all of the references are relevant, but you should find a few good ideas.  If not, let me know…


Oldies One,  Oldies Two, Oldies Three, and Oldies Four.  If those don’t work, try this search on Oldies listeners lifestyles.  You may get an idea for a unique promotion.

Oldies Radio - New Formats and Demographic Groups

Congrats on the new textbook!  Something in one of your responses caught my eye.  You mentioned that new formats are designed for large demographic groups, such as baby boomers.  While this group is very large, isn’t the consensus among ad agency types that these people are rapidly approaching or are over the magic 55th birthday, and are therefore so stuck in their ways that they’ll never try anything new, and thus do not need to be programmed or advertised to?  Several Oldies stations with good ratings have gone by the wayside, and many more are distancing themselves from the 60s.  Interested in your thoughts. - Anonymous


Anon:  Thanks for the comment about the textbook.  It’s a process we go through every three years and it’s always nice to send the last chapter to the editor.


On to your question…I anticipated that someone might bring up the baby boomer aspect of the answer, so you came through.


It is interesting to me that ad agencies (mostly) come up with sweeping generalizations such as the one you mentioned—that older people (around 55 years old) are so stuck in their ways that they will never try anything new…etc.  This is antiquated drivel and falls in the same categories as “it seems like,” urban legends, tarot card reading, and more.


Having intimate knowledge of this older group of people, I can flat out say that all of the old urban legends or perceptions of “older” people are way off.  Not only do all of my close friends in this age group defy these antiquated perceptions, but so does research in all types of markets across the country.


Most perceptions of the “older” group of consumers are held by ad agency people who know as much about demographics as you and I know about searching for neutrinos in a bathtub.  Ad agency people are notorious for living and working with their heads in the sand…or some other place.


You say that, “Several Oldies stations with good ratings have gone by the wayside, and many more are distancing themselves from the 60s.”  In order to answer that, I need some specifics.  I’m assuming that the “Several Oldies stations with good ratings have gone by the wayside,” have done so due to advertising sales, but I don’t want to guess.


What I can say, though, is that many of the 60s-based Oldies radio stations I have heard recently sound as though they should go by the wayside.  I’m not sure what the problem is, but it seems to me that some of the programmers (or someone) believes that anyone over the age of 50 is a blob of rotting meat and the programming matches this perception.


I believe that if a radio station is truly interested in reaching an older audience…and there are loads of these people wandering around…the station would pursue an active research process to find out what these people want to hear on the radio.


I have been involved in mass media for more than 20 years and it has been interesting for me to see how things change as I grow older.  When a person is young, he/she tends to ignore older people because there are other things to worry about—old age is for someone else, not me.  However, I would wager my entire $10 savings account that if you asked 100 people over the age of 50 if they feel old, decrepit, useless, set in their ways, and all the other stuff, the vast majority would say, “No, I feel and think the same way I did when I was 25 or 30 or 35.”


Someone or some company is going to catch on to understanding the older audience, and when they do, it will be interesting to watch how other people’s attitudes change.


By the way, I think it’s a mistake for Oldies radio stations to distance themselves from 60s music (if they are).  I also think it’s a mistake for Oldies radio stations to avoid finding out what their audience wants.  (One thing the audience doesn’t want is a radio station that plays only 60s music.)

Oldies Song Lyrics

Doc:  I enjoy listening to Oldies, and there is one song that drives me crazy because I don’t know what the lyrics are.  The song is “Runaround Sue,” by Dion & the Belmonts.  Early in the song, Dion says, “Oh, blank bland blank, and the smile on her face.”  Do you know what he sings where the “blanks” are?  Thanks for your help. - Anonymous


Anon:  The lyrics in this #1 song from 1961 (it’s that old?) have bothered people forever, but before I tell you the words, I need to say one thing.  I didn’t find the lyrics on the Internet (I didn’t look because most are incorrect), and I didn’t figure out the words myself.  My youngest son, Buckwheat, is the person who told me what the words are—when he was about 7 years old.  Anyway, what Dion says is this:


Yea, oh, I miss her lips and the smile on her face.


Which is followed by…


The touch of her hand and this girl’s warm embrace.

So if you don’t want to cry like I do

Keep away from Runaround Sue


Now…if you listen to the song very carefully, you’ll hear that Dion adds an “uh” sound to almost every word he sings.  In the verse you refer to, this is what he actually sings…


Yea, oh, I miss her lips and the smile on her face(uh).

The touch of her hand and this girl’s warm embrace(uh).

So if you don’t want to cry like I do(uh)

(uh)Keep away from (uh)Runaround Sue(uh).


There ya go(uh).

Oldies Songs Questions

Doc:  I have questions about two Oldies songs.  I hope you can help.


1.  “Peppermint Twist.”  Near the beginning of this song, the lyrics are…“Round and round and up and down, And a one two three kick, one two three ???”  What is the word after “two”—is it “go” or what?  I don’t trust the lyrics I might find on the Internet.


2.  “I Can’t Turn You Loose.”  This song by Otis Redding has been recorded by several artists, but there is one version that is, in my opinion, the absolute best.  But I can’t think of the name of the artist or band that recorded it.  I went to college in the late 60s in the Midwest and heard the song frequently.  It’s a rock version and I think the lead singer had white hair or something. - Anonymous


Anon:  Well, let’s see what I can do here.

  1. Peppermint Twist was a #1 song in 1962.  The word you’re missing is “jump.”  In fact, you can see Joey Dee perform the song (and jump) on the PBS series, “Rock, Rhythm and Doo Wop.”  By the way, on Joey Dee’s website, it says that their song “Shimmy Baby” led the group to develop the “1-2-3 kick, 1-2-3 jump” routine that later became the “Peppermint Twist.”  Joey (real name: Joseph DiNicola) was born in 1940, so he doesn’t jump very high when he sings the song now.  And one more thing…some people don’t know that Joe Pesci (“Funny how?  I mean, funny like I’m a clown?”) once played guitar with Joey Dee and the Starliters.

  2.  The white hair was the clue I needed.  I’m sure you’re talking about the version done by Wayne Cochran and the C.C. Riders.  I would bet almost anything that is who you are talking about.  I remember seeing them perform many times at bars in Wisconsin and Northern Illinois.  Guess what?  Wayne Cochran has a website.  It’s a trip.  Click here to go there: Wayne’s website.  If you go to the site, you’ll find out that Wayne is now a minister.

Once Upon a Time

There is a song I used to sing in high school choir...I am pretty sure it was from an old musical or something of that sort...the lyrics I know go something like this:


Once Upon A Time

A Girl With Moonlight In Her Eyes

Put Her Hand In Mine

And Said She Loved Me So...

But That Was Once Upon A Time...Very Long Ago


Once Upon a Hill,

We Sat Beneath A Willow Tree

Counting All The Stars

And Waiting For The Dawn

But That Was Once Upon A Time

Now That Tree Is Gone


How The Breeze ruffled through her hair

How We Always laughed as though tomorrow wasn't there


We Were young, and didn't have a care


Where Did It Go


Quite a bit to work with, but I’ve gone to Google and searched under lyrics, musicals, choir, etc…so many things.  I have even called up gay friends and such.  It’s driving me crazy.  I need to know the name of the song and where it came from.  Can you help? - J


J:  You may have searched the Internet for this song, but you didn’t search the correct way.  Let’s do a little educational thing here…


I found your answer in about 5 seconds.  I looked at the lyrics you wrote and selected a unique set of words that probably aren’t used very often.  I selected, “We sat beneath a willow tree.”  I entered those words in Google with quote marks, like this: “We sat beneath a willow tree”.


The search produces 58 sources, most of which answer your question.  The lyrics are from a song called “Once Upon a Time” (lyrics by Lee Adams; music by Charles Strouse), included in the Broadway musical “All American,” which starred Ray Bolger and Eileen Herlie.


Click here to see the search: Name that Tune.


I hope that stops you from going crazy.

Online Research

I have received many questions about using the Internet for music testing or perceptual research.  I think the best thing is to address all these questions at one time.  Here is a sample of some of the questions I have received:

Question One: Our station is about to enter into a relationship with a research company for online “callout” research.  The company says that the research is more representative and “valid” than regular callout research because the sample size will be so large it will overcome any discrepancies.  Is this true?  Before we signed up, I wanted your thoughts on this method of research.


Question Two: We use one of the largest (they claim) Internet music research services.  From my limited experience with a secular Top 40 station many years ago, I was under the impression that callout was only useful for established songs, but this research company says we can test currents before we add them and be “assured” that the songs we add will be hits to our target audience.  However, I don’t understand how much people can really tell about a song they hear for the first time from a hook, even a long 20-second hook. The company boasts that we can make all final music decisions like this.  How valid and reliable is testing previously unexposed new songs online?


Question Three: My PD uses music research where listeners sign up to take music tests online, as well as answer some perceptual questions (Sometimes called an “Advisory Board” or “Listener Panel.”)  He takes these data VERY seriously.  My opinion is that this research method is relatively worthless because: (1) It’s not a random sample; and (2) It polls only active, tech-savvy listeners.


My PD’s take is that it allows him to super-serve the P1s.  My take is that by super-serving the active P1s, you’re allowing a very, very small percentage of the listenership to determine the direction of the radio station—kind of like polling request-line callers and making decisions based upon what they say.  How much weight would you say this type of research should carry?


Question Four: We don’t have much of a budget to do research, so one idea I had was to let listeners go to our website and rate what they hear coming from our station.   Do you think it’s a good idea to include this on a radio station’s homepage, or should we rather not mess with this?


Here are my answers to all the questions . . .


The radio community has quickly accepted online (Internet) research for two main reasons: cost and speed.  While PDs and others see the online research as a way to gather information cheaply and quickly, most don’t question if the data are valid and reliable.  In addition, in many cases, the information is being gathered by people who don’t have a research background.


Much of my nearly 30 years of experience in research has been spent explaining what research can and can’t do. One recurring theme I hear often is that conducting research is easy—anyone can conduct research.  In addition, I have found that many people accept a research methodology or product simply because it’s new (New = Good). These folks don’t take the time to find out if the new methodology/product is right (valid and reliable).


What many people don’t understand is that research is a complicated process that involves an understanding of many different things—sampling, questionnaire/instrument design, data collection, data analysis, statistics, and interpretation.  All of these areas include many strange-sounding words and terms and because of this, it’s easy for someone to sell research to someone who has no research background. (I believe it’s commonly known as “Pulling the wool over someone’s eyes.”)


For too many years, non-researchers have developed methods that look and sound right, but are patently wrong.  These salespeople (and I’m talking about some of the “big” names who have the word “research” in their company’s name) sell their products to people who have no research experience.  The salespeople use a few research terms to make things sound complicated and correct, and unsuspecting PDs and GMs take the bait.  They unwaveringly accept the words of these non-researcher salespeople as legitimate; they assume what they’re buying is scientifically correct.  However, in many cases, the products are not correct.  In many cases, programming consultants or former PDs develop the products, not researchers or statisticians.  That, as some people say, “Don’t be right.”


The problem with radio research, specifically new methodologies like online research, is that not enough people ask questions about the research they believe in, buy, and use.  Not enough people ask the salespeople (usually consultants or former PDs) about their research background.  Not enough people ask about their experience/knowledge in questionnaire design, sampling error, statistics, or the reliability and validity of the product(s).  The list about what is not asked goes on forever.  Instead of asking questions related to the correctness of the products and the background of the salespeople, many broadcasters simply ask, “How much does it cost?” and “How quickly can I get the information?”  (What happened to simple questions such as, “Do you have a research background?” and “Are the data reliable and valid?”)


OK, let’s move on to some of the issues in online research…



Validity

In research, valid is defined as, “testing what is intended to be tested.”  For example, determining a person’s favorite radio station by asking if the person likes chocolate is not a valid measurement.  With music testing, the validity question is very simple—“Does the method really test the respondents’ likes, dislikes, or perceptions of songs (or parts of songs) they hear?”


Why is this important?  Well, regardless of the music testing methodology (callout, auditorium, online), it’s important to make sure that the procedure is scientifically correct.  This includes the rating scale used by the respondents.  What type of scale is used?  How many points are used—3, 5, 7, 10, or something else?  Do respondents understand the scale?  Does the scale actually measure a person’s feeling toward a song?  (Note: Using too few points may hide the respondents’ perceptions about a song.  In research, this is known as “factor fusion”—using too few rating points “squeezes” the data and hides fine distinctions among the ratings.)
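To illustrate the note above, here is a small sketch in Python with made-up ratings: two songs that listeners rate consistently differently on a 7-point scale become indistinguishable once the scale is collapsed to 3 points.  The collapse rule below is hypothetical, but any 3-point mapping produces the same kind of squeeze.

```python
# Sketch (hypothetical data): how collapsing a 7-point scale to 3 points
# can hide a real distinction between two songs -- the author's "factor fusion."

def collapse_to_3(rating_7pt):
    """Map a 1-7 rating onto a 3-point scale: 1-2 -> 1, 3-5 -> 2, 6-7 -> 3."""
    if rating_7pt <= 2:
        return 1
    if rating_7pt <= 5:
        return 2
    return 3

def mean(xs):
    return sum(xs) / len(xs)

# Listeners consistently rate Song A a 6 and Song B a 7 -- a real difference.
song_a = [6, 6, 6, 6, 6]
song_b = [7, 7, 7, 7, 7]

print(mean(song_a), mean(song_b))                # 6.0 vs 7.0 -- the distinction
print(mean([collapse_to_3(r) for r in song_a]),
      mean([collapse_to_3(r) for r in song_b]))  # 3.0 vs 3.0 -- gone
```

On the 7-point scale the stronger song stands out; on the 3-point scale both songs score identically, so the programmer never sees which one the listeners actually prefer.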


Just because a research methodology is sold/pushed by someone does not make it valid.  Don’t blindly accept a new methodology (or any methodology) without questioning the validity of the method.



Reliability

In research, reliable is defined as, “consistently testing the same thing.”  A measurement instrument is unreliable if it produces different results each time it’s used with a sample selected in exactly the same way.  If the music test data “bounce around”—a song tests well in one test and poorly in another—the measurement instrument may not be reliable.  (The bouncing scores may also be due to other things, such as bad samples, or samples selected in different ways, or simply because of naturally occurring sampling error.)
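The “bouncing scores” point can be simulated.  In this sketch (invented rating distribution, fixed random seed) the population’s true opinion of one song never changes; only sampling error does, and small tests bounce far more than large ones.

```python
import random
import statistics

# Sketch (simulated data): why a song's score "bounces" between tests.
# The only thing changing here is sampling error -- the population never moves.
random.seed(42)

population_ratings = [1, 2, 3, 4, 5]
weights = [5, 10, 25, 35, 25]  # hypothetical true distribution of opinion

def sample_mean(n):
    """Mean rating from one simulated music test with n respondents."""
    return statistics.mean(random.choices(population_ratings, weights, k=n))

small_tests = [sample_mean(30) for _ in range(200)]   # 30 respondents per test
large_tests = [sample_mean(300) for _ in range(200)]  # 300 respondents per test

print("spread with n=30: ", round(statistics.stdev(small_tests), 3))
print("spread with n=300:", round(statistics.stdev(large_tests), 3))
# The small-sample scores bounce far more, even though nothing real changed.
```

So before blaming the instrument for a bouncing score, it’s worth checking whether plain sampling error at the test’s sample size explains the bounce.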



Cost

As compared to typical telephone perceptual research or telephone callout, online research is considerably less expensive.  This is a tremendous asset for online research.


Random Sample

In most online research situations, radio stations either have their music test on their website (addressed later) or an outside vendor uses a sample of volunteer respondents who rate the music.  These are not random samples.  This is a big point that must be discussed, but first, I need to address the concept of a random sample.


A random sample is a sample where everyone in the population/universe under study has an equal chance of being selected.  In reality, there are no truly random samples in behavioral research because the respondents volunteer to participate in any study.  To be truly random, we would need to force each randomly selected person to participate in our study.  We obviously can’t do that, so we must hope that each randomly selected person will volunteer.  The problem is that not all of the selected respondents volunteer to participate.


If only one of the randomly selected respondents refuses to participate in a study, the sample no longer matches the definition of a random sample and the sample becomes a volunteer sample.  In other words, there are no truly random samples used in radio research.  In fact, there has never been a true random sample used in radio research—every sample for every radio study ever done has used a volunteer sample.


OK, it’s a given that we can never expect a truly random sample in radio research.  But there are things that can be done so that the sample will be as random as possible.  First, you should never accept pure volunteers in your sample—people who call in, write in, or in any other way ask to be involved in your study—unless these folks pass the screener designed for the research project.


But some leeway is often accepted in sampling, particularly in reference to radio station databases.  Many radio stations have good databases of people and it makes sense to use them.  However, it is not acceptable to blindly accept the respondents in the database for a research project.  These people must pass the screener designed for the study.


Advisory Boards/Advisory Panels

In the past few years, many radio stations have developed a listener Advisory Board or a Listener Panel to gather information about what the listeners like and don’t like about the radio station.  There is nothing wrong with using an Advisory Board or Listener Panel to gather research information if:


  1. The board or panel is selected randomly.

  2. The board or panel includes at least 100 respondents.

  3. Twenty-five percent (25%) of the board or panel is replaced each time a study (or any data collection procedure) is conducted.  That is, if the group is convened (or called) once each quarter, 100 respondents take part in the first quarter project.  In the second quarter, 75 of the original sample are reused and 25 new respondents are added.  When next year’s first quarter project comes around, the entire original sample will have been replaced.

Replacing 25% of your sample for each study eliminates the problem of relying on only one group of people for your data.  (There is an option of keeping the same 100 respondents for every study in a methodology called a “Panel Study,” but the headaches involved in this methodology are tremendous, such as respondents who drop out of the panel.  It’s easier to replace 25% of the sample for each project.)
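The rotation schedule described above can be sketched in code.  The `recruit` function here is a hypothetical stand-in for whatever screener-based recruiting the station actually uses; the point is only the 25%-per-study replacement arithmetic.

```python
# Sketch: rotating 25% of a 100-person panel each quarter. All names are
# hypothetical placeholders for screened respondents.

def rotate_panel(panel, recruit, replace_fraction=0.25):
    """Drop the longest-serving members and recruit fresh replacements."""
    keep = int(len(panel) * (1 - replace_fraction))
    survivors = panel[-keep:]  # panel is kept ordered oldest-first
    newcomers = [recruit() for _ in range(len(panel) - keep)]
    return survivors + newcomers

counter = iter(range(10_000))
recruit = lambda: f"respondent_{next(counter)}"

panel = [recruit() for _ in range(100)]  # first quarter: 100 fresh respondents
originals = set(panel)

for quarter in range(4):                 # four more quarterly studies
    panel = rotate_panel(panel, recruit)

# After four rotations, none of the original 100 remain.
print(len(set(panel) & originals))       # 0
```

Each rotation keeps 75 members and adds 25, so the original sample shrinks 100, 75, 50, 25, 0—matching the one-year turnover described in the text.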


Sample Size

A typical online or auditorium music test should include 75-100 respondents.  A typical perceptual study should include 400 respondents unless there is a desire to have lower sampling error.  The reason music tests can use fewer respondents is due to the nature of the measurement instrument.  Music tests use a methodology known as a “Repeated Measures Design,” which means that the respondents use the same rating scale repeatedly.  The repeated use of the same rating scale reduces measurement error and increases reliability.
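The 400-respondent figure for perceptual studies comes from standard sampling arithmetic, shown below as a sketch.  This is the general worst-case margin-of-error formula from statistics, not anything specific to a particular vendor’s method.

```python
import math

# Sketch: the worst-case margin of error at 95% confidence that motivates
# the common n=400 figure for perceptual studies.

def margin_of_error(n, p=0.5, z=1.96):
    """Worst case is p = 0.5; z = 1.96 corresponds to 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(n, "respondents -> about +/-",
          round(100 * margin_of_error(n), 1), "percentage points")
# n=400 gives roughly +/-4.9 points; halving the error again requires
# quadrupling the sample, which is why 400 is a common stopping point.
```

This also shows why “unless there is a desire to have lower sampling error” matters: shrinking the error bar gets expensive fast.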


Large Sample

Under no circumstances can sample quality be equated with sample size.  Size may be important in some areas, but when it comes to research, sample size alone does not guarantee that the sample is good.  This fact is very important because many music testing companies, particularly those that conduct online music tests, peddle their data as good (“more representative,” “more reliable,” “more valid”) because the sample size is large.


This is pure pseudoscience (garbage science/information) and is definite proof that the data peddler does not know research.  A sample of any size can be bad.  For example, let’s say that a CHR radio station has music test data from 10,000 18-24 year-old respondents.  Is this automatically a good sample?  Of course not, but according to the online music data peddlers, the sample is good because it’s large.


Anyone who claims that a large sample guarantees that the sample is reliable and valid subscribes to what is known as “The Law of Large Numbers”—that the research is good because it uses a large number of respondents.  A sample of 100 or 10,000 can be good or bad.  There is absolutely no relationship between sample size and sample quality. None. Nada. Zip. Zilch. Goose egg. Zero.  This cannot be debated.


So that there is no misunderstanding, I will restate this point: A large sample does not, in any way, shape, or form, guarantee that the sample is good.  A large sample, say 5,000 respondents, may be as bad as a sample of 100 or 250.  Sample size alone means nothing.
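A quick simulation (with invented numbers) makes the point concrete: a sample of 10,000 drawn from the wrong people misses the truth badly, while a random sample of 400 lands close.

```python
import random
import statistics

# Sketch (simulated, hypothetical market): a huge biased sample vs. a modest
# random one. Population: 100,000 listeners rating one song 1-5, where the
# younger listeners (30% of the market) happen to rate it much higher.
random.seed(7)

population = ([random.choice([4, 5]) for _ in range(30_000)] +    # young
              [random.choice([1, 2, 3]) for _ in range(70_000)])  # older
truth = statistics.mean(population)

biased_10k = random.sample(population[:30_000], 10_000)  # young listeners only
random_400 = random.sample(population, 400)              # drawn from everyone

print("true mean:      ", round(truth, 2))
print("biased n=10,000:", round(statistics.mean(biased_10k), 2))
print("random n=400:   ", round(statistics.mean(random_400), 2))
# The 10,000-person sample is wrong by roughly two scale points; the
# 400-person random sample sits next to the truth.
```

The big sample inherits the bias of how it was gathered; no amount of size repairs that, which is exactly the point about “The Law of Large Numbers” salesmanship above.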


Super-Serving P1s

Although several questions about online music research address the idea of super-serving P1s (a radio station’s biggest fans: the listeners who tune to the station most often), this practice doesn’t relate only to online music research.  Super-serving a radio station’s P1s emerges in all types of radio research.  Why?  Because the thought is that super-serving these people will ensure that they get what they want from the radio station since the radio station is their favorite.  But there is a problem here.


Too many people take the philosophy of super-serving P1s too far.  The problem is that if a radio station continually collects information only from P1s, the radio station will eventually have only a handful of listeners.  Why?  Because P1s have much in common and do not exhibit a great deal of variance (such as the variance—differences—exhibited by a radio station’s cume).  In addition to selecting the same radio station as their favorite, P1 listeners tend to be close in age, socio-economic status, and other things.  This means they don’t vary too much in their likes and dislikes.


A radio station that continually limits its research only to P1s will continue to restrict the variance of its listeners and will eventually program only to a small group of people.  The small group will eventually depart from the general population and the programming will be acceptable to only a small group of listeners.  A radio station that continually conducts research only with its P1s is building its own coffin.


Tech savvy people

Some people criticize online research because only “tech-savvy” respondents will participate.  I’m not as concerned about this point as I am with another area of online research (discussed in a moment).  Most information shows that people who use computers cross the spectrum of demographics and socio-economic status.  Some tech-savvy people in the general population may be more likely to answer online research because they do many things on the computer, but this does not mean that their opinions about radio and music are different from the non-tech savvy people.


Plenty of regular radio listeners are among the tech-savvy.  If this is a concern, and you use a vendor for your online research, ask the company to verify that the samples used for their research represent “average” listeners to your radio station.


Types of Songs

Since typical auditorium and callout research use hooks (about 5 seconds of a song), the methods are designed to test only familiar songs.  New songs cannot be tested using hooks either in an auditorium setting, callout, or online.  The only way to test new songs is to play the entire song.  There is no debate about this because testing only a short segment of a new song does not give the song a fair test.  Testing new songs via hooks or even longer segments would be the same as asking respondents what they think about a new TV show after seeing only 5 minutes of the program.

To repeat…Testing new music by allowing respondents to hear anything less than the entire song is an invalid way to test new music.  Anyone who suggests that the procedure is valid is suggesting (or selling) pseudoscience (as mentioned, that’s garbage science or garbage research).

The Big Problem

The main problem with online research is a lack of control over the testing situation.  Keep in mind that control over the research situation is relative and can never be controlled 100%.  However, in auditorium and callout, you can be fairly sure about the identity of the respondents (male/female and age), and you know if the respondents are exposed to the hooks being tested.  These controls aren’t possible in online research.


Who answers the questions or rates the songs?  Male?  Female?  Young kids or older people?  Are the respondents “plants” from other radio stations (or other malicious individuals) who are trying to mess up the test?  There is no way to know.  This is a serious problem.  How can anyone rely on online research data if there is no check to determine who is answering the questions?


This “big problem” is just that—a major fault of current online research.  If you use online research for music testing or perceptual information and you don’t know who is answering your questions, then don’t be surprised at the consequences of your decisions.  Tarot cards are probably just as reliable.



Online research in all businesses has great potential, but we just don’t know enough about who is answering the questions and if the respondents are exposed to the information or material being tested.  Even with this significant problem, many people use online research to collect information they will use to make significant programming decisions.


I remember asking a group PD why he switched to 100% online research.  He said, “They [the research company] told me that the results are the same as an auditorium music test and a telephone study.”  I asked him to see the data that compared the two methodologies.  I never received the data.  I never received the data because the research company doesn’t have it—the comparison data don’t exist.  The PD took the word of the non-researchers at the “research” company.  Does this make sense?


Would the PD (or anyone else) accept a medical prognosis from an electrician who learned medicine because of so many visits to the doctor?  Would the PD (or anyone else) allow a bank teller to install a transmission in his vehicle because the teller had the job done so many times?  I think not.  So why do radio people (TV people are just as bad) accept the word of non-researchers when it comes to information to run their multi-million dollar properties?  Why?  I don’t get it.  (By the way, if a study exists that shows that online research and non-online research produce the same results, then bring it on!  Send it to me.  If I can replicate the findings, then I will change my mind.  That’s the advantage of following the scientific method—it’s self-correcting.)


Merely saying, “Our auditorium music test scores are close to my online music research scores” will not cut the mustard.  I want to see valid statistical tests that compare the two methods (the same thing with comparing online research with telephone perceptual studies).  Show me the data!  That is all I ask.
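To make the request concrete, here is a minimal sketch of the kind of comparison I mean: a Pearson correlation and a paired t statistic computed on the same songs tested both ways.  The song scores below are made up purely for illustration; any real comparison would use your own test data.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of song scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

def paired_t(xs, ys):
    """Paired t statistic for the per-song score differences."""
    diffs = [x - y for x, y in zip(xs, ys)]
    return mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)

# Hypothetical mean scores (1-10 scale) for the same 8 songs
# tested in an auditorium session and in an online test.
auditorium = [7.9, 6.4, 8.2, 5.1, 7.0, 6.8, 4.9, 8.5]
online     = [7.5, 6.9, 8.0, 5.6, 6.6, 7.1, 5.3, 8.1]

print(f"correlation r = {pearson_r(auditorium, online):.2f}")
print(f"paired t = {paired_t(auditorium, online):.2f}")
```

A high correlation plus a paired t statistic near zero would suggest the two methods agree; a low correlation or a large t would suggest they don’t.  That is the sort of evidence I’m asking for, not a casual “the scores look close.”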



Online research can be done correctly. The Internet has opened the doors to a variety of new and exciting research collection methods.  However, from what I have seen, the research is not being conducted correctly and radio station people are using data that are, at best, questionable.


If you do decide to hire a vendor to collect information via the Internet, or if you conduct the research yourself, there is one requirement you must satisfy:

  1. Know your respondents.  I understand that 100% respondent verification isn’t possible in any type of research.  However, you must have some idea of who is answering your online research questions.  If you don’t, then you shouldn’t use the data.  When you have respondent verification (whatever amount you have), you can check the validity and reliability of your data with a few simple statistical tests.  If you don’t know how to conduct these tests, find a researcher or statistician to help you run things like a t-test, Z-score comparisons, correlations, and standard deviation comparisons. 

If you satisfy the first requirement, then here are a few other mandatory items:

  1. If you hire a company to conduct your research, make sure that the company has a researcher or statistician (minimum Master’s Degree in research or statistics; Ph.D. preferred) on staff, or at the very least, that the research method was designed and tested by a researcher or statistician.  If the company doesn’t have a researcher or statistician involved in the methodology, then don’t use the company.  If you conduct the research yourself, the same rule holds—either you have a Master’s or Ph.D. in research or statistics, or you hired one to develop and test your methodology.

  2. Make sure that the measurement scale uses at least 5 points.  Anything less than 5 points will create factor fusion—the small scale “crunches” the data into too few points and will not show enough variance.  Although a 7-point scale is good, I prefer to use a 10-point scale for music ratings and perceptual ratings.

  3. Before you begin to use your data for decision-making, you need to run several statistical tests to determine the validity and reliability of the data.  You don’t need to get carried away here—t-tests, z-score comparisons, correlations, and analysis of variance will do.

  4. Do not fall for the large sample scam.  A large sample does not guarantee that the sample is correct.  Stay away from any company that sells its sample as valid and reliable because the sample size is large.  Hire a company that sells its sample as valid and reliable because the respondents go through appropriate screeners and are verified.

  5. Do not use online research to test new songs unless the respondent hears the entire song.  We know if respondents hear an entire song in an auditorium setting.  We don’t know how to do that on the Internet.  If you figure out a way to verify that the respondents did actually hear the entire song, please let me know how you accomplished the task.

  6. Do not use pure volunteers for your research.  All respondents must pass through your screener.  This is true for all respondents, regardless of the source of the list (e.g. your radio station’s database).

  7. Rotate your sample (explained earlier).

  8. Use Z-scores to compare your online data to auditorium tests, callout, or one online test to another.  Do not compare raw scores.  This isn’t valid or reliable—it’s also wrong.

  9. Do not lend your data to another radio station, nor take data from another radio station and use the information for your radio station.  There is no guarantee that the data from one market will relate to any other market.  (If you know statistics, there is a way to determine if you can share data with other radio stations.)
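A minimal sketch of the Z-score comparison mentioned in item 8, using made-up raw means (the songs and values are hypothetical).  Standardizing puts each test on its own yardstick, so two tests with different overall score levels can be compared song by song.

```python
from statistics import mean, stdev

def z_scores(scores):
    """Standardize one test's scores: (score - mean) / standard deviation."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

# Hypothetical mean scores for the same 5 songs from two separate tests.
# The online raw scores run lower overall, so comparing raw scores misleads.
auditorium = [8.1, 6.2, 7.4, 5.0, 6.9]
online = [6.8, 5.1, 6.3, 4.2, 5.7]

for song, (za, zo) in enumerate(zip(z_scores(auditorium), z_scores(online)), start=1):
    print(f"Song {song}: auditorium z = {za:+.2f}, online z = {zo:+.2f}")
```

Notice that although the raw online scores are lower across the board, the Z-scores show the two tests rank the songs the same way—which is exactly what raw-score comparisons would hide.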

As I mentioned, online research presents many great opportunities, but you must know what you’re doing.  If you use online research, do it right.



Roger D. Wimmer, Ph.D. - All Content ©2018 - Wimmer Research   All Rights Reserved