Request Lines

Hey Doc: Is there any research that shows how much phones come into play as an indicator of listenership?  For example, has it been verified that getting many calls can indicate that the listeners like the music at that time?  Furthermore, can requests indicate in any way how the rest of the audience feels about a song?  Thanks! - Anonymous

 

Anon:  Any research on the topic?  I think the first time I saw research on this topic was in the early 1970s, so the answer to your question has been known for more than 30 years.  Here is the answer . . .

 

Listener phone calls for requests aren't relevant to anything and the information shouldn't be used for anything or interpreted as indicating anything.  There are three primary reasons for this: (1) you don't know who the callers are (the same few people may account for dozens of calls); (2) you have no way to verify that the information is reliable or valid; and (3) even a large number of callers represents only a tiny, self-selected percentage of your total audience.


Request Lines - Comment

Doc: I have a question regarding your answer on phone line research.  I understand the small sample size and obvious bias that comes into play when we're talking about requests and factoring that data into your spins and adds, etc.

 

However, consider when a song like Brad Paisley's "Ticks" comes out that is obviously a hot song and the phones are abuzz, versus a song like Phil Vassar's "A Woman's Love," which tested relatively well but got absolutely NO phone reaction.  Shouldn't that tell me something as a programmer?  Just eager to learn as much as I can.  Thanks! - Anonymous

 

Anon:  I understand what you're saying, but in my opinion, your comment falls into the "it seems like" category.  In other words, you said that the phones are "abuzz" for a song like "Ticks," so "it seems like" many listeners want to hear the song.  The problem is—what is the definition of "abuzz"?

 

"Abuzz" could mean a few dozen calls or a few hundred calls.  However, regardless of the number, you don't know anything about the callers.  You could have 10 people call in 10 times each and say, "Hey, we got 100 calls for the song."  See what I mean?

 

What should the number of calls "tell you"?  It should give you an indication that the song might be popular, but you shouldn't do anything with the information because you don't know if the information is reliable and valid.  To repeat—If you get a lot of calls (whatever "a lot" means) for a song, you shouldn't increase the song's rotation because you don't know if the data are accurate.

 

Finally, how many listeners do you have?  Even if you received 100 calls from 100 different people, what percentage is that of your total audience?  My guess is that it's a small percentage.  Why would you do anything with data from only a small percentage of your listeners?
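(To put hypothetical numbers on that: a station with a weekly cume of 100,000 listeners that receives 100 calls has heard from 100 ÷ 100,000 = 0.1% of its audience, or one-tenth of one percent.)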


Request Lines - Comment #2

In the question about request lines, you wrote, "Why would you do anything with data from only a small percentage of your listeners?"

 

My response is—Doesn't Arbitron do the same thing?  Only a small percentage makes up our ratings and decides our fate. - Nikki

 

Nikki:  Good comment.  You are correct in saying that Arbitron uses small samples.  However, the main difference between Arbitron and phone requests from listeners is that there is a known sampling error with Arbitron (probability sample), which allows you to compute the range of responses.  You can't do that with telephone calls from listeners since the sample isn't randomly selected (non-probability sample).


Research as Tool

How valid of a programming guideline is it for a radio station to strictly program its station according to the results of yearly auditorium tests and call-out research? Is this typical? In percentage form, how much would you rely on such research and how much would you rely on your programming expertise in making your music decisions? - Anonymous

 

Anon: In most cases, research results are intended to provide decision-makers with alternatives. Strictly following the results of an auditorium music test to develop a radio station's playlist is not the correct use of the data. It is important to look closely at music test results to determine why a song scored the way it did. A lower testing song, for example, may still be appropriate to play on the radio station.

 

You ask for a percentage for reliance on research data. If the research is conducted correctly, you should be able to rely on it 100%. I think what you asked is, "What percentage of decisions should be based on research and what percentage on expertise?" The answer to that is somewhere between 0 and 100%. It all depends on the situation, the research, and the question you're trying to answer.


Research - Borrowed

Roger: "It's not in the budget" for my Mainstream Rock Station to have Local Callout Research.  While I am working on getting that changed, I need something we can use now.  A large broadcast company owns my station and I have been told that I have access to other stations' research.  What kind of criteria do I need to consider when choosing those stations?  I'm sure format and ratings performance would be up there.  But, what about difference in market size, geographical location, and social and ethnic make up of the market?  I'm sure that there's something that I'm forgetting. 

 

What is your take on things?   My market number is between 100 and 125, and we are located in the deep south. - Anonymous

 

Anon:  Before I get to your question, I need to get on the soapbox for a moment in reference to your comment that research is "not in the budget."  In my 30 years of conducting research in radio, this comment probably bewilders me more than anything else I have ever heard come from a manager's/owner's mouth.  These folks are willing to spend hundreds of thousands or even millions of dollars to buy a radio station, yet they claim that there is no money in the budget to test/research the product.  I don't understand that, and it's the same, to me, as saying there is no money in the budget for electricity.

 

An interesting thing to me is that I haven't heard the "no money in the budget for research" comment from non-media companies.  Oh, I'm sure there are some people out there who might say this, but my experience with non-media companies is that the managers/owners want to know what their customers think and want so they can give it to them.  In my experience, radio is the only business where there is a pervasive perception that research isn't necessary to run a successful operation.

 

Here is a typical scenario . . . A person or company invests a huge amount of money for a radio station, but doesn't conduct research to find out what the listeners want.  The person/company selects a format because they think it will be successful, because a programming consultant (or someone else) suggested that the format would be good, because the format is a current fad (such as the Jack format), or for some other equally ridiculous reason.  The station goes on the air and the PD runs the station with blinders on.  The radio station doesn't generate an audience and bombs in Arbitron.  The person/company wonders what happened, but to protect themselves, they fire the PD and/or GM.  Next step?  Hire another PD and/or GM and go through the same process.

 

If you read radio trades or radio-related websites like All Access, you'll always see a variety of comments from industry "leaders" (or owners or company suits) complaining about the difficulties facing radio today—increased competition from local radio stations, satellite radio, iPods, and more.  These folks complain about increasing expenses and low profit margins.  Oh really?  Some people, including me, might respond to these people's complaints by saying something like, "DUH?"  And then continue by saying something like, "Hey, you invested tons of money for a radio station, but neglected to spend an infinitesimal amount on research to find out what the listeners want.  What do you expect?  Then after you fail, you blame the GM and/or PD, but the fault is totally yours because you failed to give them the information the GM and PD need to provide a product the listeners want.  It might be best for you to sell the property and get involved in the fertilizer business because you already proved that you can produce the product."

 

OK, sorry, but the "no money in the budget for research" just frosts my shorts.  However, from my comments, I think you might be able to guess what I'm going to say about you "borrowing" research from other radio stations in your company.

 

My first thought is, "Gag me with a beaker."  The person or persons who told you that you have access to other stations' callout research should seriously consider changing careers to the aforementioned fertilizer business, because that's what the comment is—pure fertilizer.  Listeners aren't the same in every market.  (This isn't a new concept.  I think it goes back to the early 1940s.)  There is no guarantee that what is popular in one market will be popular in another, and using another radio station's research to make programming decisions in your market makes absolutely no sense.

 

If it did make sense, a company could conduct a research study (or music test) in Chicago, for example, and use the results in every other city in America.  Would that sound like a good business decision?  Heck no.  It would be programming suicide, and the suggestion for you to use other stations' research is just that—programming suicide.

 

Now, with all that said, I need to provide an alternative for you because that's what you asked for.  The only possible way I can think of for you to use other stations' research is to simultaneously conduct a callout for your radio station and another similar station in your group.  You could then analyze the results (there are many statistical methods available to do this) to see how similar or different they are.  If there is some similarity, you may be able to develop a "correction formula" to adjust the other station's scores and use them as indications for your decisions.

 

Note 1:  I'm guessing that you might not know how to develop a "correction formula," and you might need to hire someone such as a local college professor to help.  The person will develop some type of weighted linear combination formula (as it's called) to adjust for the differences between the two markets.
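To make Note 1 a bit more concrete, here is a minimal sketch (in Python, with hypothetical scores) of one simple form such a correction formula could take: an ordinary least-squares line fit between the two stations' callout scores for the same songs.  A statistician may well recommend something more sophisticated, so treat this as an illustration, not a prescription.

```python
# A sketch of a simple "correction formula" between two markets,
# assuming callout scores exist for the same songs in both.
# All scores below are hypothetical.
import numpy as np

# Average callout scores (1-5 scale) for songs tested in BOTH markets
other_station = np.array([3.2, 4.1, 2.8, 3.9, 4.4, 3.0])  # their scores
your_station  = np.array([3.0, 4.3, 2.5, 3.6, 4.6, 2.9])  # your scores

# Fit a least-squares line: your_score ~ slope * other_score + intercept.
# This is one simple weighted linear combination of the other market's data.
slope, intercept = np.polyfit(other_station, your_station, 1)

# Check how similar the two markets are before trusting the correction
r = np.corrcoef(other_station, your_station)[0, 1]
print(f"correction: your_score = {slope:.2f} * other + {intercept:.2f}")
print(f"correlation between markets: r = {r:.2f}")

# Adjust a new score from the other market (only sensible if r is high)
new_song_other = 3.7
print(f"adjusted estimate for your market: {slope * new_song_other + intercept:.2f}")
```

If the correlation between the two markets turns out to be low, no correction formula will make the other station's data trustworthy.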

 

Note 2:  If you can't conduct the comparison research study, then you may be "forced" to use another station's research.  If you do this, you should get a written agreement from your boss (or bosses) stating that you will not be held accountable for selecting the wrong music for your radio station since the results from the other radio station may not relate in any way to your radio station.


Research - Borrowed - Comment

Thinking about your comment on each market being different . . . markets are constantly gaining and losing people, so obviously research must constantly be done.

 

My question is, and forgive me if this was answered with a math equation in the past (unfortunately math makes my eyes glaze over), how can you ever know if you're really getting a current cross-section?  Aren't you always running behind, never knowing if you're surveying people who are headed out? - Anonymous

 

Anon:  Good question, and I have never answered this question directly—only in a cursory way.

 

You are correct in saying that population sizes (and people) change and, therefore, research (unless you're looking at a study done yesterday) actually presents information about what was, not what is.  That's why it's necessary to conduct research on a regular basis.

 

However, even research conducted yesterday (or even today) is not 100% accurate.  Behavioral research (research with human subjects) is never 100% accurate because human beings change, don't know how to explain themselves well, and make mistakes in their answers.  That's why it's necessary to interpret research results using sampling error.  For example, the approximate sampling error for a sample of 400 respondents is about 4.9% (95% level of confidence).  If 50% of the sample agree that your radio station is perfect, the "real" percentage is somewhere between 45.1% and 54.9%.
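For readers who want to see where the 4.9% figure comes from, here is a minimal sketch of the standard margin-of-error calculation for a proportion at the 95% confidence level (the numbers simply reproduce the example in the paragraph above):

```python
# A quick check of the sampling-error figures above: the standard
# margin-of-error calculation for a proportion at 95% confidence.
import math

n = 400    # sample size
p = 0.50   # observed proportion (50% agree)
z = 1.96   # z value for the 95% confidence level

margin = z * math.sqrt(p * (1 - p) / n)   # 1.96 * 0.025 = 0.049
low, high = p - margin, p + margin

print(f"margin of error: {margin:.1%}")              # 4.9%
print(f"'real' percentage: {low:.1%} to {high:.1%}")  # 45.1% to 54.9%
```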

 

It's the same with Arbitron numbers, specifically ratings.  The ratings you see in an Arbitron book are not 100% accurate and must be interpreted with sampling error.  For example, if a radio station has a 5.0 12+ rating, the "real" rating is going to be within a range, maybe something like 4.0 to 6.0.  You can determine the Arbitron sampling error for your market ratings by looking at the "Arbitron Radio Reliability Tables" in the back of your Arbitron book (if you subscribe to Arbitron).


Research - Boss Doesn't Believe In It

Question #1. I have been trying to get my GM to do "call-out" or auditorium research. He is from the "old school" and says that research doesn't work and that I need to go by our "gut instinct." He says he has never seen any ratings from stations that prove that research works. Can you give me some examples from specific stations showing an increase in 12+ ratings due mainly to research?


Question #2. I did get some "gold list" research from a friend in another market about 200 miles away that I used before the start of the last ratings period. We did go up in ratings in ALL demographics. Do you think the ratings rise is a result of this "out of market" research, or are markets so different (even if it is only 200 miles away) that there is no way the ratings rise could be because of the research?


As you can tell, I am trying to make a case for my GM to budget for research. My GM must think that I have ESP and can just figure the music out by reading minds. - Anonymous


Anon: Answer #1: Your GM reminds me of the story I heard when Neil Armstrong took the first step on the moon. An Australian sheriff (or something) was asked what he thought about a man walking on the moon. After staring at the moon for several minutes, he said, "I don't believe it. I can't see anyone there."


Being from the "old school" as you say, your GM will not be surprised if the radio station isn't successful because that's "just the way it goes." The only thing I can say is that if he wants to run the radio station on gut feelings and hope for the best, then the best he can expect is what you and the other folks deliver by guessing. If you're wrong, you should not be fired.


Generally speaking, people who run successful businesses tend to want to find out what their customers or listeners want. Obviously, your GM doesn't care. That's his choice.


However, I wonder what your GM would do if he owned a pizza shop? Would he just make one type of pizza ("Take It or Leave It Pizza") and hope it was the kind people wanted, or would he take individual orders? Following his logic, he would have one kind of pizza—designed by the young people who worked in the store. ("Let's see, I think this family wants anchovies, so anchovies it is . . . ")


As for examples to give him, tell him to randomly select an Arbitron book from the top 200 or so markets. The odds are close to 100% that the top-ranked stations use research. Hey, tell him that your "gut feeling" is that your salary should be $500,000 a year.


Question #2: Research from the past 15+ years shows that research data from one market should not be used in another market. There are too many differences. However, it may be that you got lucky this time in that your two markets are somewhat similar and the process worked. However, you cannot say that your ratings increase was due only to the music test. Other variables may have accounted for the increase. For example, maybe your competitors sucked in that book. Who knows? But don't automatically assume that your music test was the SOLE contributing factor to your ratings increase—you don't know that.


Research Budget

I am the new GM of an AC station in the Midwest (market size 50-100). There is currently no budget for research and the station desperately needs it (I think) as we are #13 out of roughly 16 stations in the market. Are there any rules of thumb for budgeting for research? I would love to do the full range of research projects, e.g. groups, telephone studies, music tests, but I have to be able to sell the owners on doing research and justify the estimated expense. Help! - Anonymous

 

Anon: Congratulations on your new gig. Your thinking is correct. You need to find out what's going on with your radio station. Your position of #13 in 16 radio stations suggests that you aren't giving the listeners what they want. I think your "sale" to the owners is that you're not doing well now with guessing what the listeners want—it's time to find out.

 

There are no specific rules of thumb for research budgets. It all depends on what you need. However, here are a few price ranges for you to consider. I'm sorry I can't be more specific, but in order to do that, I would need to know sample sizes, screeners, and the exact nature of each project. However . . .

 

Focus groups: $3,500-$5,000 each

Music test: $20,000-$30,000 for 400-600 songs and 80-100 respondents

Telephone perceptual study: $25,000-$35,000 depending on sample size and length

 

Research expenses can vary widely depending on what you do and which research company you use. I will say this: caveat emptor. Get a few bids from companies and you'll find out the range. Make sure that you give each company the same specifications so they are bidding on the same project.


Research Budget - 2

You discussed the costs of research in a previous question, but our radio station doesn’t have enough money to pay those costs.  Is there a way to reduce research costs so a station like ours can afford it? - Anonymous

 

Anon:  I have addressed your question before, but that’s OK.  I’m sure there are other people who wonder the same thing.

 

It’s no secret that research contracted with a vendor can be expensive.  But all things can be expensive, so research isn’t out of line.  If you want to cut costs, here are a few suggestions:

  1. Discuss your budget constraints with a few researchers.  Some may be willing to reduce their prices, or at least help figure out a way to get the information you need.  It might be that a less expensive methodology would still allow you to get your information.

  2. If you, or someone at your radio station, has research experience, you can reduce costs by doing some of the work yourself.  For example, you may be able to recruit a sample for a music test yourself.  You also may have the expertise to tabulate the data.

  3. If you live near a college or university, contact a professor in journalism, marketing, or even sociology.  They may be able to help design and conduct your study at a reduced cost.

I know that many people believe research is expensive.  I also know that many people believe doctors, dentists, and plumbers (and virtually every other profession) are expensive.  In all of these cases, you pay someone to do something you don’t know how to do.  These people know what they are doing, have gone through the process before, and don’t make as many errors as someone who doesn’t have such experience.  Sometimes you have to pay a higher cost for the expertise.

 

The only way to reduce research costs significantly is to do some (or all) of the work yourself.


Research - Can't Afford It

My GM says we can't afford research. We use the national call-out, requests, and gut-feeling to determine adds and rotations. In addition, we also look at large-market stations in our region that constantly perform well. Are we doing the right thing? What do you recommend for stations that "can't do research?" - Anonymous


Anon: This answer will probably ruffle a few feathers.


There are two basic approaches to running any business: (1) Find out what people want, give it to them, and tell them that you gave it to them; and (2) Guess. What I can say is that in the 20+ years of conducting research for both media and non-media companies, one common trait among all successful companies is that the management demonstrates an intense need to learn as much as possible about its own audience or customers. I have never seen the management of a successful company demonstrate an intense need to guess as much as possible about its own audience or listeners.


Your GM follows approach #2: Guess what your listeners want, give it to them, and hope for the best. (Probability suggests that you should be correct once in a while.) Your GM says that your station can't afford research, but my guess is that your GM is really saying that your radio station doesn't need research since we always find a way to pay for things we need.


Now, let's go to your comment about using (a) national call-out; (b) requests; (c) gut feelings; and, (d) looking at large-market stations for guidance. This is what you are getting: (a) song ratings from people who aren't your listeners; (b) suggestions from a small group of people who may or may not match your audience and/or who may or may not be your listeners (pranksters at work); (c) personal opinions that may or may not relate to your listeners; and (d) information from radio stations that have no relation at all to your radio station since they aren't you, you aren't them, and they are in other markets.


Your market is unique. Your radio station is unique. Your listeners are unique. Why would you then make decisions based on information that comes from other markets, other radio stations, and other listeners? I would be very nervous about being in charge of a multi-million dollar radio station and making decisions based on guessing. That gives me the chills. But, then again, I'm not your GM.


Research is not intended to force answers on decision-makers, but rather to provide them with a variety of options. That's where the creativity, skill, and talent of the GM, PD, and all the other Ms and Ps come into play. You ask what stations can do if they can't afford research. Hmmm. I think I'll say "Hope that the odds are in your favor."


Research Comparisons

I’m not sure how to ask this question, so I’ll do the best I can.  The research company our radio station hired just presented the results of a telephone perceptual study they conducted for us.  In the presentation, the researcher continually compared the results of our study (we’re in a medium-sized market) to other studies in other markets.

 

In most cases, the comparisons were in reference to rating scales using a 10-point scale.  For example, he would say that we have a 5.2 average score, but other stations similar to us received average scores of 6.5 or 7.2, or something like that.  The researcher would then say that we “are below” the other radio stations’ ratings.

 

I have been reading your column for a while, and because of that, have started to question a lot of stuff about research.  To use what you say, I thought, “Something don’t be right” here, but I’m not sure what that something is.  Am I right or wrong?  Can you help me? - Anonymous

 

Anon:  You’ll notice that I edited your question quite a bit.  You had information that I consider proprietary and I’m sure you wouldn’t want others to see some of the things you said.  I don’t think I changed the meaning of your question, but please let me know if I did.

 

To your question . . . Yes, something don’t be right.  And the something is that your researcher should not have compared your average scores to other radio stations’ average scores.  The only way to make such comparisons is to convert the 10-point scale average scores to z-scores.  In the example you gave, the researcher is comparing apples to oranges, and the comparisons are completely meaningless—100% irrelevant.

 

The respondents in your study cannot be grouped with the respondents from other studies in reference to how they answer questions (or rate items).  Converting the average scores to z-scores puts the data on the same level, or the same metric, to use the statistical term.  Your researcher was wrong.  You are correct.  Good work.
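Here is a minimal sketch (in Python, with hypothetical items and scores) of the z-score conversion described above: each study’s scores are standardized against that study’s own mean and standard deviation, which puts both sets of ratings on the same metric before any comparison is made.

```python
# A sketch of the z-score conversion (hypothetical items and scores).
import statistics

def to_z_scores(scores):
    """Standardize a study's average scores using that study's own
    mean and standard deviation."""
    mean = statistics.mean(scores.values())
    sd = statistics.stdev(scores.values())
    return {item: (x - mean) / sd for item, x in scores.items()}

# Average 10-point ratings for the same items from two different studies
your_study  = {"music variety": 5.2, "morning show": 6.8, "contests": 4.1}
other_study = {"music variety": 6.5, "morning show": 7.9, "contests": 5.6}

yours, theirs = to_z_scores(your_study), to_z_scores(other_study)
for item in your_study:
    print(f"{item}: yours z = {yours[item]:+.2f}, theirs z = {theirs[item]:+.2f}")
```

Notice that a raw 5.2 against a raw 6.5 can turn out to be nearly identical once each score is expressed relative to its own study.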

 

Tell your researcher that you’re interested in the comparisons, but that you want him to convert your data and the data from the other radio stations he used for comparison to z-scores.  Demand it.  If your researcher has a problem with that, tell him to give me a call.


Research Conclusions

After having read your article for several weeks now, I get the impression from the research projects I've been involved in, that some decisions are made based on conclusions drawn from perceptuals without having those conclusions themselves tested. Would that be a fair statement? - Andy

 

Andy: If I am reading your question correctly, my answer is this: If you went through the correct steps in designing your project and you have the right questions, you should be able to make decisions based on your perceptual. That's the goal of a perceptual study—to collect the "final" information you need to make decisions—your study, by its design, should test your conclusions. If not, the study wasn't designed properly and you'll have to conduct some type of follow-up study.

 

I'd like to add two things about perceptual studies:

 

1. If your questionnaire is designed correctly and the data are analyzed with the appropriate statistics, you should never have to "interpret" what perceptual data "mean." The meaning should be apparent and obvious—you should only have to read.

 

2. Make sure that you look at all of the tables in a perceptual study before you make any decisions. One table alone is rarely enough to make a decision. Again, if the questionnaire is designed properly, you will have several "interlocking" questions that together will provide the information you need.


Research: Conflicting Results

I received results this week of both a phone-based perceptual study of my station's core AND an auditorium music test. The perceptual suggested that a particular style of music was not rated highly by listeners, yet the music test showed dozens of titles from that segment with strong scores. What do you think? – Anonymous

 

Anon: This is going to be tough to answer because I don't know a lot about your research projects. For example, I don't know: (1) how you tested the styles of music in the perceptual (whether you named a format, read artist lists, or played hooks); (2) what samples and screeners were used in the two studies; or (3) which rating scales were used and how the data were analyzed.

These are some of the things I don't know, but I'll do the best I can. If you have additional questions after reading this, please let me know.

 

OK, here we go . . . Regardless of how you tested the styles of music—you tested styles, not individual songs. This gave you an indication only of how much the listeners in your market like that style of music (as you described it). It gave you no information about individual songs. That's why you do a music test.

 

(Aside here . . . Any words radio peeps use to describe a format or style of music may mean something entirely different than what the average listener thinks. For example, the Classic Rock 'tag' means something specific to radio peeps. The name defines a particular type, style, and era of music that is played by a specific list of artists. The problem is that the songs that radio peeps say 'fit' Classic Rock may not 'fit' in the minds of the listeners. Again, that's why you conduct music tests—to find out what the listeners say.)

 

Back to your question. Assuming that there were no sampling problems, or problems with the rating scales, or problems with any of the research procedures, my guess is that the differences between the style rating and the song ratings were caused by one or a combination of these items: (1) the format descriptor you used in the perceptual was weak or incorrect; (2) you found the truth that listeners don't really like the style even though they like many of the songs within the style; and (3) whatever format (style) you tested in the perceptual is not perceived as a 'pure format' by your listeners.

 

Now, let's assume that your artists or hooks in your perceptual study were absolutely perfect. You got a low rating for the style of music in the perceptual and high ratings for individual songs in your music test. In other words, the format sucks, but some of the music in the format is great.

 

What would you have said if the style rated highly in the perceptual, but several of the style's songs tested poorly in the auditorium test? My guess is something like, "Well, I guess we don't play those songs!"

 

What do you do now? When all else fails, follow the directions. In this case, the listeners are giving you the directions. They might not want the format as you described it. Maybe they want (using Classic Rock again) a Classic Rock that includes some newer music. Maybe they are giving you directions that your station should be a 'Neo-Classic Rock' radio station. I don't know because I can't see your data.

 

However, I have to say that I'm still troubled about not knowing how you tested the styles of music. I don't know if you named a format and read artists (or played hooks) or if you just read artists (or played hooks). If you labeled the music, maybe your label didn't match the artists or hooks you played. Maybe your definition of the style did not relate in any way to what the listeners think.

 

Your problem identifies why it is necessary to ask significant questions in a variety of ways. If you conduct a format search, you must also conduct a music test for that format so that you know the limits of the format as perceived by the listeners. If you don't conduct a music test, then you're back to the old days of giving listeners what you think they need.

 

In addition, just because you (I'm using 'you' to include anyone at your radio station who is interpreting your research projects) say that the style ratings don't match the song ratings doesn't mean that you are correct. The listeners might be correct—they might be correct in saying that your style description wasn't very good. Or they might be correct in saying that they don't like the style, but some of the music is good. That's the best I can do without seeing the data.


 
