Perceptual Study Conference Call Preparation

What is the best way for a programmer to prepare for the first "questionnaire design call" with the researcher? - Jim


Jim: While your question may seem commonplace to some people, it's actually a very good one. It is especially good for new PDs who don't have a lot of experience in the design phase of research projects.


Before I get to the answer, we need a definition of the word "research." In the research book I wrote with Joe Dominick, we define research as an attempt to discover something (Wimmer & Dominick, 2014). This definition is the basis for my answer about what a programmer should do to prepare for the first questionnaire design call with a researcher.


On to your question . . . I don't think a PD should prepare only for a conference call with the station's researcher. Preparation should be an ongoing process. By that I mean a PD should prepare for a conference call every day of the year, not just a day or two ahead of the call. I haven't met a PD yet who doesn't have dozens of questions about the programming, formatics, the listeners, and the station in general. I always suggest that PDs keep a notebook to write down things they would like to know that would help them make decisions. Any PD who does this will have a long list of questions for the researcher when it's time for the questionnaire design conference call.


A PD should take an active role in the development of the questionnaire because he/she is at the station every day and sees and hears all sorts of things. The researcher needs to be told about these things in order to develop a relevant questionnaire. When it comes to your conference call, the worst thing you can do is to say to your researcher something like: "Just design what you think we need." If your researcher agrees, you're in trouble. A researcher who already knows what you need will use a template from a previous study, and you don't need that. You need a questionnaire designed for your station and your needs.


Your participation in helping design the questionnaire for your station should include three steps. You have responsibilities before the call, during the call, and after the call. Let me explain.


Before the call. As I mentioned, keep a notebook that includes all the questions you would like to have answered. Not all of these will be significant, and you may have too many to include in a research project. But keep the list anyway. Send your list of questions to your researcher several days before the conference call, arranged in some order of priority. By the way, don't worry about putting your questions in proper "research" form. That's your researcher's job. A question on your list may be as simple as "How many stop sets should I have?"


Another thing to do before the call is to review any research your station has conducted in the past. Are there any questions you would like to ask again? Are there any questions that were a waste of time? Are there questions that could be changed a little to get better information?


Next, get a good idea of the overall design of the study. Before your conference call, you should have already determined who you would like to interview (the screener questions). This includes things such as age, sex, listening habits, station cume, fans (P1s), and any other qualification that is important to you. More than likely, a lot of time on the conference call will be spent on the screener questions. The more you know ahead of time, the easier this process will be during the conference call.


Finally, you should be able to explain the goal of the project. What are you attempting to answer? Why do you want to conduct the study? What do you hope to find out? What are you going to do with the data?


During the call. Be prepared to tell the researcher everything about your station and every station in the market so that the best questions can be developed. Don't be afraid to "fight" for the questions you think are most important. Keep in mind that you know the most about your station. For example, if you absolutely must have information about how many songs in a row your listeners want to hear, make sure the topic is addressed in the questionnaire.


After the call. When you get the first draft of the questionnaire, read through the whole thing without making any comments, corrections, additions, or deletions. Just go through it. Do all of your editing on the second read-through. The reason I suggest this is that you may waste a lot of time on something on page 2 that is addressed on page 7.


When it comes to conference calls to design a research project, there are bad calls and there are good calls. The bad calls are usually created by a lack of preparation, such as telling the researcher, "Just design what you think we need."

A good conference call will include discussions of these items, and probably in this order:

 

1.   The goal(s) of the project.

2.   What's happening at the station and in the market.

3.   Screener development (who will be interviewed).

4.   The major question areas or modules (morning show, programming elements, etc.).

5.   Specific information needed to help make decisions.

6.   Timing (questionnaire design, when the study will be in the field, approximate date for presentation of the data).

7.   (Not required, but usually included) Jokes, stories, and gossip.


Recall that research is an attempt to discover something. The conference call you have with your researcher is the first step in the process. And by the way, I have one additional thing I always tell PDs about research: "If you don't understand something about the research project or the procedures involved, just ask." Do not, under any circumstances, hang up the phone after a conference call and think, "What the heck was that all about?"


Perceptual Study (Conducting)

Doc:  When is it appropriate to do a perceptual study?  Can I conduct one myself?  Where can I go to find out how?  What is the appropriate number of people in a perceptual?  Thanks. - The "Great One"

 

TGO:  The appropriate time to do a perceptual study is when you need information to make decisions.  The only time it's not good to conduct a study is around major holidays because you'll have too much difficulty contacting respondents.

 

Can you conduct a study yourself?  I don't see why not.  There are dozens of articles and books that explain all the steps involved in conducting research.  However, here are a few of the basic steps:

  1. Purpose.  Determine the goals of the study and the type of person you want to interview.  The sample size depends on the goal(s) of the study, the amount of money in your budget, and the amount of sampling error you are willing to accept.  A typical sample size is 400, which produces a sampling error of ±4.9%, but you can use any sample size you want.  If you plan to make any decisions from the results of your survey, you shouldn't use a sample of less than 400.
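To see where that ±4.9% figure comes from, here is a minimal sketch of the arithmetic in Python, using the standard formula for the maximum sampling error of a proportion at the 95% confidence level (worst case, p = 0.5):

```python
import math

def sampling_error(n, p=0.5, z=1.96):
    """Maximum sampling error for a proportion at the 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size(max_error, p=0.5, z=1.96):
    """Smallest sample size that keeps sampling error at or below max_error."""
    return math.ceil((z ** 2) * p * (1 - p) / max_error ** 2)

print(f"n = 400 -> sampling error of about {sampling_error(400):.1%}")  # 4.9%
print(f"5% error -> n = {sample_size(0.05)}")                           # 385
```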

  2. Respondents.  Determine the screener questions you will use.  That is, identify the type of person you want in your study (male, female, age, radio listening, and so on).  If you use too few screener questions, almost anyone will be able to participate and your information will be virtually useless.  If you use too many screeners, you face the likelihood of looking for a "needle in a haystack," and the results will have limited use when generalizing the data to the population from which the sample was selected.

  3. Questionnaire design.  This is one of the steps in research that non-researchers think is very simple: just ask the respondents a bunch of questions.  Not so.  The quality of the questionnaire will make or break your study, and you need to know what you're doing.  There are many, many rules to follow in designing a questionnaire: no compound questions, no leading questions, no ambiguous questions, no unanswerable questions, and so on.  The questions must be in logical order to take the respondent from the easiest stuff to the hardest stuff.  In addition, you have to make sure that the questions you ask don't provide answers (or opinions) for questions later in the survey.  You'll also need to determine the type of scale you will use for closed-ended questions.  The 10-point scale works the best because most people have experience with the scale in their everyday lives.

  4. Pilot test.  You need to test the questionnaire for time and clarity.  The questionnaire should not exceed 17 minutes or you'll have a significant number of breakoffs (people who hang up before the end of the survey).  The test will also identify any problems with the questionnaire, such as wording, question order, ambiguity, and so on.  At this point, you may decide to change some open-ended questions to closed-ended (or forced-choice) questions.

  5. Sample purchase.  Unless you have some neat way to collect about 45,000 random telephone numbers, you'll need to buy a sample.  There are several survey sampling companies on the Internet that can provide the numbers you need.  The cost is usually about 10¢ per name or number.

  6. Data collection.  Hire a field service to make your calls, or make them yourself.  Depending on the incidence (the difficulty in finding qualified respondents), you can expect to make between 15,000 and 45,000 dialings to complete a sample of 400.
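The dialing count is driven by incidence along with contact and cooperation rates. Here is a rough back-of-the-envelope sketch; the rates used are illustrative assumptions, not figures from this answer:

```python
def dialings_needed(completes, incidence, contact_rate=0.25, cooperation_rate=0.35):
    """Rough dialings estimate: completes divided by the share of dialings
    that reach a live, qualified person who finishes the interview.
    contact_rate and cooperation_rate are illustrative assumptions."""
    return round(completes / (incidence * contact_rate * cooperation_rate))

print(dialings_needed(400, incidence=0.30))  # about 15,000 dialings
print(dialings_needed(400, incidence=0.10))  # about 46,000 dialings
```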

  7. CATI.  All professional field services use CATI (Computer-Assisted Telephone Interviewing), where the questionnaire appears on a CRT and interviewers enter the respondents' answers on a dumb terminal.  The CATI approach requires that your questionnaire be programmed into CATI language.  You can do this yourself (not recommended), or you can pay the field service programmer to do it (recommended).

  8. CPI.  If you hire a field service, your CPI (Cost Per Interview) depends on your incidence and the length of the questionnaire.  If your questionnaire is short (17 minutes or less) and respondents are easy to find, your CPI will be fairly low (about $20 per respondent).  Whether your questionnaire is short or long, if respondents are difficult to find (too many screeners), you'll pay about $75 or more per person.
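The field budget follows directly from CPI; a quick sketch using the figures above:

```python
def field_cost(completes, cpi):
    """Total field-service cost: number of completes times cost per interview."""
    return completes * cpi

print(f"${field_cost(400, 20):,}")  # $8,000 with easy incidence
print(f"${field_cost(400, 75):,}")  # $30,000 with difficult incidence
```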

  9. Data verification.  When the interviewing is finished, you need to verify the data to ensure that things are correct.  The typical approach is to call back about 10% of the respondents to verify their answers.
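If you handle verification yourself, drawing the 10% callback sample is straightforward; a minimal sketch with hypothetical respondent IDs:

```python
import random

def verification_sample(respondent_ids, share=0.10, seed=42):
    """Randomly select about 10% of completed interviews for callback."""
    k = max(1, round(len(respondent_ids) * share))
    return random.Random(seed).sample(respondent_ids, k)

callbacks = verification_sample(list(range(1, 401)))  # 40 of 400 respondents
print(sorted(callbacks))
```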

  10. Code the questionnaire.  If you use forced-choice questions, coding won't take long.  However, if you use open-ended questions, you'll have to develop a codebook for each one (a content analysis of the respondents' answers to develop a coding scheme).  Coding open-ended questions may take many, many hours, depending on your experience and on whether you have software that helps with the process.
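Software help can be as simple as a keyword pass that suggests codes for a human to review. A minimal sketch with a hypothetical codebook; a real codebook comes from content-analyzing the actual answers, as described above:

```python
# Hypothetical codebook: code number -> (label, keywords that trigger it).
CODEBOOK = {
    1: ("too many commercials", ["commercial", "ads", "spots"]),
    2: ("song repetition", ["same songs", "repeat", "over and over"]),
    3: ("dislikes morning show", ["morning", "dj", "host"]),
}

def code_answer(text):
    """Return the codes whose keywords appear in an open-ended answer."""
    text = text.lower()
    hits = [code for code, (_, keywords) in CODEBOOK.items()
            if any(k in text for k in keywords)]
    return hits or [99]  # 99 = other/uncodable

print(code_answer("They play the same songs over and over"))  # [2]
```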

  11. Analyze the data.  There are several good data analysis packages available.  The good ones cost a few thousand dollars.  If you used a CATI program, you'll be able to download the data (respondents' answers) directly into your data analysis package.  If not, you'll have to enter the data by hand, which usually takes several days.  When the data are in the analysis package, you can run any statistical analysis that will help you make sense of the information: crosstabs, ANOVA, correlation, t-tests, chi-square, factor analysis, and so on.
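As one illustration of the crosstab step, here is a minimal sketch using Python's pandas library, with hypothetical data standing in for a CATI download; the last line also covers step 12 by writing the table to Excel (an Excel writer such as openpyxl must be installed):

```python
import pandas as pd

# Hypothetical respondent-level data standing in for a CATI download.
df = pd.DataFrame({
    "age_cell":   ["18-24", "25-34", "25-34", "35-44", "18-24", "35-44"],
    "p1_station": ["WAAA",  "WAAA",  "WBBB",  "WAAA",  "WBBB",  "WBBB"],
})

# Favorite station by age cell, as row percentages.
tab = pd.crosstab(df["age_cell"], df["p1_station"], normalize="index")
print(tab.round(2))

# Step 12: write the crosstab to a readable Excel sheet.
tab.to_excel("perceptual_tables.xlsx", sheet_name="P1 by age cell")
```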

  12. Data presentation.  Either use the tables prepared by the analysis software (usually not very easy to read), or download the data into a spreadsheet, such as Microsoft Excel.  If you don't have any experience doing this, you can figure on about 40 hours to convert your data analysis software tables to Excel (readable) tables.

  13. Report.  If you don't plan to have anyone else look at the information, you probably won't have to write a report.  However, if anyone else is involved, you'll need to write a summary of the study.

Those are some of the basic steps.  Let me know if you have any other questions.


Perceptual Study Focus

I have an AC and a Hot AC station in my cluster.  I want to do a perceptual study since we haven't done one in at least three years.

 

One research company says to do an AC only study by only talking to people who cume all variations of AC stations in the market.  Another research company says look at the whole market to get information that reflects the population and inside the study focus on the various AC only issues.

 

If I get to do this kind of research once every few years, what's the best approach to take?  If I get one thing out of this research, I need to know what is the best music formula for each station.  What approach would the Dr. take?

 

One last thing, what sample size do I need for an 18-54 demo spread knowing I have music pods to look at?  - Anonymous


Anon: You haven't done a research project in at least three years?  That's too bad.  Over the past 30+ years conducting research for radio stations, I have seen that it's best for virtually all radio stations to do at least one perceptual each year.  I have also seen that the "budget" (I put that in quotes because the budget is usually used as the excuse not to do research) doesn't allow radio stations to do research every year.  You know . . . similar to Coke, GM, Microsoft, and other companies who don't do research because of their budget.  OK, off the soapbox and on to your question . . .

 

Let's look at the two options the research companies suggested:

 

Option 1:  AC only study by only talking to people who cume all variations of AC stations in the market.

 

Among the first few questions in this type of study, respondents would be asked which radio stations they listen to during a typical day, week, or some other time period.  If they name any AC radio station in the market, they qualify for the study (as long as they pass other screener questions in the questionnaire).  So, what does that type of screener setup give you?  Well, it obviously gives you the respondents who listen to AC radio stations.  But what does this approach leave out?

 

The approach excludes respondents who like AC music but don't listen to any of the AC radio stations in the market for one reason or another (they don't like the radio stations, they don't know they exist, and so on).  Uh-oh.  We have a problem here, and it's a problem I have seen repeated countless times during my career.

 

If a research project investigating a specific format is designed to include only respondents who listen to radio stations in that format, the project will automatically exclude respondents who like that format but don't listen to the radio station or stations.  See what I mean?

If the goal of your project is to find the potential for your AC and Hot AC radio stations, screening respondents in (qualifying them) based on radio stations listened to is not going to give you the information you need because you won't be looking at all the people who like AC or Hot AC music.

 

A better approach is to screen people in by having them rate several types of AC music.  If the respondents rate the correct types highly (however you define "correct" and "high"), then they qualify for the study.  That approach will provide you with the information you need, and you can then have your research company determine how big the audience is for AC and Hot AC by referring to the call disposition sheet for the study.

 

Option 2:  Look at the whole market to get information that reflects the population and inside the study focus on the various AC only issues.

 

I think this is basically what I just discussed, but I'm not sure what "whole market" means.  If this means to include only people in the target age group who listen to AC or Hot AC, then that's OK.  If not, then I need more information.

 

Finally, you ask, "What sample size do I need for an 18-54 demo . . .?"  First, in most cases, you shouldn't have to use more than 400 respondents for a radio perceptual study.  This sample size has a sampling error of about plus or minus 4.9%, and that's good for a behavioral research study.  If you plan to break out the data in a bunch of small cells, then you may need to use more than 400 respondents.  Your sample size should be determined by how you will analyze the data, but I rarely find situations where more than 400 respondents are needed.

 

An 18-54 demo?  That seems a bit broad for an AC/Hot AC study.

 

From the information you have given me, it sounds as though you need three pieces of information: (1) The potential for AC and Hot AC in your market; (2) What these listeners want from an AC or Hot AC radio station; and (3) How your radio stations compare to what the listeners want.  These things, along with phantom cume, cume-to-fan conversion, the percentage of listeners you are getting from the universe of AC/Hot AC listeners, what you need to do to get more listeners, and other things your research company will include, will give you what you need.


Perceptual Study (Music Montage): Second Opinion

Doctor:  In medicine, when dealing with something serious, it's often good to seek a second opinion, right?  In case that's true in research as well, I wanted to run this by an expert like you. (Kissing up is how I roll, sorry.)

 

We're doing a perceptual study for our market to find out who the most likely audience is for our format.  This will involve playing a hook montage that best represents our format.  If the respondent rates the montage a 4 or a 5 (on a 5-point scale, where 5 is "most favorable" and 1 is least), we continue to dive deeper into the survey with them.

 

If the respondent rates 3 or lower, we move on to some other things (not involving any more montages) and quickly wrap up.

 

I asked our researcher what difference it would make to include those who gave our montage a "3."  (I understand why 1s and 2s might be a waste, since they're disinclined to us from the start.)  My thinking being: if someone rates the format a 3, they don't HATE us, and as we dive deeper into the survey, their additional answers may shed some light on how to optimize our format so that they could perhaps score it at a 4 or a 5 in the future.

 

Our researcher contends that anything lower than "5" won't provide a good foundation to draw conclusions.  Even a "4" is sketchy in this person's mind, since existing passion and familiarity are waning at anything under a 5.

 

He makes a good point. I thought mine was decent too.  What's your take? - Anonymous

 

Anon:  Your question involves two items, not just one.  Let's first deal with the "5" rating on the montage.

 

Like most things in life, there are a variety of approaches in research to collect answers from respondents.  Therefore, there is nothing wrong with your researcher's idea to use only the respondents who rate the montage as a "5," but I don't use this approach.  Why?  Because I think using only the respondents who give the montage the highest rating is restrictive for several reasons, including, but not limited to:

1.  Some people are "tough" raters and rarely rate anything with the highest number, even if the item rated is their favorite.  It's possible that some people (I can't tell you how many) who consider the music their favorite will rate the montage as a "3" or "4."

2.  The theory behind using only the highest raters is that these people are, supposedly, P1s (fans of the radio station; people who listen most often to a radio station or the type of music represented in the montage), but this may not be true because of the differences in how people rate anything, including music montages.

3.  Research that involves humans needs to be flexible because of the wide variety of perceptions people have about anything.  Sure, it's easy and "clean" to select only those who rate the montage as a "5," but my experience in radio research during the past 30+ years indicates that it's best not to guess at how respondents will answer any question.  With that in mind, there should be some flexibility in what you accept or don't accept as good or bad, usable or not.

 

Now, because I don't like to guess at what respondents will or will not do, believe, or perceive, I think it would be wise in your study to include respondents who rate the montage a "4" or "5," but also accept 20 or so who rate the montage as a "3" to see how these people differ from those who rate it a 4 or 5.  You can do this by creating banner points for each of these ratings.  The banner points will allow you to compare each group to the others to see if there are any differences.  My guess is that you will find some very interesting information about the three groups of people you can use in programming your radio station.
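As an illustration of what those banner points buy you, here is a minimal sketch in Python (pandas) with hypothetical data; a real study would compare the three rating groups across every question in the survey:

```python
import pandas as pd

# Hypothetical data: montage rating (3, 4, or 5) plus one survey answer.
df = pd.DataFrame({
    "montage_rating": [5, 5, 4, 3, 4, 5, 3, 4],
    "tsl_hours":      [12, 15, 9, 4, 8, 14, 5, 10],  # weekly time spent listening
})

# Compare the three rating groups on any answer of interest.
print(df.groupby("montage_rating")["tsl_hours"].mean())
```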

 

As I said, there is nothing wrong with only allowing respondents who rate the montage as a "5," but I tend to use a conservative approach in all research projects and don't assume that all people will perceive any rating scale in the same way.  I would rather err on the side of conservatism than possibly artificially eliminate qualified respondents.

 

Now, on to the second item . . .

 

I know that many researchers use music montages to qualify respondents for research studies, but music montages have problems.

 

When I used music montages a few decades ago to qualify respondents, I noticed that the incidence was very low.  Incidence is the percentage of people contacted who actually qualify for a study.  I asked the interviewers to call back several respondents who rated the "target" montage lower than what was required to qualify for the study.  What we found was that a large percentage of the people (I think it was about 30%) who didn't rate the montage high enough to qualify for the study were actually P1s of the client radio station.

 

What?  How can that be?  The montage included four or five songs selected by the PD as representative of the radio station's playlist.  We thought, and it seemed logical, that a representative sample of songs a radio station plays would be "recognized" or "identified" as representing the music the client radio station plays, but in reality, it doesn't work that way.

 

The reason it doesn't work that way is that the montage may include one or more songs the respondents: (1) Don't like; or (2) Don't recognize.  These were comments from the respondents in follow-up interviews.  In addition, we found that many of the radio station's P1s didn't identify the client's "target montage" as representative of their favorite radio station.  Over the years, I found that as many as 65% of a radio station's P1s don't rate a "target" montage as highly as we would expect.  That's a lot of people to artificially eliminate from a research study.

 

Frustrating, yes, but that's the way it is.  While PDs, consultants, or anyone else may develop a music montage that supposedly represents a radio station, the listeners (only the most important part of the equation) may not agree.  So what do you do?

 

1.  Because music likes and dislikes are so volatile, I found the best approach is to avoid using music hooks (montages) to qualify respondents.  Instead, I use artists, and ask a question like, "I would like you to rate different types of music by reading short lists of music artists.  For each group, please tell me how much you like the music by the artists by using a scale of 1 to 10, where the higher the number, the more you like the music as represented by the artists.  Please rate the music as a whole, not a specific artist."  Then the respondents are asked, "How much do you like music by artists such as . . . (three or four artists are read)."


2.  What this approach does is eliminate the volatility of individual songs.  Respondents may "love" a certain artist, but they may not "love" all of the songs the artist performs.  If one of the "hated" songs is included in a music montage, the montage as a whole may be rated lower than expected.  This doesn't happen when artists are included.

 

3.  I always include at least two groups of artists the client radio station plays, and sometimes three groups.  I do this to reduce the possibility of making an error in the artist lists.

 

4.  I always ask the respondents which radio stations they choose to listen to during a typical week.

 

5.  Respondents can then qualify for a study in two ways: (1) By rating one of the artist groups with a certain rating (usually 8, 9, or 10 since I always use 1-10 scales); or (2) Naming the client radio station in the radio listening question.
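In code form, that two-path screener logic looks like this; a minimal sketch, with hypothetical call letters:

```python
def qualifies(artist_group_ratings, stations_listened, client_station="WAAA",
              cutoff=8):
    """Two ways in: rate any client artist group 8-10 on the 1-10 scale,
    or name the client station in the weekly listening question."""
    return (any(rating >= cutoff for rating in artist_group_ratings)
            or client_station in stations_listened)

print(qualifies([7, 9], ["WBBB"]))          # True: one artist group rated 9
print(qualifies([5, 6], ["WAAA", "WBBB"]))  # True: named the client station
print(qualifies([5, 6], ["WBBB"]))          # False: neither path
```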

 

In summary, you can use the music montage approach, but I would suggest being more flexible in who you allow into the study.  However, I suggest the music artist approach over the music montage approach.


Personalities - Problems with Them

In the research you have done, what are the main complaints about radio personalities? - Anonymous

 

Anon: Good question. I didn't review my files for a Top 10 list or anything, but four items come to mind as most often mentioned by listeners.

Radio personalities must remember that they are invited into a person's home, car, or office. If the listeners feel the personalities don't relate to them, they will go somewhere else. Radio personalities who feel they are granting people the opportunity to listen to the "golden talent" have it all wrong. The listeners are granting the personalities the opportunity to inform and entertain them. If the personalities don't match this expectation, the listeners will leave.

 

The title "Radio Personality" does not automatically grant an audience. The radio personality must earn listeners. Case closed.


Personality Ratings

I have a lot of difficulty dealing with my morning show team (I'm the PD). What it comes down to, I guess, is that they don't trust my judgments/opinions about their show. I may criticize something or make a suggestion and they look at me as if I were an idiot (which I may be, but I'm pretty good at my job). Anyway, is there any research approach I can use to help me with their show? Maybe some independent information would get them to listen. - Anonymous


Anon: You'll notice that I did a major edit on your question, but I don't think I changed the content. Let me know if I did.


You didn't mention how long you have been the PD at your radio station. Sometimes rookies have difficulty with the "old-timers." If that's the case, then maybe the trust in your judgments and opinions will develop over time.


However, you asked about research help. There is a way to do what you are looking for. I'll explain this from my own approach. If your radio station uses a research company, ask someone there to help you with your problem.


Over the past several years, I have developed (with the help of PDs and consultants) a list of characteristics and qualities for jocks, talk show hosts, and other on-air personalities. The procedure is very simple. In a telephone perceptual study, listeners rate the importance of each of these elements. The listeners also rate the radio station's personalities on the same list. The only task, then, is to compare the importance ratings to the individual personality ratings. You have an instant checklist that any personality pays attention to because they are rated against an ideal set by the listeners. The process makes your job very easy.
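As an illustration of that comparison, here is a minimal sketch in Python (pandas); the characteristics and ratings are hypothetical, and the gap between the listeners' ideal and the show's rating is the checklist:

```python
import pandas as pd

# Hypothetical importance ratings (the listeners' ideal) and the ratings
# the same listeners gave the morning show, both on a 1-10 scale.
df = pd.DataFrame({
    "characteristic": ["funny", "relatable", "knows music", "not mean-spirited"],
    "importance":     [8.9, 8.4, 7.2, 9.1],
    "show_rating":    [8.7, 6.9, 7.5, 7.0],
})

# The biggest gaps are the items the show should work on first.
df["gap"] = df["importance"] - df["show_rating"]
print(df.sort_values("gap", ascending=False))
```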


Persuasion - 5 Stages

I know you have answered this before, but I can't find the copy I made.  Would you please explain the 5 Stages of Persuasion/Communication?  I need it for a speech I'm going to do.  Thanks. - Anonymous

Anon:  The article I wrote, "5 Stages of Persuasion," is in the "Readings" section on my business website.


Persuasion and Research

I read your lengthy article about Persuasion and Communication some days ago. Basically, one of the concepts was to first ask people what they want, then give it to them, and finally tell them you gave it to them. However, it seems to me there may be some problems doing this:

 

1. Some people simply don't know what they want. If you ask them, they'll tell you anything they think could be good in general, but that doesn't necessarily mean they want it for themselves.

 

2. When people get what they want, sometimes it's still not what they expected. Before they have it, they have a different picture of what it is than when they actually experience it.

 

Let's take two examples:

 

1. When someone is asked which songs and artists they'd like to hear on radio, they say "Well, I have no idea, but Frank Sinatra would be good, since he was a famous man." (That's probably why you play actual song hooks for people to rate instead of just naming titles and artists.)

 

2. When, following that study, a radio station actually plays Frank Sinatra, those same people hear that his music is old and outdated and decide they don't want to hear it anymore. So, when the station says "We played Frank Sinatra just for YOU," they won't be able to appreciate that as much as the managers thought.

 

Which means, do people really want what they say they want? Or, keeping to the example, do you simply have to account for errors by playing a bit of "Sinatra music" for people to realize what they DON'T like? Or is it wise for station managers to use some common sense and assume that there's something people say they want but don't yet know they really DON'T want?

 

Complex question, isn't it? - Anonymous

 

Anon: I obviously don't know who you are, but you seem to have a good eye for details. I commend you for your thinking.

 

Actually, I don't think it's a complex question. What you have done (maybe unknowingly) is prove that research isn't easy. Research isn't simply throwing questions in the faces of the respondents; research takes a lot of thought and planning.

 

You are correct in saying that people often cannot verbalize what they want; it's easier for them to verbalize what they don't want. That's the job of the research. Simply asking people "What kind of music would you like to hear on the radio?" won't work. You may get answers like "Frank Sinatra." Merely asking people what they want will usually get you nowhere.

 

You're also correct in saying that people are sometimes disillusioned after they get what they say they want. If that's the case, then the original research questions weren't asked correctly and there wasn't enough follow-up.

 

So in both cases, it's the job of research to wade through all this stuff. It's accomplished by asking the same question in a variety of ways. It's accomplished by using several different samples of respondents. It's accomplished by using several research methods. And it's accomplished by thorough follow-up questioning. Your example of Frank Sinatra demonstrates the research problem of "method-specific results," and that's not good.

 

I understand that your example of Frank Sinatra is only one example and that it's a rather broad application of the process, but a professional researcher would not ask that question. Or if the question was asked, it would be followed by several follow-up questions. (And yes, you are correct about testing songs in a music test, not just titles and artists. Only testing artists and titles will get you nowhere.)

 

Here's a simple non-radio example that demonstrates what I'm talking about and what you highlighted. Assume that you want to ask a person out for dinner. You ask, "Do you like seafood?" The person says, "Yes." You have now found out what the person wants. Friday night comes and you take the person to the best seafood restaurant in town. When you're ordering your dinners, your date says, "I'll just have a House Salad." You're confused. What happened? You say, "I thought you liked seafood?" The person says, "That's true, but only on Saturday."

 

That's a silly example, but it demonstrates that you didn't ask enough questions.

 

The process of finding out what people want and giving it to them is the only way to succeed. But as you have pointed out, the "finding out" part takes skill. In addition, the "give it to them" part isn't simply taking research data and applying it without any consideration. When it comes to giving people what they want in radio, it takes the talent, creativity, and skill of the radio station management. Research alone will not do that.

