I’m a new PD for an AC radio station and we recently conducted a perceptual study. The sample included two groups of females, 25-34 and 35-44. During the presentation, our researcher (the consultant agreed) continually mentioned that there are major differences between the two age groups of respondents.
I am not a research expert, but I couldn’t see (and still can’t) that there are major differences between the two groups. Some of the numbers seem to be different, but overall, the two groups seem very close to me. I asked about this during the presentation, but the researcher just kept saying that there are differences (“Trust me, these are big differences.”) and looked at me as though I was trying to cause problems.
My question is this: Is there some type of statistical thing that could be used to find out if there are major differences between the two groups? It’s not that I don’t trust our researcher and consultant, but I like to see some type of proof, not just, “Trust me, these are big differences.” - Anonymous
Anon: As you’ll notice, I edited your question to eliminate some proprietary information you included. I don’t think I changed the meaning of your question, but please let me know if I did.
The easiest statistical “thing” that could be done to show whether there are significant (statistically significant) differences between the two groups is a simple independent-samples t-test, which compares the means of two groups. You have a few options here:
Ask your researcher to conduct t-tests on the questions you are concerned about.
If you know how to do t-tests, you could do them yourself in a spreadsheet like Excel, but that would probably take a long time because you would have to enter the data. (I’m sure that you only have tables and not the raw data files.)
I agree with you in that “Trust me, these are big differences” isn’t a good summary comment. A few t-tests, which are easy to conduct, will quickly support or refute the argument.
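For readers who want to see how this works in practice, here is a minimal sketch in Python (using the SciPy library) of a two-group t-test computed straight from the kinds of summary numbers a research report typically provides: means, standard deviations, and sample sizes. Every figure below is made up for illustration; plug in the numbers from your own tables.

```python
# Hypothetical example: comparing the two age cells on one question
# using only the summary numbers a research report usually includes
# (mean score, standard deviation, and sample size per group).
from scipy.stats import ttest_ind_from_stats

# Assumed (made-up) figures for a 1-10 "how much do you like the station" item
t_stat, p_value = ttest_ind_from_stats(
    mean1=7.2, std1=1.8, nobs1=200,   # females 25-34
    mean2=7.5, std2=1.9, nobs2=200,   # females 35-44
)

# A p-value below .05 is the usual cutoff for calling a difference
# "statistically significant."
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Statistically significant difference between the groups")
else:
    print("No statistically significant difference")
```

With the made-up numbers above, the 0.3-point difference does not reach the usual .05 significance level. That is the kind of concrete answer that replaces "Trust me, these are big differences."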
By the way, have you discussed your concerns with your GM? That might be a good idea.
You cite 20+ years of experience conducting research for both media and non-media companies. Do you know of any figures on how much broadcasters spend on research compared to other industries? The impression I get from experience is that radio spends a lot less than most other industries. At the same time, it seems as though audience statistics would be among the most expensive to obtain. For example, if I want data on consumers of Ford trucks, Wheaties, Compaq, etc., that information is readily available in a reliable form. Consumers of tangible products leave a data trail that is much simpler to collect than media usage. Wouldn't that dictate that broadcasters would be forced to spend more on the average than other industries? - Dave
Dave: You ask several questions. I'll do my best to keep this short. If you have any other questions after you read this, please let me know.
1. Although there may be a source somewhere, I can't tell you how the radio industry research expenses compare to other industries. In fact, it's close to impossible to know how much is spent in radio research. The dozens of private companies that do radio research will not divulge their revenues; publicly held companies aren't much different. In addition, radio companies are not consistent in how they record research expenses on their financials. I'm sure there is a way to make a guess, but that would take a long time since there are so many radio companies (soon to be not-so-many radio companies).
The best I can do is take a guess based on my experience. And my experience shows that non-media companies spend proportionately more on research than does the radio industry. Yet, there is a huge difference even within the radio industry. Some radio companies spend very little on research. Other companies (regardless of market size) never make a major decision without conducting research and, therefore, spend a lot on research. Some radio companies consider research an "expense," while others consider research as an "investment." These research "investors" do not like to make decisions without first consulting with their audience via some type of research project. And guess what? My experience indicates that the "research investors" are the most successful companies because they make educated decisions, not guesses.
I know, I know….you may say I'm biased because I'm a researcher. Know what? I'm biased as hell because I know that information is power and I have seen dozens and dozens of stations do a double gainer into the porcelain receptacle because the management did not think that asking the radio station's audience was important—they knew what the audience wanted. They didn't want to spend money on research and are absolutely astonished when the Arbitron shows chicken scratches (<<). When that happens, you know what the next step is—fire the PD. That makes as much sense to me as trying to make popcorn in your refrigerator.
2. I have to disagree with you somewhat about your analogy of non-media companies getting information about their sales figures. Those figures are their "Arbitron." But that data only tells them how much they sell. The data say nothing (just like an Arbitron number) about why people buy a Ford, or a box of Wheaties, or a Compaq computer. These companies do a massive amount of research on top of all the sales data to find out "why" their customers behave the way they do. And without giving away any proprietary information, I can say that you would not believe some of the research that non-media companies conduct about their customers—the successful companies leave nothing to chance. (Keep in mind that there are one or two radio companies—I won't name them—that operate the same way.)
With all that, my educated guess is that non-media companies spend a greater portion of their revenue on research than do radio companies. In addition, non-media companies conduct research on a regular and systematic schedule, not only after they have a "bad book." I don't think that radio research is any more expensive than non-radio research. Non-media companies have to find out the same answers—who, what, why, where, when, and how. They just do it more often and in greater detail.
Keep in mind, please, that there are a few radio companies that make even the biggest non-media companies look like rookies when it comes to research and finding out what the audience wants. My comments here refer to the radio industry in general.
I saw your presentation in San Antonio but didn't write down your definition of research. Would you print it here? - Anonymous
Anon: I gave the definition I use in my research book (written with Joe Dominick from the University of Georgia):
Research is an attempt to discover something.
As I said in the San Antonio presentation, research doesn't always mean hiring a professional research company. We all do research every day for almost every decision we make. For example, you do research to find out how hot the water should be in the shower, how fast you need to walk across the street so you won't get hit by a vehicle, and how loudly to speak to someone so he/she can hear you.
Why don't many radio stations have research directors and where does one go from here? - Anonymous
Anon: Why don't many radio stations have research directors? How's this…
Unlike TV managers who understand the need for a research director who provides sales, programming, and other areas with decision-making information, most radio managers do not perceive the same need. Radio has forever lagged behind TV in reference to research. TV stations conduct a wide variety of research, such as tracking studies. Radio stations do not. TV stations test their promos and other informational elements. Radio stations do not. And the list goes on and on. Nearly all TV stations have research directors.
There is no money in the budget for a research director because radio managers don't perceive the need and no money is allocated for the position.
Many of the top-level radio managers (the CEO types) still believe in giving listeners what they think they need instead of giving them what they want. As long as this continues, radio will be behind the 8-ball and the position of research director will not be considered important. With some exceptions, most radio stations will continue to flop around like a trout out of water because management wears blinders.
Most radio managers consider research as an expense; nearly all TV managers consider research as an investment.
A GM at one of the largest radio stations in the country recently told me that he asked his CEO for research money and he was told that he had "…no money for research…that's why we pay you." There ya go.
Where does one go from here? I'm not sure what you mean with this question. Where do you go if you're interested in research? Or do you mean, Where does radio go if research directors are not perceived as important? I'm not sure, so I'll answer both questions.
#1. If you're interested in research, radio stations are not the places to look (for the reasons listed above).
#2. Non-ratings research became a serious business in the early 1980s. TV stations have viewed research as a necessity since the early 1980s. Most radio stations remain stuck in the 1970s (or earlier) when it comes to research. I don't see this changing until there are radio CEOs who understand that information is needed in order to run a successful business. I know of only two CEOs who fit this description.
Research Lag Time
I have heard several PDs and GMs say that radio ratings are delayed. They have said, "Any changes we make today, we will not see the results for another 6 months." So what they are saying is that any significant change you make in, for example, a Spring Book may not show results until the Fall Book. Have you seen this to be the case, and do you have any research to back that up? - Anonymous
Anon: I agree with what you have heard from PDs and GMs. The only thing I would change is the amount of time. It is actually closer to 12 to 18 months. In some cases, it may take a few years for radio listeners to learn something new. For example, I can recall a research project that showed that only after 8 years did "all" of the target audience know that the radio station changed its call letters. And they used them on the air all the time.
That's why I find it amusing when some new PDs get upset when the listeners don't know that the jingles were changed a month ago, or that a new morning show co-host was added a few weeks ago. Keep in mind that the average person simply turns the radio on to be entertained—if the entertainment they want is not on one radio station, they will go to another one. The average listener doesn't tune in to hear new jingles or liners, etc. They also do not understand "flow." They just don't. The sooner a person in charge of a radio station learns this, the more successful that person will be.
Research Plan - How Long Should We Wait?
We made several major changes to our morning show. How long should we wait to conduct a research study to find out how the show is doing? - LC
LC: The best way to get a handle on changes like this is to set up a panel study where you call back the same people over several months. This allows you to track changes in perceptions. However, you didn't mention anything like that, so I assume you're not doing it.
It usually takes about 6 months until all of your listeners know about changes on your radio station—sometimes longer. Therefore, I suggest that you wait at least 6 months until you conduct a study.
However, if you're eager to get some preliminary reactions, you can always develop a screener for focus groups or a telephone study to look for people who know about the changes. This will probably be expensive, so plan for that.
Research Questions (Formulating)
How can I formulate a research question that is related to the media? Can you give me some examples, especially in areas that have not been touched or were merely glossed over? Thanks. - Lawny
Lawny: Your question indicates that you’re a student, so I assume you know the difference between a research question (a general question about the relationship between two or more variables used to conduct preliminary research) and a hypothesis (a testable statement about the relationship between two or more variables).
Let’s say that you are interested in investigating how many commercials radio listeners will tolerate before they change to another radio station or turn off the radio (a topic of recent interest in this column). Here are the two approaches:
Research Question: Is there a limit to the number of commercials adults (18-54) will listen to in a radio station commercial break before they switch to another station or turn off the radio?
Hypothesis: Adults (18-54) will listen to three commercials before they switch to another radio station or turn off the radio.
The list of available topics for you to investigate is virtually unlimited. The best thing for you to do is pursue questions that you think about. In radio, for example, you can investigate commercials (such as the example I listed), how long it takes people to learn about programming changes, ways to eliminate phantom cume, the relationship between contests and listening habits, and so on. What questions do you have about radio (or any other medium)? That’s the place to start. Trust me when I say that most questions you develop are “glossed over” in research.
However, if you need some additional help, I set up a few Internet searches for you:
Radio Research Questions
Radio Research Hypotheses
Research Questions & Hypotheses in General
It would be cheaper to use my station's database to recruit for an AMT, but how would it affect the research? If I want 50% P1s, what is the downside to recruiting them from the database? - Anonymous
Anon: I have learned many things since I started this column in January. Two of them are: (1) Simple-looking questions usually require the longest answers; and (2) I often have to make assumptions about what people ask.
In your question, you don't give me any hints about what you mean by "affect," so I'm going to have to assume that you mean "How would using my radio station's database for recruiting affect the validity (are you testing what you want to test?) and reliability (are the results consistent if the test is repeated?) of the data?" If I am wrong, please let me know.
First, I am assuming that your radio station's database is good. I have seen many databases that are made up of more than 50% "bad" names—they aren't really the radio station's listeners, the names and phone numbers are wrong, people no longer live in the area, and more. There have been many instances where PDs (or others) say that they have a database with "thousands and thousands" of names, but upon scrutiny, the "good" names shrink to just a few hundred.
To the answer . . . if you have a good database, there is nothing wrong (referring to validity, reliability, and adhering to the tenets (rules) of scientific research) with using it to recruit a sample for an auditorium music test, callout, or any other research. Consider these points . . .
If you follow the tenets of scientific research when conducting a small-sample research project, you are supposed to use a random sample of people. Recall that a random sample is one in which every person (or element) in the population has an equal chance of being selected. In reality, however, we never use random samples for any research. We don't because not every person we select for a project agrees to participate. Many people refuse, and that's why it takes about 20,000 phone calls to complete a 400-person telephone perceptual study. The first time we encounter a refusal, the plan for using a random sample goes out the window.
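The arithmetic behind that 20,000-call figure is worth making explicit. Here is a tiny sketch (Python, using the column's numbers plus a hypothetical target sample) of the completion-rate math:

```python
# Back-of-the-envelope dial budgeting, using the column's example:
# roughly 20,000 calls to finish a 400-person perceptual study.
completes = 400
calls_made = 20_000
completion_rate = completes / calls_made   # 0.02, i.e., 2%
print(f"Completion rate: {completion_rate:.1%}")

# Flip it around: how many dials to budget for a target sample size,
# assuming (hypothetically) that the 2% rate holds in your market?
def calls_needed(target_completes, rate):
    """Estimate total dials required for a desired number of completes."""
    return round(target_completes / rate)

print(calls_needed(500, completion_rate))  # 25000 dials for 500 completes
```

The real rate varies by market and screener, so treat the 2% as an assumption, not a constant.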
So, we always use volunteer samples. We randomly select (or call) people from the population and hope they will agree to participate in our project. A database is helpful because it simplifies the recruiting process. A good database acts as a first-step filter, or pre-qualifier, because it includes the names of people who already meet the specific qualifications you are looking for—such as your radio station's listeners.
For example, let's say that you want to conduct a project with your cume. You have two choices: (1) Randomly call or contact people in your market to search for your cume; or (2) Use your cume database that already includes these people. Is there a real choice there? No.
Now . . . the first major question that comes up is: But not all of our radio station's cume is included in our database. This is true. The database only includes a sample from the sample, so to speak. Is this a problem? It all depends on whether you subscribe to the theory (supported by many psychometric theoreticians) that if something (an idea, belief, etc.) exists, it will exist in any type of sample (random or volunteer) selected from the population.
For example, some people complain that research studies conducted with college freshmen in an introductory mass media class aren't "good" because they include only college freshmen in that class. But the argument posited by most psychometric peeps (and me) is that these respondents' ideas, perceptions, and behaviors are as real as any other respondents'. If something exists, it should also exist with college freshmen, college seniors, radio station PDs, or anyone else. Now, this doesn't mean that you only conduct one study. Scientific research doesn't work that way. The study must be replicated (repeated) with other samples in other locations. If the results are consistent with these different samples, then there is evidence of reliability.
Now . . . with that in mind, selecting respondents from your database (again I will say that I'm assuming it's a good one) follows the logic that if something exists, it should also exist with the people who are included in your database. If you want to verify this, conduct a study where you select 50% of the sample from your database and 50% randomly from the population. Then compare the results. I'll tell you upfront that I have done this many times in the past 20+ years, and you'll probably find that there are no significant differences. You can check this using a variety of statistical procedures.
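As a sketch of that split-sample check (illustrative Python with synthetic ratings, not real respondent data), you could compare the database half against the randomly recruited half with a simple t-test:

```python
# Sketch of the split-sample check described above: draw half the sample
# from the station database and half at random from the market, then test
# whether the two halves answer a question differently.
# All data below are synthetic stand-ins for real respondent ratings.
import random
from scipy.stats import ttest_ind

random.seed(42)

# Hypothetical 1-10 ratings; in practice these come from the two recruits.
database_half = [random.gauss(7.4, 1.8) for _ in range(200)]
random_half   = [random.gauss(7.3, 1.9) for _ in range(200)]

t_stat, p_value = ttest_ind(database_half, random_half)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# If p is at or above .05, there is no evidence that the database recruits
# answered differently from the randomly recruited respondents.
print("No significant difference" if p_value >= 0.05
      else "Significant difference")
```

For categorical questions (format preference, P1 status, and so on) a chi-square test would play the same role.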
So my answer to your first question is: While no research procedure is perfect, there don't seem to be any "affects" on your data if you select a sample from a good database.
Finally, I don't see anything unique about your interest in recruiting 50% P1s. The upsides and downsides are the same regardless of the type of person you're looking for.
Upsides to using a good database: Time and cost savings.
Downsides to using a good database: None.
By the way, even though you may think your database is good, I must add that even these people must pass your screener for your project. You cannot assume that all of the people in your database qualify for your project. The only thing your database does is provide a list of "pre-qualified" respondents.
One more thing: If I were in charge of a radio station, I would hire a person whose sole responsibility was maintaining the database. I'm dead serious. A good list of people could be used for a countless number of radio station activities (and research).
Research Sources - Entertainment Trends
What's up Doc? I am seeking guidance on where I can find the most reliable and recent research on entertainment spending trends in the United States. I want to find out how much is being spent, in what venues and what determines that choice. Thanks. - Anonymous
Anon: What’s up? I need to come up with a stock answer for that question.
One of the best sources for anything dealing with trends is the Statistical Abstract of the United States. However, if that isn’t good enough for you, here are a few Internet searches (not all the references are relevant, but I think you’ll find what you need):
Even More Stuff
Is there a time of year that is best to conduct research for our radio station (music tests or telephone perceptual studies)? - Anonymous
Anon: No, because radio listeners develop and hold perceptions of music and radio stations 365 days a year. However, it is best to stay away from major holidays like Thanksgiving and Christmas since response rates (the percentage of people who agree to participate) are usually lower during these times.
Research Timing - 2
I just started reading your column, so I'm sorry if you have answered this question before. My question is: When is the best time to conduct a perceptual study for our radio station? Thanks. - Anonymous
Anon: I'm glad to hear that you just found the column. Please feel free to ask anything. If I don't know the answer, I'll find someone who does.
Yes, I have answered your question before, but that's OK. Here is your answer . . .
The best time to conduct a perceptual study is when you need to have information to help you make decisions. In other words, excluding major holidays like Christmas and Thanksgiving, any time is the "best" time to conduct research. The same answer goes for music tests and personality tests—conduct the tests when you need to make decisions about your music or your personalities.
Research - "Too Researched?"
Is it ever possible for a radio station to be "over-researched"—researched to the point that it literally has all the life sucked out of it? Being so safe and sterile that you wouldn't add a current until it hit gold? When is it OK to take a chance on a new artist, feature, etc. if ever? - Anonymous
Anon: Just a second here. I need to get my soapbox. OK. Got it.
I have been conducting media research for more than 25 years and I wish I could count how many times I have heard this type of "over-researched" question. But that's OK. Repetition of the message is good.
Let's start at the beginning, and the beginning is to define the word research. In our book, Mass Media Research: An Introduction, Joe Dominick (University of Georgia) and I define research as, an attempt to discover something. That's it.
We don't say that research forces people to make decisions. We don't say that research is a bible that must be followed to the letter. We don't say that users of research are robots who do not have the ability to make decisions. What we DO say is that research is an attempt to discover something. Period. End of definition.
OK, now take that definition and plug it into your comment, which then becomes, Is it ever possible for a radio station to "over-attempt to discover something?" Does that sound logical to you? It doesn't to me. I don't think anyone can ever know too much about anything, whether it's radio or nuclear physics.
However, I need to consider the question as you wrote it, and you suggest that radio is over-researched to the point that it "literally has all the life sucked out of it." I have a clue for you, my friend. If the life has been sucked out of radio, it is not due to research. The life has been sucked out (assuming that's true) by those who use the research.
Look . . . a correctly conducted research study (one that follows the tenets, or rules, of science), merely finds out what listeners want by asking what they like, don't like, and all the other questions. The research consists of data presented in some form on a bunch of pieces of paper. The research itself doesn't make changes on a radio station. The research itself doesn't hire and fire personalities. The research only presents a summary of what the listeners said, think, perceive, and understand. The users of the research make the decisions, the hiring and firings, and so on.
So blaming the research for "sucking the life" out of radio is misplacement of blame (once again, assuming that the life has been sucked out), and, quite frankly, that don't be right. The blame should be directed—rightfully so—at the users.
Finally, to address the last two sentences of your question, research doesn't make things "so safe and sterile that you wouldn't add a current until it hit gold?" Research doesn't add currents. You do. And, "When is it OK to take a chance on a new artist, feature, etc?" Again, I would say those are your decisions.
I hope you understand what I'm saying here. Research is intended to provide you and other users with information to help you make decisions. The information allows you to consider the opinions and perceptions of your listeners before you decide what to do. Knowledge is everything, and the more knowledge you have to make decisions, the better your decisions will be.
Roger D. Wimmer, Ph.D. - All Content ©2018 - Wimmer Research All Rights Reserved