Over the past two weeks, The PDAdvisor
has presented the first two parts of Roger Wimmer’s in-depth response
to several questions posed to him about Internet-based research.
In Part 1,
Roger discussed the standards for "valid and reliable" research
and began looking at several issues related to developing a good sample.
Part 2
examined the sample size fallacy that "big = good" and the
sample control problems created by online research, while the sidebar
looked at the challenges of testing new music online. Roger
concludes the series this week by reviewing 10 keys to consider when
hiring an online research company or deciding to conduct Internet-based
music research in-house.
Online research can be done correctly. The Internet
has opened the doors to a variety of new and exciting research collection
methods. However, from what I have seen, the research is not being conducted
correctly, and radio station people are using data that are, at best, questionable.
If you do decide to hire a vendor to collect information
via the Internet, or if you conduct the research yourself, there
is one requirement you must satisfy:
1. Know your respondents. I understand
that 100% respondent verification isn't possible in any type of research.
However, you must have some idea of who is answering your online
research questions. If you don't, then you shouldn't use the data.
When you have respondent verification (whatever amount
you have), you can check the validity and reliability of your data with
a few simple statistical tests. If you don't know how to conduct these
tests, find a researcher or statistician to help you run things like
a t-test, z-score comparisons, correlations, and standard deviation analysis.
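For readers who want to see what a couple of these checks look like in practice, here is a minimal sketch in Python using the SciPy library. The scores are invented for illustration, and the setup (comparing verified respondents against the full sample) is one plausible check, not a procedure Roger prescribes.

```python
# A minimal sketch of two of the checks mentioned above.
# All scores are invented for illustration.
from statistics import stdev
from scipy import stats

# Hypothetical 10-point ratings of the same eight songs from
# verified respondents and from the full (partly unverified) sample.
verified    = [7.1, 6.4, 8.2, 5.9, 7.7, 6.8, 7.3, 8.0]
full_sample = [6.2, 5.8, 7.9, 5.1, 7.0, 6.1, 6.6, 7.4]

# Independent-samples t-test: do the two groups rate the songs
# significantly differently?
t_stat, t_p = stats.ttest_ind(verified, full_sample)

# Pearson correlation: do the two groups at least rank the songs
# the same way? (Scores must be for the same songs, in order.)
r, r_p = stats.pearsonr(verified, full_sample)

print(f"t-test: t={t_stat:.2f}, p={t_p:.3f}")
print(f"correlation: r={r:.2f}, p={r_p:.3f}")
print(f"std devs: {stdev(verified):.2f} vs. {stdev(full_sample):.2f}")
```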
If you satisfy the first requirement, then here are
a few other mandatory items:
2. If you hire a company to conduct
your research, make sure that the company has a researcher or statistician
(minimum Master's degree in research or statistics; Ph.D. preferred)
on staff, or at the very least, that the research method was designed
and tested by a researcher or statistician. If the company doesn't
have a researcher or statistician involved in the methodology, then
don't use the company.
If you conduct the research yourself, the same rule
holds — you should either have a Master's or Ph.D. in research
or statistics, or hire someone who does to develop and test your methodology.
3. Make sure that the measurement scale uses
at least 5 points. Anything less than 5 points will create
"factor fusion," in which the small scale "crunches"
the data into too few points and doesn't show enough variance. Although
a 7-point scale is good, I prefer to use a 10-point scale for music
ratings and perceptual ratings.
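To see the "crunching" in numbers, here is a tiny sketch in Python (my own illustration; the ratings are invented) of how a 3-point scale can hide a difference that a 10-point scale shows clearly.

```python
# Illustration of "factor fusion": a small scale crunches real
# differences into too few points. All ratings are invented.
from statistics import mean

# Two songs with genuinely different appeal on a 10-point scale.
song_a_10 = [5, 6, 6, 7, 6, 6, 7, 5]   # weaker song, mean 6.00
song_b_10 = [7, 8, 7, 7, 8, 7, 7, 7]   # stronger song, mean 7.25

# The same opinions forced onto a 3-point scale
# (1 = dislike, 2 = neutral, 3 = like).
def crunch(score):
    return 1 if score <= 4 else (2 if score <= 7 else 3)

song_a_3 = [crunch(s) for s in song_a_10]
song_b_3 = [crunch(s) for s in song_b_10]

# A 1.25-point gap on the 10-point scale shrinks to 0.25 points.
print("10-point means:", mean(song_a_10), mean(song_b_10))
print(" 3-point means:", mean(song_a_3), mean(song_b_3))
```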
4. Before you begin to use your data for decision-making,
you need to run several statistical tests to determine the validity
and reliability of the data. You don't need to get carried
away here — t-tests, z-score comparisons, correlations, and analysis
of variance will do.
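As one illustration of the analysis-of-variance piece (again my own sketch, with invented numbers, not a procedure Roger specifies), a statistician might compare several waves of the same online study to see whether the scores hold steady:

```python
# Sketch of a one-way analysis of variance as a rough reliability
# check: do three waves of the same online study agree?
# All scores are invented for illustration.
from scipy import stats

wave1 = [7.0, 6.5, 8.1, 6.0, 7.4]
wave2 = [7.2, 6.3, 7.9, 6.1, 7.6]
wave3 = [6.8, 6.7, 8.3, 5.8, 7.1]

f_stat, p_value = stats.f_oneway(wave1, wave2, wave3)

# A large p-value means no significant difference between waves --
# that is, consistent (reliable) scores from study to study.
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.3f}")
```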
5. Do not fall for the large sample scam.
A large sample does not guarantee that the sample is correct.
Stay away from any company that sells its sample as
valid and reliable because the sample size is large. Hire a company
that sells its sample as valid and reliable because the respondents
go through appropriate screeners and are verified.
6. Do not use online research to test new
songs unless the respondent hears the entire song. We know that respondents hear an entire song in an auditorium setting; we don't know how to verify that on the Internet.
If you figure out a way to verify that the respondents
did actually hear the entire song, please let me know how you accomplished it.
7. Do not use pure volunteers for your research.
All respondents must pass through your screener. This is true
for all respondents, regardless of the source of the list (e.g.,
your radio station's database).
8. Rotate your sample. As detailed
in Part 1,
for specific types of research including listener advisory boards/panels,
replacing 25% of your sample for each study will eliminate the problems
of relying on only one group of people for your data.
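Here is one simple way the 25% rotation might be implemented in Python. The mechanics (random retirement, a pool of pre-screened replacements) are my assumptions for illustration, not a method from the article.

```python
# Sketch of rotating a listener panel: retire 25% of the current
# members each study and replace them with screened newcomers.
import random

def rotate_panel(panel, screened_pool, fraction=0.25):
    """Replace `fraction` of the panel with new, screened respondents."""
    n_replace = max(1, int(len(panel) * fraction))
    keep = random.sample(panel, len(panel) - n_replace)
    newcomers = random.sample(screened_pool, n_replace)
    return keep + newcomers

panel = [f"R{i:03d}" for i in range(1, 101)]   # 100 current members
pool  = [f"N{i:03d}" for i in range(1, 501)]   # screened candidates
panel = rotate_panel(panel, pool)              # after one study
print(len(panel), "members;", sum(m.startswith("N") for m in panel), "new")
```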
9. Use z-scores to compare your online data
to auditorium tests, callout, or one online test to another. Do not
compare raw scores. That isn't valid or reliable. It's also misleading.
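For readers who want to see the arithmetic, a z-score is simply (score minus the mean) divided by the standard deviation, which puts scores from different tests on a common scale. Here is a minimal sketch in Python; the numbers are invented for illustration.

```python
# Standardizing raw scores into z-scores so that results from two
# different tests can be compared on a common scale.
from statistics import mean, stdev

def z_scores(raw):
    """Convert raw scores to z-scores: (score - mean) / std deviation."""
    m, s = mean(raw), stdev(raw)
    return [(x - m) / s for x in raw]

# Hypothetical raw ratings for the same five songs from two tests.
online_test     = [78.0, 65.0, 82.0, 59.0, 71.0]   # 100-point scores
auditorium_test = [7.4, 6.2, 7.9, 5.8, 6.9]        # 10-point scale

for song, (zo, za) in enumerate(zip(z_scores(online_test),
                                    z_scores(auditorium_test)), start=1):
    print(f"Song {song}: online z={zo:+.2f}, auditorium z={za:+.2f}")
```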
10. Do not lend your data to another radio
station or take data from another radio station and use the information
for your radio station. There is no guarantee that the data from one
market will relate to any other market. (If you know statistics,
there is a way to determine if you can share data with other radio stations.)
As I mentioned, online research presents many great
opportunities, but you must know what you're doing. If you use online
research, do it right.
If you have a research question for Roger, email
him at rogerwimmer@thePDAdvisor.com.
The PDA office has received quite a bit of
feedback on the first two parts of Roger Wimmer's online research article,
ranging in tone from professional to personal. Many have expressed curiosity
and a desire for a deeper understanding of the subject; some have expressed
satisfaction that the articles affirmed what they already believed; and a few
have voiced skepticism and even anger that their initiatives would be questioned.
Dr. Wimmer's knowledge in the field of media research
is unmatched by anyone working in Christian radio, but this isn't (or
shouldn't be) an issue of who is right and who is
wrong. This is about understanding objective, scientific truth and ultimately
desiring to do what is best for our stations.
And of all of the formats, those of us in Christian
radio should be the most willing to sift out ego in our quest for doing
what is right, for finding truth.
Following are Roger's responses to some of the new questions
he received during the past week:
> Question 1:
We simply can't afford "real" callout. Isn't online research,
regardless of the lack of control over the situation or how poorly it's
conducted, better than no research at all?
Is bad or poorly-conducted research better than no research
at all? That's ridiculous. What would you say to a doctor who gave you
a bottle of pills and said, "Although we don't know if these pills
are any good, take them anyway because they are better than nothing"?
Research is an attempt to discover something. However,
to be valid and reliable, you must know what the "something"
is and where the data came from. In the case of online music research,
the something isn't known for sure.
Decisions made with bad or poorly-conducted research
can only be bad or poor. Case closed.
> Question 2:
From what you've written, our online research isn't really accurate
and the way we're doing our listener panels isn't exactly right. But
when you combine these and look at the big picture, don't things more
or less even out? Even though the findings are inaccurate by themselves,
don't we have a much better view of what our audience really thinks?
If bad or poorly-conducted research is useless, how
can a combination of inaccuracies be of any value? Contrary to some
opinions, research doesn't simply involve collecting data. Research
involves collecting good data from good respondents.
There is no way that combining inaccurate findings to make decisions
can make the data valid and reliable.
> Question 3:
I know online music research isn't perfect, so wouldn't it be okay if
I just look at it out of curiosity and then don't make any decisions
based on the data?
Sorry, but that's not human nature. It's the same thing
PDs said when Arbitrends (monthlies) were first produced. In
reference to Arbitrends, Arbitron says, "The accuracy
of Arbitron audience estimates cannot be determined to any precise mathematical
value or definition." Even with this warning, programmers now interpret
Arbitrends as "real" numbers.
If you know that online music (or listener panel) research
isn't valid and reliable, then there is no reason to look at the data at all.