Why is it when you buy sponges in a cellophane wrapper they are still moist? What actually are they moistened with? What is the liquid in them? - Nikki
Nikki: I didn’t know the answer to your question, but I found it on the Internet…click here for the answer.
OK, I work nights, so I don’t get to bed until about 3:00 a.m. I end up watching ABC’s “World News Now.” When they are going to a break, they sometimes show the “World Darkness Quotient” and the “National Temperature Index,” which is several hundred degrees. What do these two values mean? Thanks. - Jay
Jay: From everything I can find, the “World Darkness Quotient” and the “National Temperature Index,” are virtually meaningless; they are pieces of information used to attempt to make the show more interesting.
For example, in reference to the National Temperature Index, I found two items:
“The national temperature index expresses temperature departure from the 60-year mean in terms of standard deviations. Each year's value is computed by standardizing the temperature for each of 344 climate divisions in the U.S. by using their 1931-90 mean and standard deviation, then weighting these divisional values by area. These area-weighted values are then normalized over the period of record. Positive values indicate warmer than the mean and negative values indicate cooler than the mean.”
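The standardization the quote describes is an ordinary z-score, weighted by area. As a rough sketch of the arithmetic (all temperatures, means, standard deviations, and areas below are invented, not real climate data):

```python
# Hypothetical illustration of the area-weighted z-score standardization
# described in the quote. All numbers are made up for demonstration.
divisions = [
    {"temp": 55.2, "mean": 53.0, "sd": 2.0, "area": 100.0},
    {"temp": 48.1, "mean": 49.0, "sd": 1.5, "area": 250.0},
    {"temp": 61.0, "mean": 60.0, "sd": 2.5, "area": 150.0},
]

# Standardize each division: (observed - long-term mean) / standard deviation
for d in divisions:
    d["z"] = (d["temp"] - d["mean"]) / d["sd"]

# Weight the divisional z-scores by area to get one national value
total_area = sum(d["area"] for d in divisions)
national_index = sum(d["z"] * d["area"] for d in divisions) / total_area

# Positive = warmer than the mean, negative = cooler
print(round(national_index, 3))
```

A positive result means the (hypothetical) nation ran warmer than its long-term mean that year, in standard-deviation units.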
And this from a person named Jan Null...The National Temperature Index is, “simply the sum of the forecast high temperatures for Boston; Casper, Wyo.; Dallas; Denver; Fargo, N.D.; Las Vegas; Miami; New Orleans; Raleigh, N.C.; and Seattle.
As I said, it appears that both numbers are meaningless pieces of information.
Doc: One of my favorite performers of all-time is Bobby Darin, who died at age 37 in 1973 after his second open-heart surgery. I was watching Bobby Darin videos on YouTube and became intrigued by the drummer in Darin's band. The guy is great. Do you have any idea who the drummer was in the band? - Jim
Jim: Bobby Darin was one of my favorites too and I'll include the links you sent for the videos if anyone else would like to see two of Darin's performances: Bobby Darin One Bobby Darin Two
I didn't know the drummer in Darin's band until I searched the Internet. His name is Tommy Amato and you can read hundreds of articles about him in this search.
In one of those endless discussions about evolution, one of my listeners proclaimed that Charles Darwin completely renounced his theory of evolution and natural selection on his deathbed. I've looked everywhere but can't seem to find any sort of information about a deathbed confession. Do you have any insight on this? Is there some other place I can look to find information on this, or is this just one of those "urban legends?" This is one of those mysteries I need to figure out. If you can help at all, I would greatly appreciate it. Thank you. - Mark
Mark: You should know that whenever you talk about evolution that you'll get all sorts of comments. And the deathbed "renouncing" is one of them...click here.
A statement on the site says: "The Darwin deathbed story is false. And in any case, it is irrelevant. A scientific theory stands or falls according to how well it is supported by the facts, not according to who believes it."
This statement essentially defines the difference between the scientific method of research and the other methods of knowing. Science tests hypotheses. Scientific researchers are objective observers of data—they let the data fall where they may. You can read more about the scientific method of research in my book or check the Internet for thousands of great articles and discussions.
I’m not sure how to ask this question. I’ll do the best I can. I read in your book that research isn’t really used to its maximum; that is, a research study is conducted, quickly analyzed, and then put away on a shelf somewhere. You suggest that most data can be analyzed from a variety of perspectives to get the most out of the research. My question is: Is this also true in radio research? - Anonymous
Anon: I think you explained your question very clearly. No problem.
Yes, I think the same problem exists in all radio research. In virtually all radio research, the study is conducted, some type of presentation is done, and then the study is put away never to be looked at again. The problem is that most clients (radio station people) aren’t interested in further analysis. They want to see the results to the questions they asked and that’s about it. Radio people would be amazed at the amount of information they overlook in a typical research study.
This is true even with music tests. For example, consider the typical auditorium music test. In most auditorium tests, a sample of 75-100 respondents rates hundreds of songs. (This produces a massive amount of information.) The data are summarized and the PD gets a printout or a CD to see how each song performed. These summaries are then used to help decide which songs should be included in the playlist. When that’s done, the study is usually put on the shelf. But the data could be used for much more than simple rankers.
For example, auditorium data could be used to find out:
How each song correlates to every other song in the test. This would help a PD decide which songs are similarly rated by respondents and help to determine rotations or song placement.
The differences in ratings by respondents according to when they listen to the radio most often. For example, some songs score higher with respondents who listen to the radio only in morning drive.
If there are statistical differences in song ratings among the various demographic groups in the test. The usual approach is to simply eyeball the song ratings for, say, 18-24 and 25-34, but most music test software does not test for statistical significance. The differences are judged only because “it seems like” the younger people like the song more than the older people.
The statistical differences between P1s, P2s, and all the other Ps. For example, are there songs that clearly relate to P1s more than P2s? It’s possible to compare the scores between these groups in current music test software, but most, if not all, do not allow a PD to know if there is really a statistically significant difference between these two groups (or any other groups in the test).
If there is a statistical model to allow a PD to predict the score for a song that was not tested in the session.
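As a sketch of the first idea in the list, song-to-song similarity could be measured with an ordinary Pearson correlation across respondents. The ratings below are invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 1-10 ratings from six respondents for two songs
song_a = [8, 9, 7, 6, 9, 8]
song_b = [7, 9, 6, 5, 8, 8]

r = pearson(song_a, song_b)
print(round(r, 2))  # a value near 1 means respondents rate the songs similarly
```

Run over every pair of songs in a test, a matrix like this would show a PD which songs the audience hears as belonging together, which is exactly the rotation/placement information described above.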
I don’t want to get carried away here. The point is that I agree with your perception of the situation. There is a great deal of information just waiting to be uncovered in studies sitting on the shelves of PDs and GMs.
Just a quick question. I'm doing some analysis of some radio station data. The data I have are not normally distributed and I want to change that. I remember from a class I took that there is some procedure I can use to transform the data. Do you know what that is? – Rod
Rod: The procedure you're talking about is called monotonic transformation. The procedure will not perfectly normalize your data, but it will bring in the outliers (the data points that are far from the mean).
Try this: compute the square root of each score (or number) in your data set. If that doesn't work, then use the logarithm (base 10) of each number. These procedures won't change the rank order of your data points, but they will pull the distribution closer to normal.
Another thing to look at is your outliers. In some cases, you can eliminate scores that are farthest from the mean. Get rid of them if you think the scores aren't logical with the entire data set.
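Both suggestions are easy to try. A minimal sketch, using made-up right-skewed data and a simple two-standard-deviation rule for trimming (any cutoff is a judgment call, as noted above):

```python
import math
import statistics

# Hypothetical right-skewed data (e.g., minutes of listening per day)
scores = [5, 7, 8, 9, 10, 12, 14, 15, 90]

# 1. Monotonic transformations: try the square root first,
#    then log base 10 if the data are still too skewed.
sqrt_scores = [math.sqrt(s) for s in scores]
log_scores = [math.log10(s) for s in scores]

# 2. Trimming outliers: drop values more than 2 standard deviations
#    from the mean (only if they don't fit the rest of the data set).
mean = statistics.mean(scores)
sd = statistics.stdev(scores)
trimmed = [s for s in scores if abs(s - mean) <= 2 * sd]

print(trimmed)  # the extreme value 90 is removed
```

Note that the transformed scores keep the same order as the originals, which is what "monotonic" means here.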
We have thousands of names in our radio station's listener database. Is it OK to use these names for research projects? - Anonymous
Anon: I'm not sure if I answered this before or not, but I don't feel like looking through my files, so I'll answer it again. I'm assuming that you have built this database from such things as contest entries, web site registrations, and remotes—as opposed to names from previous research projects.
The fact that you have "thousands of names" in your database does not mean that all of the names are "good." I recently saw the results of a radio station's research project that used its listener database and only about 20% of the names were good.
Whatever you do with your database, make sure that these people pass the screener for your project. Don't automatically assume they are cumers or fans (P1s). Your list probably includes many errors, people who have moved out of the city, people who no longer listen to your radio station, and bogus names submitted by your competitor(s).
I'm sure you know that the best way your competitors can find out what you're doing is to get on your e-mail or fax list. These two technologies have made radio station spying much easier.
Dayparting - CHR
How do stations that daypart handle weekends? Do they treat them like nights? Like days? Like weekdays? Do they lift all daypart restrictions from their libraries on weekends? - Anonymous
Anon: There is no commonality to what CHR radio stations do on the weekends. There are stations that do all of the approaches you suggest. The absolute best way to find out is to ask your listeners because the desire for weekend programming varies from market to market. If you can't afford the research to find out, then at least conduct an informal survey with the listeners who call in to request songs. If you can't do that either, then the most logical approach is to treat weekends the same way you treat weekdays.
Doc: I promise I checked your archive. If you have already answered this, please forgive me and point me in the right direction.
Has there been any research done on dayparting songs, and if so how effective do you think the research was and what did it indicate? I would expect one of two reactions from listeners: “I like that song, but I'm glad they don't play it (at night, in the morning, etc.)” or “Why do I only hear that song (at night, in the morning, etc.)? I wish they would play it other times!”
I could see getting an idea of the mood or style the majority of your audience might like to hear at certain times of the day (although in this increasingly 24-hour world we live in it seems like it would be really hard to determine when people are sleeping, working, playing), but do stations research dayparting of particular songs?
To me, dayparting always said, "this song isn't good enough to be played all the time" or "we're trying to fool the listeners listening at this time that we're not like the station other listeners hear at that other time." What do the numbers (if any) say?
Bonus Question: What, if any, impact does the name of a station have on its success? Will people listen more to a Q92 than The Buzzard, or vice versa? I say if you play the right songs, you can call yourselves "The Vomit" or "Dirty Diaper 98.5" and people may not like your imagery, but they'll listen. Any numbers on this? - Gene
Gene: I can’t remember answering this type of question before, so it wouldn’t be in the Research Doctor Archive. However, I have answered a few thousand questions in the past four years and can’t remember every one of them. Regardless, on to your questions…
When Sir Isaac Newton was asked how he was able to solve so many problems and discover so many things, he reportedly said, “By thinking without ceasing.” Now, I’m not in the intelligence league with Newton, but unbeknownst to you, I have thought for nearly two days about how to answer your questions. While your questions may seem simple to some people, in reality, your questions open an enormous Pandora’s Box of items. So, sit back, my friend, because this is NOT a short answer.
But…before I answer your questions, I’d like to get on the “research soapbox” for a moment. I need to do this because the information is important in order for you to understand my answers. However, before the soapbox starts (that’s two things now before I get to your questions), it’s necessary to understand the concept of Ockham’s Razor—something that you may already know, but I’m including it here for the readers who don’t know anything about it.
Ockham’s Razor is a principle developed by William of Ockham (I call him Willy), a 14th century Franciscan monk and philosopher who developed a variety of interesting concepts. In Latin, Willy wrote, "Pluralitas non est ponenda sine necessitate,” or sometimes you’ll see it as, “Entia non sunt multiplicanda praeter necessitatem.” Yeah, sure, he might as well have said, “Xytereira uxjikealk non perqiraixdaxxeit.” (Looks the same to me.) Anyway, there are a variety of translations available for Willy’s principle, but the words basically mean that entities must not be multiplied beyond what is necessary. Or, following Willy’s lead, what he really said was,
“The simplest approach is always the best.”
If you check a variety of sources, you’ll find that Ockham’s Razor is also known as the Parsimony Principle, Law of Parsimony, or the Principle of Parsimony.
Know what? There sure are a bunch of ways to refer to Ockham’s Razor and it doesn’t appear that people are following Willy’s advice about simplicity. Why isn’t there only one name for the principle? I don’t know. However, the name variations for Ockham’s Razor are probably due to the fact that Willy is said to come from Ockham or Occam—both are considered correct. (I think I’m going to tell people from now on that I’m from Denvre—Roger of Denvre.)
Why is all this stuff about Willy and the Hand Jive so important? No, wait, that’s an old song. Change that to….Why is all this stuff about Willy and his Razor important? Because it creates the base for my soapbox topic called, “The Need for Simplicity in Research.”
But wait a second. What is research? In our textbook, Mass Media Research: An Introduction, Joe Dominick and I define research as: an attempt to discover something. That’s it. Research isn’t magic, research doesn’t include statements like, “it seems like,” and research isn’t intended to be a scapegoat for a person’s failings, stereotypes, predispositions, or prejudices. Research is simply an attempt to discover something about anything in an objective and quantifiable way.
Now, I am a strong advocate of the Scientific Method of Research—the Scientific Method of Learning. The Scientific Method doesn’t include pseudo-science methods (bad/fake science…Vulpes Fulva leavings such as astrology and horoscopes, Tarot cards, Palm Readings, Ouija Boards, and other similar baloney like “interpreting” Dung Beetle tracks in the sand). OK, so I made up the last one.
The Scientific Method of Research/Learning differs from the other Methods of Knowing in a variety of ways, which you can read about at that link.
So what do we have so far?
First, research should be simple. There is no reason to make things more complicated than they need to be. Next, in order to be valid and reliable, the research should follow the guidelines of the Scientific Method. OK. Got it? So now what?
Well, the “now what” is…let’s get back to research.
Researchers in the hard sciences like physics and chemistry have an easy life (so to speak) because they deal with exact elements. If one researcher analyzes a piece of metal using universally accepted procedures and concludes that the metal is iron, another independent researcher will come to the same conclusion using the same universally accepted research methods. In other words, the items hard scientists investigate are static—they don’t change, so it’s easy to describe, understand, and predict things.
But that’s not true in behavioral research—research that involves human subjects or respondents. People are not static. They constantly change beliefs, ideas, reactions, emotions, and everything else. That’s why what is popular (the latest fad) today probably won’t last long. For example, look at what’s happening to the Beanie Baby craze. At one time, the stuffed collectibles were the rage and now it’s difficult to give the things away. Why? Because human beings change. And they change their likes and dislikes just as quickly about radio—music, DJs, spot loads, stop sets, and so on. The format and formatics that are popular today probably won’t last long and radio management must keep up with listener likes and dislikes.
This is where research enters the picture—find out what the listeners want. But it’s not that easy because many people in radio (consultants, PDs, GMs, and non-researchers who are the heads of research companies) don’t understand research. They don’t understand what research is, they don’t understand what research can do, and in many cases, they misuse research and go beyond its limitations. These people are charlatans just like the old snake oil salesmen of the past, and they are a detriment to the radio industry. Many of the “research” methods I see that are sold, pushed, advocated, or supported by these people are pure garbage—pure and simple trash pushed upon an unsuspecting public (radio people). Why? Because radio people, who are skilled in other areas, don’t know much (or anything) about research and they take the word of these “experts.” Marconi would turn over in his grave if he knew about some of the garbage being “pushed” by so-called radio researchers. (OK, done with that.)
Radio research is important because it provides managers with information to help them make better decisions. However, research is not a bible. Instead, research only provides indications of things—indications of likes, dislikes, and so on. Why? Because the listeners change constantly and interpreting research as a “bible” of listener behavior is a gross misuse of the data, but it happens all the time. In summary…
Research should be used to help decision makers make better decisions. Research alone does not make decisions.
Let’s get to your questions…
Dayparting Research. You asked about music tests, which is only one of several radio research methodologies. I’ll answer in reference to music tests, but many of the comments also relate to focus groups, perceptual studies, and every other radio research method.
As you may know, the idea for music tests emerged around 1982 from E. Karl, one of the best radio programming consultants in the business (now retired). Since that time, dozens (maybe hundreds) of studies have been conducted to verify the validity (Does it test what it’s supposed to test?) and reliability (Does it consistently test the same thing?) of the method (auditorium and callout).
In the early years of music testing, we asked people three questions about each song they heard: (1) Are you familiar with the song?; (2) If so, rate the song on the scale provided; and (3) Indicate if you’re tired of hearing (burn) the song. It was a simple methodology following Ockham’s Razor. But things changed quickly.
Soon, researchers, PDs, and consultants wanted to know other information. Things like:
1. Does the song fit the radio station (sponsoring the test)?
2. Is the song a [insert format] song?
3. How many times do you want to hear the song each day or week?
4. What mood are you in when you like to listen to the song?
5. And more.
What started as a simple procedure soon became a very complicated one—Ockham’s Razor was no longer sharp. Respondents were no longer simply saying whether they liked or disliked a song; they were now being asked to act as PDs and make programming decisions. Radio listeners are not PDs, have no programming experience, and should not be asked to program a radio station.
But things didn’t stop there. I remember in the late 1980s or early 1990s when a researcher wanted to test music by attaching a Galvanic Skin Response machine to respondents who “rated” the songs by their “reactions” to them, not with a rating scale number. Gag me with a beaker.
Adding all the additional “it would be nice to know” elements to music tests has made the procedure, for those who use such complicated methods, a research joke. Respondents in some cases are asked to provide six or seven pieces of information for each song they hear, and some companies test many hundreds of songs in one session. These additional elements affect the Internal Validity of the test. For example, the amount of time in the testing room affects things like history, maturation, testing, instrumentation, and more.
Why has this happened? Who is to blame? Can’t people see that the results from such multi-variable music tests (asking respondents to answer numerous questions about each song) may not be valid or reliable? I guess not.
But taking research beyond indications is not just a characteristic of radio research. It’s present in every area. For example, have you heard that McDonald’s will soon have a video camera at its drive-thru windows? The purpose of the camera is to identify the type of car that’s in line. For what reason? Get this…McDonald’s has developed a computer program that attempts to predict what customers will order based on the type of vehicle they drive. (Jeep Grand Cherokee? They want a Big Mac.) The justification is that the predictive software will speed service and help the order takers with their work. Give me a break.
I don’t know what type of statistical methodology McDonald’s is using to predict purchases, but my guess is that it’s a multivariate statistic called Discriminant Analysis, the same statistic the IRS uses to determine if a person’s deductions fall out of the range from other similar taxpayers. When the Discriminant Analysis locates “outlier” taxpayers, they usually receive an audit letter.
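A full Discriminant Analysis is more than a quick sketch, but the outlier-flagging idea behind the IRS example can be illustrated with a simple standardized-distance check. All the deduction figures below are invented:

```python
import statistics

# Hypothetical deduction totals for taxpayers in a similar income bracket
deductions = [8200, 9100, 8800, 9500, 8700, 9000, 31000]

mean = statistics.mean(deductions)
sd = statistics.stdev(deductions)

# Flag anyone whose deductions fall far outside the range of their peers
# (here, more than 2 standard deviations from the group mean)
flagged = [d for d in deductions if abs(d - mean) / sd > 2]
print(flagged)  # the taxpayer claiming 31000 stands out from the group
```

The real IRS model is presumably far more elaborate, but the principle is the same: locate the cases that don't look like the rest of the group.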
I’m not criticizing Discriminant Analysis or any other multivariate (multiple dependent variables) statistic. What I’m questioning is the need for such complexity, which relates to music tests.
In addition to changing frequently, most people have a tough time with testing situations, whether it’s an academic setting or a music test setting. With this inherent difficulty, I don’t understand why some people (researchers, consultants, PDs) want to create such a confusing testing situation. We want to know if the respondents like or dislike songs and if they are tired of hearing them. Everything else is wasted on trying to transform the non-radio people into PDs, Assistant PDs, or Music Directors. As they say in Atlanta, “That don’t be right.”
As I mentioned earlier, the purpose of research is to provide decision makers with good information so they can make better decisions. The purpose of music tests is not to ask respondents to pretend to be PDs for a few hours, but rather to tell researchers, PDs, and consultants what they think about the songs they hear. Which brings us to your question about daypart testing. (Can you guess what I’m going to say?)
There are several radio research companies in the United States and there is no way I can know about each of their music test methodologies. However, I can tell you what I know from the studies I conducted several years ago about daypart testing in auditorium music tests.
What I found is that it is virtually impossible to collect good dayparting data in a music test. The main reason for this is that when respondents are asked if they like to hear a song in the morning, for example, most of the respondents say, “It depends.” It depends on their mood, where they are when they’re listening, what they’re doing at the time, who else is with them, and so on. Your comment about living in a 24-hour world is also relevant—what is “morning” to some listeners is “night” to others.
You can ask about dayparts, but don’t expect to get usable results. It won’t happen because a person’s feeling toward hearing most songs is based on a multivariate situation—there are too many things that may affect a person’s answer—and that’s why most say, “It depends.”
Since the results are so confounding, the PD usually makes the decision anyway. Which brings up another point—what function does a music PD serve? In my opinion, the job of a PD is not to “live” by the numbers in the tables of a research report. The job of a PD is to read the numbers and consider the alternatives to whatever is being investigated. My experience shows that respondents don’t know when they want to hear a song. Sometimes it’s OK in the morning and sometimes it isn’t. So why ask them? The decision about when to play songs belongs to the PD…to understand what the listeners say via research studies and make decisions based on this information. (Find out what they want and give it to them.)
A PD’s job isn’t to function as a brainless robot following the ever-changing whims of the listeners. That’s goofy. Good PDs read and hear what listeners say and make the best decision for the radio station. Listeners are not PDs and they shouldn’t be treated that way. By the way, did you know that about 5% of all adults in America believe that the radio station buttons in the vehicles they drive are set by the manufacturer and can’t be changed? And we want to ask these people if songs fit a radio station, if the songs are really Country (or other format) songs, or when a song should be played during the day?
No. Ask them if they are familiar with a song, rate it, and if they’re tired of it…and maybe one other item (I’ll give in to one other item, but that’s it).
So the answer to your question about daypart testing is that the question produces confusing data. PDs can’t do much with song results where 75% of the respondents say, “It depends” about whether the song should be played in the morning or another daypart. In other words, follow Ockham’s Razor and keep things simple. By the way, the “KISS Principle” (Keep It Simple, Stupid) is based on Willy’s principle. I find it humorous when some speakers (motivational speakers, etc.) claim authorship of the idea. Give me another break.
The decision to daypart songs does not belong to listeners, the decision belongs to the PD. An analogy may help make this clear. In reality, a music PD is very much like an orchestra conductor. Yes, I’m serious. Consider this…an orchestra conductor starts with only printed information on a piece of paper (the sheet music). With that information, he/she determines how fast the music should be played, how loud it should be played, and so on. I don’t know for sure, but my guess is that conductors don’t ask the audience how the musicians should play the music—the conductor makes that decision based on experience, talent, and an understanding of the music (a “picture” in his/her head about how the music should sound).
A PD also starts with only printed information (research data in most cases), and this information allows the PD to program (conduct) the radio station based on experience, talent, and a “picture” in his/her head about how the radio station should sound. Case closed.
OK, so that’s the end of question #1. On to your second question about the relevance of radio station names and slogans.
Radio Names and Slogans. I have conducted many studies on this topic, maybe several hundred, I can’t remember. What I have found is that the name or slogan of a radio station doesn’t matter much to listeners as long as it relates in some way to their perception of the radio station. In other words, something like “Eagle 610” probably wouldn’t fit a News/Talk radio station. I say “probably” because I am terrible at predicting what people think and believe. That’s why I always say, “Ask the listeners.”
In your question, you ask if people will listen more to “Q92” than to “The Buzzard,” or vice versa, and you suggest that if a radio station plays the right songs, it can call itself “The Vomit” or “Dirty Diaper 98.5.” Now I know those are exaggerations, but I understand your point.
From all the research I have done, I can say that I agree with your basic premise that a radio station can call itself almost anything as long as the product is what the listeners want to hear. However, the best thing to do is ask the listeners about which name or slogan is best. It’s very easy to do and the results are very meaningful and useful.
What you do is provide a basic explanation of the radio station to potential or current listeners. You then say something like, “I’d like to read a list of names or slogans this radio station might use to describe itself. Please rate each name using a scale of 1 to 10, where the higher the number, the more you like the name or slogan for the radio station.”
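Tallying a name/slogan test like this is straightforward: average each name's ratings and rank them. A minimal sketch with invented names and ratings:

```python
# Hypothetical 1-10 ratings for candidate station names from five respondents
ratings = {
    "Q92": [8, 7, 9, 6, 8],
    "The Buzzard": [5, 6, 4, 7, 5],
    "98.5 The River": [9, 8, 9, 7, 8],
}

# Average each name's score and rank the names from best to worst
averages = {name: sum(r) / len(r) for name, r in ratings.items()}
ranked = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

for name, avg in ranked:
    print(f"{name}: {avg:.1f}")
```

With a real sample you would also want to look at the spread of ratings, not just the mean—a name that polarizes respondents can average the same as one everybody mildly likes.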
Unlike answering dayparting questions where there are so many variables, respondents can easily understand this approach and they have no problems rating the names or slogans. Where do you get the list of names and slogans to test? One way is from focus groups where respondents describe the music and radio stations they like, and other similar topics. It’s easy to find potential names and slogans from their comments.
A second way is to develop a list on your own, but I don’t like this approach because radio people use terms that “average listeners” never use. For example, a radio person might want to test something like “CHR 98.5,” but listeners don’t use the term CHR, they use “Top 40.” (That’s only an example.)
So, can a radio station use virtually any name or slogan as long as the programming is on target? From what I have seen, I would say “no.” The name or slogan should relate in some way (it doesn’t matter which way it relates as long as the listeners think it’s OK) to the radio station’s programming or the listeners’ image/perception of the radio station.
Addendum. I have no problem with behavioral researchers trying to find out things about people—why they do things, why they think a certain way, what they like and don’t like, and so on. There is nothing wrong with that. What’s wrong relates to how these data are interpreted.
At best, behavioral research provides only indications about something, not facts that can be generalized to the entire group that is studied—such as radio listeners, residents in a Zip Code or county, or Arbitron diary keepers. The problem emerges when people take these general indications and incorrectly assume that all people in the group possess certain qualities or think or behave in the same way. For example, some researchers investigate “life groups” of people, such as a “Country Music Life Group,” and then make statements such as, “All Country music listeners… (fill in whatever quality or characteristic you want)” Hogwash. Pure and simple garbage.
This is a misuse of the information. There is nothing that ALL Country music (or any other format) listeners like or dislike, believe in, or anything else. Sure, there may be some things that some of these people share, but whenever you see “all,” “everybody,” or another term that indicates universal similarity, you must become suspicious of the person touting the message, because people are different.
This is why dayparting research isn’t a good idea: it attempts to force listeners into compartments that don’t exist. There are simply too many different types of people who listen to radio at too many different times of the day for too many different reasons to suggest that research can find common ground. Research is a great tool for decision makers, but the questions investigated must be testable.
I hope I answered your questions. Off the soapbox and time for a ride on the Bourget.
Days of the Week - Why 7?
Why are there seven days in a week? Why not 8 or something else? - JL
JL: As I say many times in this column…so that I don’t have to reinvent the wheel, I’ll refer you to a website that will answer your question. Just click here: Days of the Week.
Dead People - Tiny Tim
Doc: Do you know if the singer, Tiny Tim, is still alive? - Anonymous
Anon: I wasn’t sure about the “Tiptoe Through the Tulips” dude, but there is a great website that has information about alive and dead people (“I see dead people.”) I now know your answer, but you’ll have to Click Here.
While you’re at the site, you can check on many different celebrities and famous people.
All Content © 2015 - Wimmer Research All Rights Reserved