Do you know of a website or sites where I can get information about unique vacation packages? I've read about things like NASCAR training camp, baseball fantasy camp, flying planes in dogfights, spy vacations, etc. But I can't find that information now. Any ideas? - Anonymous
Anon: Go to the Internet and search for things like:
nascar training camp vacation
baseball fantasy camp
I think you'll find all the suggestions you need.
Valid and Reliable - 1
What do the terms "valid" and "reliable" mean? - Linda
Linda: These terms cause a lot of confusion for many people, but they are actually very simple concepts. In reference to research, the term valid means "does the testing or measurement instrument actually test or measure what it is supposed to test or measure?" For example, in music tests, research companies ask people to listen to hooks and rate them on a scale of some sort. Do these scales actually test/measure how the respondents feel about the songs? If so, then the scales are valid.
The term reliable means "does the testing or measurement instrument produce the same results when repeated?" Let's use the music test example again. A reliable music testing or measurement scale means that it continually provides the same results (listeners' perceptions of music).
Generally speaking, something that is valid is usually reliable, but something that is reliable isn't necessarily valid. What? OK, here's an example. Let's say that you have a thermometer that you dropped one time, and the mercury got messed up so that it always shows the temperature as 5 degrees colder than it actually is. It is reliable (consistent), but it isn't valid. Or, say that you have a clock that you set 10 minutes fast. It's reliably invalid. Get it?
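The broken-thermometer idea can be sketched in a few lines of Python (a toy simulation with made-up numbers, not data from any study): a hypothetical instrument with a constant 5-degree bias produces tightly clustered readings (reliable) that are all wrong (invalid).

```python
import random

random.seed(1)  # fixed seed so the toy example is repeatable

TRUE_TEMP = 70.0  # the actual temperature (degrees)

def broken_thermometer():
    # Hypothetical instrument: tiny random noise, plus a constant -5 degree bias.
    return TRUE_TEMP - 5.0 + random.uniform(-0.2, 0.2)

readings = [broken_thermometer() for _ in range(10)]
spread = max(readings) - min(readings)
avg = sum(readings) / len(readings)

print(f"spread of readings: {spread:.2f}")  # small spread: consistent, i.e., reliable
print(f"average reading:    {avg:.1f}")     # near 65, not 70: consistently wrong, i.e., not valid
```

The spread tells you about reliability; only comparing the average to the true temperature tells you about validity.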
Now consider Arbitron ratings for a moment. Many people complain that Arbitron procedures (and numbers) are wrong, particularly when their station goes in the tank. I can't say that I know every single element of Arbitron's methodology, but for the most part, they follow the rules of scientific research. What I usually tell people who complain about Arbitron is: "You may think they are wrong, but at least you have to admit that they are consistently wrong." (That is, the data are reliably invalid.) By the way, I don't agree with that complaint.
Validity - 2
I have a problem accepting most research dealing with music and radio. First, sample size is usually microscopic vs. population. The amount of "data" extrapolated is enormous and I don't see how one can reduce something as subjective as music and radio choices to objective number crunching. So I suppose I'm asking you to justify your entire profession in layman's terms. Thanks for your insight. - Tim
Tim: Get a 6-pack of your favorite drink. You'll need it for this answer.
You made me smile because you reminded me of two things: (1) the countless number of times I have heard these same questions from the thousands of students I have had in my classes, along with the thousands of people I have encountered at conventions, presentations, and seminars; and (2) the response by the Australian sheriff when he was asked what he thought about Neil Armstrong taking the first step on the moon. He said something like, "I don't believe it. I have been looking up at the moon for a long time and I can't see anyone up there."
I'll address your statements in order . . .
Comment 1: The fact that you have a problem accepting most research essentially tells me that your knowledge of research is limited. There is nothing wrong with that. However, your approach could also be used by someone who might say, "I have a problem accepting that one person can program a radio station." In both cases, the person with the "problem" needs more information.
Comment 2: The first problem you mention refers to sample sizes: that they are microscopic. My problem here is that I don't know what you mean by "microscopic." I will grant you that there are instances when research uses a sample that is too small. This often happens when someone who doesn't understand research is in charge of the project. In these cases, I would agree with you that the research probably isn't very good because sampling error essentially rules out any logical interpretation. However, most of the research I know about (mine and studies conducted by other researchers) uses good sample sizes with acceptable amounts of sampling error. Therefore, I disagree with your comment that sample sizes are microscopic.
Now, if your complaint is merely that a sample is used instead of a census, then the only thing I can say is that we live in a world of small sample statistics and you're going to have to get used to it. Small sample statistics have been shown to be valid and reliable in all types of physical and behavioral research for more than 75 years.
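For readers who want to see what "acceptable sampling error" looks like numerically, here is a minimal sketch (my illustrative numbers, not from any cited study) of the standard 95% margin-of-error formula for a sample proportion. It shows why a sample of a few hundred respondents is far from microscopic:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% confidence margin of error for a sample proportion,
    # using the worst case p = 0.5 and the z-value for 95% confidence.
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} percentage points")
```

Quadrupling the sample from 100 to 400 only halves the error, which is one reason professional studies settle on sample sizes in the hundreds rather than chasing a census.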
Does that mean that all research studies are conducted properly? No, nothing is perfect. But your problem of not being able to accept research solely because samples are used is not a valid criticism in my book. There is too much evidence in both physical and behavioral research to demonstrate that small sample statistical procedures are both valid and reliable. Sorry.
Comment 3: You say that, "The amount of 'data' extrapolated is enormous and I don't see how one can reduce something as subjective as music and radio choices to objective number crunching." You actually answer part of your question yourself. I'll get to that in a minute.
It is true that the amount of data "extrapolated" in a typical research study is enormous. That's one of the goals of research. I don't think it would be wise to spend $30,000 on a project that answers only one question. However, that's not your main point as I read your question. Your main point relates to your "subjective" and "objective" comments.
First, let me say this: Anything can be studied (researched, investigated) if the "anything" can be operationally defined. If I ask people to tell me if they like a song, how would they answer? Would they say, "yes," "I like it a lot," or "It's cool"? Those responses mean nothing because I have no metric assigned to the words. Now, if I ask the same people to rate a song on a scale of 1 to 7, where "1" means "I hate it," "7" means "I love it," and 2 through 6 are in between, I now have an operationally defined metric system for music rating, and I can analyze that. In other words, anything can be quantified if there is a valid and reliable operational definition for the item under investigation. (I can measure how many angels can stand on the head of a pin if you can provide an operational definition of an angel.)
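The 1-to-7 operational definition above is what makes the arithmetic possible. A minimal sketch, using made-up ratings for one song:

```python
# Hypothetical ratings from ten respondents on a 1-7 "hate it ... love it"
# music test scale for a single song. All numbers are invented.
ratings = [7, 6, 5, 7, 4, 6, 7, 3, 6, 5]

mean = sum(ratings) / len(ratings)
favorable = sum(1 for r in ratings if r >= 5) / len(ratings)

print(f"mean score: {mean:.1f}")
print(f"share rating 5 or higher: {favorable:.0%}")
```

Each respondent's opinion stays subjective; the operational definition simply turns those opinions into numbers that can be summarized and compared across songs.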
You are correct in saying that the music scores or perceptions of radio stations are subjective. That is the goal of research: to collect individual opinions (subjective answers) about music, radio stations, or anything else. That's why we don't ask people to rate songs or radio stations on the basis of what they think their brother, sister, husband, wife, or friend might say. We want to collect subjective opinions, and if the item we are measuring has a valid and reliable operational definition, then we have accomplished a goal of research: we have quantified the respondents' subjective opinions.
As to your comment about "objective number crunching": you answered your own question. I would hope that all researchers objectively crunch the numbers they collect. If they didn't, then we're in a lot of trouble. All professional researchers analyze data from an objective perspective; anything else would be ethically reprehensible.
Comment 4: You ask me to justify my own profession. Quite frankly, I don't need to, since I already know that good research is a valuable tool in fact-finding, decision making, and descriptive analysis. However, that probably won't satisfy you. The justification for the research profession (in any field) is abundant. In the radio business, the preeminent justification comes from radio listeners themselves. Consider this . . .
My philosophy in operating any business is very simple: Find out what the listeners/customers want, give it to them, and then tell them that you gave it to them. In all correctly designed radio research studies, listeners are asked what they want to hear (usually on their favorite radio station). PDs and others at the radio station take this information and creatively give it to the listeners. The listeners are then told via internal and external advertising and promotion that they have what they asked for.
The justification? That comes from the listeners when they are asked in another research study what they like about their favorite radio station. Do you know what they say? They name the things that the PD gave them that came from the list of things they said they wanted in the first study. There is your justification and I have seen it repeated thousands of times in all types of businesses.
Your comments and concerns about research are legitimate and very common. However, there is an overwhelming amount of evidence from both the physical and behavioral research fields to document that research is valid and reliable. Take a few research classes. Read a few research books. Review the results of several research studies.
Listen to me now and believe me later because I have nothing to gain by lying to you: The radio research conducted by professional researchers provides data that are both valid and reliable. This information is the foundation for you to become even more successful at your job. The professionals aren't trying to pull the wool over your eyes. They are providing you with a summary of what your listeners (your real bosses) say about your radio station.
If I'm wrong, I'll buy you dinner at the next radio convention. If I'm right, then you buy me an iced tea. (Save 75 cents, because you'll need it.)
Validity - 3
Would you explain what "validity" means in reference to research? - Trig
Trig: I believe I have addressed this question before, but maybe I wasn't clear enough. I'll try again.
While there are some unique characteristics about the word "validity" in research, the word essentially means the same thing regardless of what you're talking about. Consider it this way: instead of using the words "valid" or "validity," you could just use "real." That's what valid means in any situation. Is my contract real (valid)? I just stepped on the scale in your bathroom; are those numbers real? Or how about, "Are you for real?" I'm sure you get the idea.
In research, validity means, "Does the test (or measurement instrument) actually (really) test or measure what it is supposed to test or measure?" For example, let's say that you conduct a music test with a 7-point scale. Someone could ask you if the scale is valid; that is, does it really measure respondents' perceptions of songs? (It does.)
Valid is different from reliable. Reliability refers to whether a test or measurement consistently measures the same thing. However, keep in mind that a test or measurement can be reliable, but invalid. It's possible to consistently (reliably) get the "wrong" answers/results.
OK, back to validity in research. There are two types of research validity: internal validity and external validity. Just as the term says, internal validity refers to the specific details of the test or research you conduct. Because of limited space, I can't get into a detailed discussion about all of the specific items that affect internal validity. Joe Dominick and I discuss these in our book, or you can read another research book to get more details. However, the elements that affect internal validity all relate to the same question: Are the study procedures/methods REAL?
Think of internal validity this way: You're late for work. Your boss (if you have one) says, "Where have you been?" You say, "I . . . uh . . . hmm . . . had to stop for a flock of geese crossing the highway." What do you think your boss means if he/she then says, "Oh, reeeallly?"
The second type of validity, external validity, refers to the generalizability of the results from your study. The usual goal in research is to test a sample of respondents selected from the population and then generalize the results to (apply them to) the population from which the sample was selected. You may conduct a music test with 75 respondents and then generalize these results to all of your listeners. If your music test screener is goofy (that's a scientific term) and the wrong respondents rate your songs, your results will not be externally valid and cannot be generalized to your listeners. Another way to understand external validity is this: Is your test or measurement real beyond the sample you use?
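The goofy-screener warning can be illustrated with a toy simulation (every number here is invented): a properly screened random sample of 75 tracks the population average, while 75 respondents recruited from the wrong subgroup do not.

```python
import random

random.seed(7)  # fixed seed so the toy example is repeatable

# Hypothetical population of 1,000 listener scores (1-7) for one song:
# 900 listeners who like it, 100 who don't.
population = ([random.choice([5, 6, 7]) for _ in range(900)]
              + [random.choice([1, 2, 3]) for _ in range(100)])
pop_mean = sum(population) / len(population)

# Correct screener: 75 respondents drawn at random from the whole population.
good_mean = sum(random.sample(population, 75)) / 75

# "Goofy" screener: 75 respondents drawn only from the disliking subgroup.
bad_mean = sum(random.sample(population[900:], 75)) / 75

print(f"population mean:    {pop_mean:.2f}")
print(f"good-screener mean: {good_mean:.2f}  (generalizes to listeners)")
print(f"bad-screener mean:  {bad_mean:.2f}  (does not)")
```

The bad sample is perfectly consistent; it is simply measuring the wrong people, so its results cannot be generalized beyond the subgroup it came from.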
Remember the "goose excuse"? If it worked with your co-workers, but failed with your boss, the excuse wasn't externally valid. Don't ask me where that came from because I don't know.
Validity - 4
1. I am not convinced by your extensive defense for Tim. You are neglecting creativity that hasn't been invented yet. The audience can only talk about the known or an extension thereof--they are not the creators of music, drama, etc. Before 'Oklahoma,' the public didn't know to ask for this type of musical: 'No girls, no legs, no chance,' your equivalent of that day said. I can only imagine how the concept of writing a song about a state (Oklahoma) would have tested. I'm equally sure no one was clamoring for 'Macbeth' before it was produced, no matter how large the sample was.
2. I don't expect you in a million years to agree to this. I think you're terrific in your work, but this would be too much to expect.
3. I know of too many programs that never were aired, which were far superior to anything of their type that was aired, to believe that any amount of testing could replace a skilled individual, like, say, a Hal Prince. What do you think a focus group would have said about presenting 'Cabaret' to the American public? Although I'm sure you do honest research, I have personally been exposed to hundreds of focus groups that were manipulated in one way or the other, or otherwise certain viewpoints hidden from the powers that be.
4. Although your methods may seem to produce many benefits, certainly a crutch for people who are only in the entertainment business for the financial attractiveness of it and who otherwise don't really possess the non-financial skills required, but bottom line your practice stifles truly creative people at many levels. And allows for much mediocrity and sameness.
5. This kind of thing doesn't change until conditions get really bad--like no one attends. But I know--and, more importantly, many people much more intelligent than me know--that the king really doesn't have any clothes. Look, I know you won't agree, and I know I'm not going to do away with research methods of programming, but I also know that I could not have stopped the medical profession from bleeding people when that was all the rage. - Michael
Michael: I numbered your paragraphs to make it easier to answer your points.
1. I'm not offended that you disagree with me (I have two sons, so I'm always wrong). But I don't believe I neglect creativity, especially creativity that hasn't been invented yet. I do agree that you can't literally test something that isn't created yet, but there are ways to test new ideas. The test procedure has to be correct.
For example, I agree that you probably can't test a radio format that isn't invented yet, but you can test a prototype of the format. You can't ask people to rate a verbal description of "1960s Chicago Oldies," but I do know that you can test a one- or two-hour prototype program that plays the music.
I agree with you that people are not the creators of things such as Oklahoma and Macbeth and that they would not be able to "test" the idea of a song about the state of Oklahoma or a play about a Scottish king. But no legitimate researcher would test the "idea" before it exists; the researcher would test samples of the music, or a prototype of the musical.
You also say that, "I'm equally sure no one was clamoring for 'Macbeth' before it was produced, no matter how large the sample was." I can't answer that since I wasn't around during Shakespeare's time, but I do understand your point. It is the same as saying that there probably weren't a lot of people clamoring for a movie about a weird little alien named ET. And I'm fairly sure (from an interview I saw of him) that Steven Spielberg didn't conduct a research project to find out if the ET story and character were good ideas. The idea was there, the movie was made, and everyone involved hoped for the best. (Although Spielberg did mention that he tested different versions of the ET character.)
Testing prototypes of "non-existent" material is a valid research approach. You are able to collect valid and reliable indications of the idea's potential. However, I have never advocated that behavioral research is always 100% accurate. It can't be because it involves human beings who constantly change. A research study of a prototype may find that the idea sucks. However, the person (or people) in charge may decide to go ahead with the idea anyway and may find that the new idea is a smash hit.
Does that say that all research is bad? No. Does it say that everything has to be researched? No. It says that research should be used as a guide, not a bible. Research should never be used as the rule; research provides indications of what may or may not exist within a margin of error, which is the reason why professional researchers always refer to the error involved in any study. If research were perfect (along with the people who develop and use research), every product and service we have would be perfect. It doesn't work that way.
Research that is correctly designed and used never stifles creativity. In fact, if used correctly, research should add to the creative process. For example, every time my co-author and I write a new edition of our book, we ask for comments from the people who use it. We take these comments (research) and use them in our rewrite. The process doesn't stifle our creativity in reference to how we present the material. Instead, it provides us with information on how to make the book better; it is still our choice on how to present the material.
Similarly, if you asked me to test your idea for a new radio format called "Marcel Marceau's Greatest Hits," I would turn the project down unless you had a prototype. Listeners couldn't rate the idea, but they could rate a prototype. Do you see the difference? We wouldn't test your non-existent idea. We would test a prototype. Then again, you might just decide to put the format on the air without any research whatsoever. That's fine. However, Arbitron would eventually "test" you. Your ratings will show if the people agreed with your new idea.
2. You don't expect me to agree with you? I believe I do. It's a matter of semantics. Thanks for the comment about my work.
3. I can't answer your comment that "I know of too many programs that never were aired, which were far superior to anything of their type that was aired . . ." because I don't know the programs that you are referring to. In addition, your evaluation that the programs were "superior" is your opinion. I would rather see data from a larger sample to know if the programs were superior. You may be right; I don't know. I can only compare your response to mine (another sample of one) . . . I see and hear many shows on TV and radio that I consider to be absolutely awful, but "they" (the audience) like them. That's why I never have opinions about what is good or bad. I always say, "Ask them (the audience or consumers)." I learned long ago that I am a terrible predictor of the mass audience when I use my own scale of "good" and "bad."
I can't answer what a focus group would have said about presenting 'Cabaret' to the American public. I never saw one on the topic and I would never attempt to predict. However, I would assume that the respondents in the focus group would have seen the play before they discussed it in the group. It would be bad research to simply ask the respondents' opinions about a musical that involves a female nightclub entertainer in 1930s Berlin.
Finally, you say that you have " . . . personally been exposed to hundreds of focus groups that were manipulated in one way or the other, or otherwise certain viewpoints hidden from the powers that be." The only thing I can say here is that you saw a bunch of crummy focus groups led by a moderator who didn't know what he/she was doing, or a moderator who was manipulated by the client. The focus group methodology, if conducted correctly, is a great research tool. Get another moderator.
4. You say that research "stifles truly creative people and allows for much mediocrity and sameness." I agree that bad research and bad researchers do these things, but not good research and good researchers. I have been involved in too many situations where research has helped the creative process. I will not change my opinion here.
5. I'm not sure what you mean by "I won't agree." I agree that you have different viewpoints and experiences.
However, I do disagree with the analogy of research to bloodletting (now often called "phlebotomy"). One of the tenets of scientific research is that it is self-correcting. A real science or real scientific researcher is always willing to accept a new idea, method, procedure, or practice if it is proved that an old one is incorrect. That's why bloodletting, for the most part, disappeared in the 19th century. (The process is still used in some areas, including bloodletting with leeches.)
The tenet of self-correction also exists in (legitimate) behavioral research. That's why so many radio research methodologies have changed over the years. I can't think of one research methodology that hasn't changed since I started conducting research in the early 1970s.
The programming research conducted for you should help you make decisions. This process eliminates your need to make gut decisions about everything. If the research you are using isnít conducted in this manner, then you have a problem. If you arenít using the information as a guide, then you have a second problem.
Let me provide another example using music tests that (hopefully) includes the things you addressed. (I'm assuming the methodology is correct.) The results of the music test show you how listeners rate the songs you play for them. That's all the test does, although you might include correlations of some kind to see which songs test similarly. The test does not tell you anything about the song rotations or how the songs should be mixed together. That stuff can be tested only after you decide how the songs should be rotated and mixed (using your knowledge, creativity, and gut feelings). You put your ideas on the air and see what happens. Your listeners will tell you if you made the right decisions.
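The "correlations of some kind" mentioned above can be computed with nothing more than the standard Pearson formula. A minimal sketch, using invented 1-7 ratings from the same ten respondents for two songs:

```python
import math

# Hypothetical 1-7 ratings from the same ten respondents for two songs.
song_a = [7, 6, 5, 7, 4, 6, 7, 3, 6, 5]
song_b = [6, 6, 4, 7, 3, 5, 7, 2, 5, 4]

def pearson(x, y):
    # Pearson correlation coefficient: covariance / (sd_x * sd_y).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(song_a, song_b)
print(f"r = {r:.2f}")  # close to +1 here: the two songs test similarly
```

A high correlation says only that listeners who like one song tend to like the other; deciding what to do with that in a rotation is still the programmer's call.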
Finally, many people rightly criticize research. However, what they are criticizing is bad research conducted by bad researchers. If you think that all of the researchers listed in whatever directories you look at are equally qualified, you're wrong. There are several radio researchers who have absolutely no background in research. Yet, radio stations hire these people and then complain about the quality of the product. I don't get it.
Roger D. Wimmer, Ph.D. - All Content ©2018 - Wimmer Research All Rights Reserved