Where can I go to find out average salaries in different markets, and for different job titles? I am green in this biz and am getting calls from different markets. The first thing they want to know is how much money I’m looking for. Also, is there an estimate for wage increase when you go from on-air to on-air/APD? The times I have named a price, they act like it’s way lower than they were expecting. I don’t want to work for a place when I know I could’ve made more money there. PS: This is the first time I’ve read you, I really like it and will continue to come back. Keep up the great work. - Gotta Know
Gotta: Didn’t you ever see the Clint Eastwood movie “Dirty Harry”? I believe the way to state that is, “I gots ta know.” OK, never mind. Welcome to the column and I’m glad you enjoy it. I will try to keep up the work.
The salary question comes up a lot and I usually answer something like, “R & R has a salary survey. Check out their website.” Well, that’s a crummy answer, so I decided to finally get the information. As far as I know, the R & R survey is the most comprehensive survey available.
I contacted Ron Rodrigues, the Editor-In-Chief at R & R and he said the annual salary survey is available for $10.00 by snail mail or $15.00 by fax. To order, call Hurricane at 310-788-1662. The salary survey is printed in R & R once each year in the issue that goes to the NAB Radio Show. Ron said this year’s issue will be dated September 13.
The wage increase for a change from on-air to on-air/APD? That varies greatly by station and market and type of on-air position. My guess based on what I have seen in the past is a range of 20-40%. But don’t hold me to that since there are so many differences.
Salaries - 2
Hi, I think you know everything (or pretend to), so I will ask you a question I can't seem to get an answer to. I recently received my MA in Communications. I have experience in public relations, marketing, and advertising. I also have website designing experience, and can speak intermediate Spanish. I want to pursue a career probably in public relations, advertising, and marketing. How much can I expect to make a year with my degree, and my experiences? Thanks. - Julieanne
Julieanne: Oh, I don’t know everything. I learned a long time ago that the amount of stuff I do know is far less than the amount of stuff I don’t know. However, one key to knowing things is the ability to know where to find answers.
In your case, I called a few friends in the industry. They said what I expected…there is a broad range of starting salaries for the positions you are referring to. There are differences based on company size, location, and responsibilities. However, the range you can expect is somewhere between about $25,000 and $50,000.
Here are a few websites for more information:
Sales or Listeners
What is better—more sales or more listeners? Our CHR radio station (in a medium market) sells more than any other radio station, but we don't have good ratings now because we have too many spots and too many stop sets. Can you please tell me the formula to increase ratings without reducing our spot load? - Anonymous
Anon: As you can see, I edited your question significantly. However, I'm not sure if I interpreted your statements correctly, so please let me know if what I wrote is what you asked.
I'm sure there are many radio operators who would love to have your radio station's revenue problem. What I don't understand is how you know that your "low" ratings are due to too many spots and too many stop sets. Is this from a research study?
The formula for increasing ratings is this: find out what people want, give it to them, and tell them that you gave it to them. If the listeners say you have too many spots and do not listen to your radio station often because of that, then you're going to have to make a decision—give them what they want or give them what you think they need.
I should add here that listener complaints about heavy spot loads are becoming very common. Radio operators need to consider the consequences of continually adding spots. At some point, listeners will turn away—in fact, research shows that many have already done just that.
Sales - New to Radio
I am 26 years old with 13 years of sales experience. I am brand new to radio. I was hired by [a radio station] last Monday as an Account Executive. My question is what common mistakes do new radio salespeople make and how can I avoid them? I am eager to learn and looking forward to my success. Any help would be appreciated. Thanks -Tim
Tim: I’m sure you noticed that I edited your question to delete the radio station. I don’t think that’s relevant to your question.
Oh, I’m sure there are many mistakes radio sales people can make, but when I sold radio, two mistakes I saw were:
1. No understanding of what advertising is and how it works. Advertising does not sell a product or service. Advertising is only communication.
2. No understanding of the 5 Stages of Communication/Persuasion. You can’t sell without this knowledge, and you can’t convince an advertiser to buy radio unless he/she understands the process.
Same Things We Do Every Day
I noticed that I do a lot of the same things every day. I looked around the Internet and found that most people do many things every day and some are odd, or at least out of the ordinary. So I thought of a question: What do you think is the goofiest thing that many people do every day that is a bit odd? - Anonymous
Anon: First, your question is very broad. Second, I'm not sure exactly what you mean by "goofiest." But I'll take a shot…
I'm not sure if this qualifies as "goofy," but after asking a small sample of people what kinds of things they do every day, I came up with one odd thing. Everyone in the sample takes a shower or bath every day, and these people tend to say that they dry off with a towel in exactly the same way every day—their drying pattern never varies. And, no, I don't know why.
Is that goofy enough?
Sample Size and Sampling Error
As a general rule of thumb, is it true that the larger your sample is (critical mass), the smaller the potential margin of error is?
I assume that might not hold true for a station that does Internet research, if the sample was not screened properly. Basically, I just arrived at a station that's done a ton of those “ratethemusic.com” surveys. - Anonymous
Anon: Yes, you are correct in saying “as a general rule.” If you’d like to see the estimated error associated with various sample sizes, go to my business website (Wimmer Research) and click on the “Sampling Error Calculators” at the bottom of the box.
OK, with that said, I need to say this…Sampling error relates to good samples. In other words, you can have a bad sample of 10 or a bad sample of 1,000,000. Just because a sample size is large does not make it correct, nor can you automatically assume that the calculated estimated error is correct. If you have the wrong people in your sample (of 10 or 1,000,000), then your sampling error could be 100%.
The large-sample approach (get the sample size as high as possible without regard to quality) is known as the “Law of Large Numbers.” People who push, sell, or use large samples without regard to quality invoke this “law” to “prove” the data are correct. (“We have a sample of 10,000. The data must be good!” Hogwash.)
Just because a sample is large does not mean you can assume that the sampling error is low. It could be just the opposite.
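For the curious, the estimated sampling error for a well-drawn simple random sample can be sketched with the standard 95%-confidence formula (this is an illustration assuming maximum variance, p = 0.5; it is not necessarily the exact method any particular online calculator uses):

```python
import math

def sampling_error(n, p=0.5, z=1.96):
    """Estimated sampling error (margin of error) at the 95% confidence
    level for a simple random sample of size n and proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 100, 400, 1000):
    print(f"n = {n:>5}: ±{sampling_error(n) * 100:.1f}%")
# n =    10: ±31.0%
# n =   100: ±9.8%
# n =   400: ±4.9%
# n =  1000: ±3.1%
```

Note that the formula assumes a good sample to begin with; as stated above, no calculation can rescue a sample of the wrong people.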
Case closed. This is not debatable.
Sample Size — Does Sample Size Really Matter?
On a forum that I subscribe to, that ever-interesting issue of radio research reared her beautiful head again. This time someone wrote, "Sample SIZE means nearly nothing, ditto, zilch, etc." The person was trying to make a point about the importance of filtering one's research. I thought it was ironic that he linked to your column as a research reference. Having sat under several of your seminars and read numerous columns of yours over the years, for some reason this "truth" has eluded my perusal. Without getting too deep, would you mind addressing the importance of sample size in any legitimate scientific inquiry? Respectfully yours, - Communicaster
Communicaster: I wish I knew which information the person was referring to because I can't imagine that I ever said anything remotely close to that. However, without his specific reference, I'll say this . . . There are two important items in reference to sampling in any scientific research—Sample size and sample quality.
The sample size used in any scientific research project depends on a few things, including, but not limited to:
1. Sampling Error. Sampling error is the degree to which measurements obtained from a sample differ from the measurements that would be obtained from the population (Wimmer & Dominick, 2006). In scientific research, researchers select a sample size that has a sampling error that they (or the users) are willing to accept. In media research, a sampling error of about ±5% is usually acceptable (associated with a sample of 400). This means, for example, that if the results of a study indicate that 50% of the respondents like a radio station, the "real" number is somewhere between 45% and 55%. That's OK in media studies, where no "life and death" decisions are made. But a 5% sampling error would never be used in areas such as medical research. Medical researchers would not be satisfied with saying that there is about a 5% probability that their results (such as a cure for a disease) are due to chance. Medical researchers use error levels of ±.01 (1 out of 100 that the results are due to chance), ±.001 (1 out of 1,000), or even ±.0001 (1 out of 10,000).
But . . . in most media research situations, the researchers and users of the information do not use the total sample to make decisions. In most situations, only a few cells are used, such as Females 18-34 or Males 25-44. In these cases, the sample size is much lower, maybe 50 respondents or so, but that still isn't a major problem in media research since sampling error at this sample size is about ±14%. In addition, if good screeners are used (which reduces the variance, or differences, among the respondents), sampling error is actually less than the computed ±14%. (That's one advantage of using screener or filter questions.)
If you want to see the amount of sampling error associated with different sample sizes, go to my business website and click on the "95% Level" in the box labeled, Sampling. You'll see immediately how sample size affects sampling error.
2. Time. The number of respondents to include in a research project often depends on the amount of time available to collect the information. If a quick decision must be made, a researcher may use a small sample (such as 100) to gather preliminary indications so the decision-makers have at least something to use to help them with their decisions. There is nothing wrong with using a small sample as long as the people using the data understand that sampling error is higher and it must be taken into consideration when decisions are made.
3. Types of Decisions. If the purpose of a research study is only to collect preliminary indications of what may or may not exist, then a small sample is usually adequate. However, if the users of the information want to have reliable information to use to make major decisions of some sort, then it's absolutely necessary to use a sample size with an acceptable amount of sampling error—probably no less than 400 respondents.
4. Cost. In some situations, small samples are used because of budget constraints. In these cases, it is vitally important to stress the amount of sampling error associated with the sample before the data are used to make decisions. The decision makers need to know the range of responses provided by the respondents.
Those are some of the criteria used to select a sample size. As you can see, the size of the sample is exceedingly important because it relates directly to sampling error and how the data may be interpreted—as sample size increases, sampling error decreases, which means that the reliability of the data increases. If the person on your forum says that sample size means "nearly nothing, ditto, zilch, etc.", the person is actually saying that sampling error in research means "nearly nothing, ditto, zilch, etc." And that, my friend, is ludicrous. Or, as some people say in Atlanta, "That don't be right."
You also mentioned that the person "was trying to make a point about the importance of filtering one's research." Although stated in a rather odd way, this comment refers to the screening used in research—the screener questions used to "screen in (or out)" specific types of people for the research study.
(Side note: Screener questions are the heart of quality research because the questions allow researchers to select only the respondents who are relevant to the research study. In other words, if we conduct a study for a Soft AC radio station, we probably wouldn't want to include teenagers in the study, so they are eliminated from consideration via screener questions.
Because screener questions force a selection of only a certain type of respondent, they reduce the variance (differences or variability) in the sample. That is, the screener questions "weed out" respondents who are irrelevant to a study. For example, if we were conducting a study for a Country music radio station, it wouldn't make sense to include respondents who hate Country music. If we did include these people, the total sample responses would cover a wide array of comments, and the responses probably wouldn't be of much value. However, if we exclude those who dislike Country music, the variance in the sample is reduced and the results will be more meaningful. End of side note.)
With the information I have about what the person said on the forum, it seems as though he is saying that "quality" is more important than "quantity." I would beg to differ. I say that in reference to sampling in scientific research, quality and quantity are equally important. And here's why . . .
Let's say I wanted to conduct a research study for a radio station and use a sample of only 10 people—all who are perfect respondents in every way according to the desired respondents selected via screener questions. However, with a sample size of 10 respondents, the sampling error would be about ±31%. If 50% of respondents (5 of the 10) agreed that your radio station is great, the "real" number is actually somewhere between 19% and 81%—basically no better than guessing.
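The ±31% figure comes from the same standard formula; a quick sketch (assuming a simple random sample and the usual 95% z-value of 1.96):

```python
import math

n, p = 10, 0.5  # 10 respondents; 5 of 10 (50%) said the station is great
moe = 1.96 * math.sqrt(p * (1 - p) / n)  # margin of error, roughly 0.31

low, high = (p - moe) * 100, (p + moe) * 100
print(f"50% ± {moe * 100:.0f}% → somewhere between {low:.0f}% and {high:.0f}%")
# 50% ± 31% → somewhere between 19% and 81%
```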
See what I mean? When it comes to sampling in scientific research, the terms "sample quality" and "sample size" are inseparable—conjoined twins; one cannot exist without the other. Quality is just as important as quantity. They are the Yin and Yang of basic research tenets. You can't have a good sample based only on size, and you can't have a good sample based only on quality. Did I say that in enough ways to get the point across? In case I didn't, here is more . . .
A large sample does not inherently guarantee that the respondents are good. One goal of scientific research is to generalize the results of a study using a sample to the population from which the sample was drawn. In order to do this, and have some confidence that the numbers are valid and reliable, the sample size should be large (defined by the type of study conducted) and contain the correct respondents as defined by the screener questions used.
To repeat . . . A large sample alone does not guarantee valid and reliable results. A sample of 400 people could include 400 oddballs (known as outliers) who don't relate in any way to the population from which they were selected. Some companies tout that their sample of "thousands" is good because it's large (known as the Law of Large Numbers in statistics/research). This is hogwash. It's also hogwash to say that sample size doesn't matter because, as I just explained, you can have 10 "perfect" respondents, but the sampling error is so large that it doesn't make sense to conduct a study. In both of these cases—a large sample with incorrect respondents or a small sample with huge sampling error—the research is a waste of time. It would be better to donate the money for the study to the Red Cross or some other charity.
Finally, you said that the person on your forum cites me for his comment that sample size doesn't matter. I'd like to know what that person is referring to because I never said anything like that and never will. If the comment is somewhere on my Research Doctor Archive, then it's a mistype.
Sample Size for Perceptual Studies
I see that different research companies use different sample sizes for perceptual studies, such as 400, 500, or 600. What is the right sample size to use for a perceptual study? - TJ
TJ: Good question. I’m asked this quite often. Let’s see if I can clear this up, but it will take two discussions. You’ll see what I mean as you read on. However, I first need to say that there is no such thing as the right sample size for a perceptual study. The sample size depends on a variety of considerations.
If you know anything about research, you know that there are many tenets (rules) to follow to ensure that the design, sampling, methodology, data analysis, and interpretation are correct, or as correct as they can be. These tenets are important so that the study and the data are as reliable and valid as they can be. (People who conduct research without adequate training cause a huge number of problems. Many people think that research is easy and that “anyone” can conduct a research project, including music tests. Nothing could be further from the truth, and that’s why there is so much garbage research floating around, and so much garbage research in the trades and on websites.)
Discussion One – The Theoretical Approach
If you learn the methods of scientific research, you’ll find that there are many things to consider to determine the appropriate sample size for any research project. When it comes to perceptual research (telephone studies), four items are usually considered:
1. The desired confidence level, such as 95% or 99%—you are 95% or 99% sure that your results will fall within a certain range, or margin of error (see the next point).
2. The desired confidence interval, or margin of error, such as 4% or 5%.
3. How the data will be analyzed in reference to subsamples, such as Males 18-24 from a total sample of Adults 18-54. Sampling error estimates are usually based on the total sample and don’t apply to subsamples. For example, if you use a sample of 400 and look only at the total responses (all 400 respondents), your sampling error at the 95% confidence level is ±4.9%—a response of 50% in the study actually falls somewhere between 45.1% and 54.9%. However, let’s assume that you look at the data for Males 18-24, and there are 75 of them in the study (of the 400). At the 95% confidence level, a sample of 75 has a sampling error of ±11.3%. That may or may not be acceptable for the study. If it isn’t, then you’ll need more than 400 respondents.
4. Cost. The major concern here is whether the money is spent well (cost vs. value). A tough screener (that is, low incidence) may be too expensive to conduct, and another approach may be required.
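Working backward from a desired margin of error, the same standard formula gives a ballpark sample size (a sketch assuming a simple random sample, maximum variance p = 0.5, and 95% confidence; real studies also weigh the subsample and cost considerations above):

```python
import math

def required_sample_size(margin, p=0.5, z=1.96):
    """Smallest n whose 95%-confidence sampling error is at most `margin`."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

print(required_sample_size(0.05))  # 385
print(required_sample_size(0.04))  # 601
print(required_sample_size(0.03))  # 1068
```

This is one reason samples of roughly 400 to 600 keep showing up in perceptual studies: they correspond to margins of error in the 4% to 5% range.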
Those are the theoretical considerations, usually followed in academic research. However, in reality, these items are not usually considered in private sector research. That creates the need for a second discussion.
Discussion Two – Reality
In most private sector research (including media research), the decision about sample size comes down to two items.
1. Cost. How much money are you (or the client) willing to spend?
2. What the researcher suggests (as long as it doesn’t cost too much).
Actually, these two points relate to the same thing—cost.
What about confidence levels and confidence intervals? They are rarely, if ever, considered in media studies conducted in private sector (non-academic) research.
Don’t believe me? How many radio people have you heard mention confidence levels and confidence intervals when discussing Arbitron ratings or perceptual study results? My guess is none. Is it wrong for these people to dismiss these items? Well, it’s not really wrong, but it may lead to misinterpretations of the data.
For example, most radio people read Arbitron numbers and other research data as real numbers, but they aren’t real. They are estimates that include sampling error. For example, radio Station A may have a 5.2 share and Station B may have a 4.8 share. How are the numbers interpreted? Station A is considered to be Number 1—the “leader,” “the winner,” or “the best.” In reality, however, if sampling error is considered (it should be), Station B may be Number 1, or the stations could be tied. The numbers aren’t real, they are only estimates that include limits (upper and lower).
So what sample size should you use in a perceptual study? All things considered, a sample of 400 should be adequate in most cases (I said most). The maximum estimated sampling error with a sample of 400 is ±4.9%, and that’s not too bad when it comes to interpreting results in a behavioral study like those conducted in radio.
By the way, I have a sample size calculator on my business website. Click here to try it: Sample Size Calculator. When you get there, click on the “Sample Size” option in the “Sampling” box.
Roger D. Wimmer, Ph.D. - All Content ©2018 - Wimmer Research All Rights Reserved