Arbitron Questions

 

Advertising and Ratings

I need your help to convince my GM to give me a budget for advertising and promotion.  Our competitor hammered us in the last Arbitron because they had a huge contest they advertised for weeks on TV.  I want to do the same thing.  What can I tell my GM to convince him that we need to spend some money to promote our station? - Anonymous

 

Anon:  You’ll notice that I edited your question to eliminate radio station call letters.  I don’t think that’s necessary here.

 

First, unless you have a properly conducted scientific study to back up your statements, my guess is that you are only assuming that your competitor’s contest and/or TV ads caused the “hammering” you received in Arbitron.  This may or may not be correct.

 

The Arbitron numbers a radio station receives are a function of many variables, and I wrote about this a while ago.  If you would like to read the article, click here: Arbitron Ratings.  If you read it, you'll see a list of some of the variables that may "cause" a radio station's Arbitron numbers.  In order to say that your competitor's contest and TV ads caused the increase in Arbitron, you would have to eliminate the influence of all other variables that may have had an effect.

 

Your hypothesis that the contest and TV ads caused your competitor's numbers may be true.  I don't know, and I'm guessing that you don't either.  In order to make this statement, you need to eliminate all other plausible rival hypotheses (as researchers say).

 

If and only if you eliminate all other plausible rival hypotheses, then you can say that the contest and TV ads caused the higher Arbitron numbers.  If not, you’re just guessing.

 

So what do you do?  If I were in your shoes, I would go to your GM and say something like, "While I can't prove with any scientific certainty that our competitor's increases in the last Arbitron were caused by their contest and TV ads, all indications lead in that direction.  I have checked as many other influences as I can, and my analysis shows that all other variables were constant (such as…no format changes, no talent changes, no changes on our radio station, etc.).  It is a given that we must communicate with our listeners.  The information I have collected indicates that our competitor beat us with effective communication, not better programming.  Our business is communications and we have an excellent radio station.  It doesn't make sense that we lose because of poor communications."


AQH Per Person Rating

I was told by someone that there is something called an AQH per person rating that every radio station receives whether or not it's in a rated market. It's not an AQH share, but an actual per person rating. If you're in a small market like Page, AZ, you would then show up in the Phoenix metro book. - Anonymous


Anon: To answer this question, I went to Dr. Ed Cohen, Director of Domestic Radio Research for Arbitron. He said:


"The use of the word "per" in the question throws me, but problems with the nomenclature aren’t surprising.


First, small stations do show up in Maximi$er, especially when you pull out the individual counties. When you pull a Max run for a market and ask for 'all stations,' you'll get just about anyone (commercial and noncommercial) that ever had a bit of cume in the market. A Max client can look at the county that includes Page, AZ and see what stations show up there (also, the user could combine surveys to get a more reasonable sample size; Max needs a minimum of 30).


. . . the local station in Page will show up in that county [County Coverage Report]—assuming they have listeners that have written it down in the diary. It’s an annual report that is run for every county in the country and sold mostly to small market stations.


In Max, you would get an AQH rating, share, persons number, and cume rating and persons. County Coverage has the same, except there are no persons numbers for AQH. [Y]ou would see cume persons and rating, as well as AQH rating and share."


Ed asked me to make a note that the station does need to be an Arbitron client to use the data.
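
Roger's note: For readers tripped up by the nomenclature, here is a minimal sketch of how the standard estimates Ed mentions relate to one another. The population and listening figures below are hypothetical; only the relationships matter.

```python
# Hypothetical county-level numbers; only the relationships matter.
population = 12_000          # persons 12+ in the county
aqh_persons = 600            # average persons listening per quarter hour
total_aqh_listening = 3_000  # AQH persons for all stations combined
cume_persons = 2_400         # unique listeners across the daypart

aqh_rating = aqh_persons / population * 100          # % of the population listening
aqh_share = aqh_persons / total_aqh_listening * 100  # % of all radio listening
cume_rating = cume_persons / population * 100        # % of the population reached

print(f"AQH rating:  {aqh_rating:.1f}")   # 5.0
print(f"AQH share:   {aqh_share:.1f}")    # 20.0
print(f"Cume rating: {cume_rating:.1f}")  # 20.0
```

The AQH rating above (AQH persons as a percentage of the population) is probably the closest standard term to the "per person rating" in the question, but as Ed says, that isn't a term Arbitron uses.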


Arbitrends

Hey, doc, 'splain something to me. My GM tells me, "It's only a TREND." Okay, so . . . (1) What are trends good for? And (2) the Feb-March-Apr trend means what? Trend for April? Trend for all three months? This Asst. PD wants to know. Thanks. - Anonymous


Anon: Inquiring minds want to know . . . Here are your inquiring answers:

  1. Trends are preliminary numbers to feed your paranoia about how your radio station is doing.

  2. The 3-month trend is a rolling average of the past three months of paranoia numbers.

  3. Trends were started because radio people wanted them.

  4. Trends make money for the company that produces them.

By the way, read the last page of your trends. It says:


"The estimates provided by Arbitrends are derived from the diaries that provide the data in the Local Market Reports and are subject to the limitations stated in those reports. Due to these limitations, inherent in Arbitron’s methodology, the accuracy of Arbitron audience estimates cannot be determined to any precise mathematical value or definition. Arbitrends is not part of The Arbitron Company’s regular syndicated service and is not accredited by the Media Rating Council (MRC)."


Arbitrends - 2

I apologize for my lack of research basis in this question, but you are a great teacher for all of us PDs who would rather pick up VD than have you teach us anything...so anyway...

 

I am curious about the phases in Arbitron ratings. How do they work?  I find it peculiar that a station in the first phase can jump from, say, a 3.0 to a 3.5, then in the second phase jump to a 4.3, but when the final phase comes out, the station gets a 3.3 for the book.  I see this happening a lot—where the book numbers don't follow the trends at all.  Why is this? - Anonymous

 

Anon:  Hey, don’t apologize for not understanding something.  The way to learn is to ask questions.  Pick up VD?  I’ll have to admit that I haven’t heard that explanation before.  I think I’ll pass on a comment so I don’t get into trouble with other readers.

 

I have answered questions about Arbitrends in the past, but I thought it would be best to send your question to Bob Michaels, Vice President for Radio Programming Services at Arbitron.  Bob, and many other folks at Arbitron, have always been very gracious in helping me answer questions, and I want to thank Bob for helping me again.

 

Bob said:

 

“Each Arbitron Phase in the Arbitrends service is a three month average.  For example, let’s consider the Winter survey (January, February, March).  Phase 1 of Winter drops January and adds April, so the Phase 1 of Winter is the average of February-March-April.  Phase 2 of Winter then drops the then oldest month (February), and adds the latest (May) to give the average from March-April-May.  The next month (Spring book) then consists of April, May and June.

 

Two things occur when each trend or book is released:  First, the oldest month is dropped; second, the most current month is added.  In the example you describe, when you went from a 3.0 to a 3.5, you could have dropped a low first month, or added a strong third (current) month, or a combination of both.  This is why PDs like to "extrapolate" the most current month.  Using some basic math (or software that is readily available), you can find out if the uptick is due to dropping a low old month or adding a strong new month.  This is the value of using Arbitrends and extrapolating.

 

Now you need to remember that any particular month is only about one third of the sample size for the 3-month period, and fluctuations in the data because of the lower sample size will occur.  But it should give you a direction for the station.  If you are down four or five individual months in a row, you have a trend and a problem.  If you see a trend like this occurring, you can alert the sales manager that the next book will probably be lower than your current book.  Or if you are trending up, you can try to hold the spot rates on your station because you are likely to have a better book with the next survey release.

 

When your station does contesting or a major advertising campaign, do they do it for the entire survey?  Unlikely.  Budget limitations restrict the length of most campaigns.  This is another reason for changes.  One study I did for a major broadcast group showed their big-money contest gave them a 20% increase in listening over the previous year’s period in the few weeks they did the contest.  Since we only have 100 share points to work with, the listening had to come from some other station.  It’s best to look at monthly data and think about which station was advertising or contesting to see if the change in listening habits was due to these limited-time events.  Most PDs in radio today believe advertising and marketing work—and they work well if done right.  This is another reason for so-called “fluctuations.”  It’s actually a change in listening behavior for part of the survey.

 

Remember, each month is part of the quarterly survey, and changes occur.  Some are created by us, others due to sample size.  What you want to look at is the trend—on an individual monthly basis, not as a three month total—to see if the change in estimated audience size is tracking month by month or just fluctuating.”

 

Roger’s comment:  Bob mentioned extrapolation in his answer.  For an explanation of that, see "Extrapolation" below.
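
In the meantime, the month-swap arithmetic Bob describes is simple enough to sketch. The monthly shares below are hypothetical; the point is that each release moves the 3-month average by exactly (added month - dropped month) / 3.

```python
# Hypothetical monthly AQH shares for one station.
months = {"Jan": 2.4, "Feb": 3.0, "Mar": 3.6, "Apr": 4.5}

winter_book = (months["Jan"] + months["Feb"] + months["Mar"]) / 3  # Jan-Feb-Mar
phase_1     = (months["Feb"] + months["Mar"] + months["Apr"]) / 3  # Feb-Mar-Apr

# Each release drops the oldest month and adds the newest, so the
# 3-month average moves by (added - dropped) / 3.
print(f"Winter book: {winter_book:.1f}")                  # 3.0
print(f"Phase 1:     {phase_1:.1f}")                      # 3.7
print(f"Apr - Jan:   {3 * (phase_1 - winter_book):.1f}")  # 2.1

# "Extrapolating": given the published average and the two retained
# months, you can back out the newest month.
apr_estimate = 3 * phase_1 - months["Feb"] - months["Mar"]
print(f"Extrapolated April: {apr_estimate:.1f}")          # 4.5
```

Notice that a modest 0.7 move in the published trend hides a 2.1-point swing in the single month that was swapped, which is exactly why book numbers can seem not to "follow" the trends.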


Arbitrends and Sound

Two-part question, Doc.  First, if everyone says, "It's only a trend," why even bother looking at the trends?  Second, how come some stations that don't sound that great (lousy production and imaging, bad formatics, on-air mistakes) do well, and some stations that sound like they're doing everything right can't get ratings? - Sandy

 

Sandy:  Good questions.

 

1.  For those who don’t know, Arbitrends is a monthly service by Arbitron that shows radio station listening between quarterly survey periods in all continuously measured markets.  The estimates are based on a three-month rolling average.

 

If I remember correctly, Arbitrends was started because radio broadcasters were too nervous (paranoid?) to wait for the book.  Do you recall the philosophy I have mentioned many times in this column?  (Find out what they want, give it to them, and tell them that you gave it to them.)  Well, radio broadcasters didn’t want to wait for the ratings book and Arbitron developed Arbitrends to satisfy the demand.

 

The thing about Arbitrends is that radio people interpret them in two ways.  If the trend is up, the “interpretation” is something like, “Hey, we’re UP…we’re on a roll…we’re doing something right.”  If the trend is down, the “interpretation” is, “Don’t worry about it, it’s only a trend.”

 

The blame for this dual interpretation shouldn’t be directed at Arbitron.  The blame should be directed at the users.  Arbitron merely provided over-anxious radio broadcasters with the requested information.  In addition, Arbitron provides a disclaimer for Arbitrends:

 

“The estimates provided by Arbitrends are derived from the diaries that provide the data in the Local Market Reports and are subject to the limitations stated in those reports.  Due to these limitations, inherent in Arbitron’s methodology, the accuracy of Arbitron audience estimates cannot be determined to any precise mathematical value or definition.  Arbitrends is not part of The Arbitron Company’s regular syndicated service and is not accredited by the Media Rating Council (MRC).”

 

In other words, Arbitrends are very broad indications of radio listening that may or may not be supported in the book.  You ask why people should look at trends.  My answer is: There is no reason to look at them.  Arbitrends are for people who “gots ta know,” regardless of the statistical reliability of the data.

 

Now on to your second question…

 

You ask, "How come some stations that don't sound that great (lousy production and imaging, bad formatics, on-air mistakes) do well, and some stations that sound like they're doing everything right can't get ratings?"

 

Well, Sandy, the problem here is that you say the production, imaging, etc. is lousy.  You say that some stations “sound like they’re doing everything right.”  These are subjective judgments that may not be shared by average radio listeners.  My years of experience in radio research show that average listeners don’t care much about “good” production or “good” imaging or “good” formatics.  And they don’t care much about on-air mistakes as long as the mistake isn’t something like being off the air for 30 minutes.  In other words, what is important to you as a radio person may not be important to a radio listener.

 

To answer your question, you would have to ask the listeners.  They will tell you why the radio stations that are doing things “right” aren’t interesting to them, and they will tell you why the radio stations that are doing things “wrong” are the radio stations they listen to.

 

I’m not suggesting here that average listeners want “junk” radio.  That’s not true.  What I am suggesting is that you can’t equate your evaluation of a radio station to how listeners evaluate the same radio station.  For example, you may say that radio station A has “great formatics, a great flow, is very tight,” and whatever else you want to throw in.  A listener may evaluate the same radio station by saying, “I don’t like it because the DJs talk over the music.”  There ya go.


Bouncing (Up and Down) Ratings

Hi Doc: Love the column.  Our station, and others in the market, suffer from being up in one book and down the next.  You can go back and look at about three years of ratings and see a distinct pattern of being up in the Spring and Fall, and down in the Summer and Winter.

 

I have been here about a year and a half and am trying to get a handle on this.  We're not doing anything drastically different from one book to the next, so I can't figure out why one would be great and the next be mediocre.  Have you seen this in other markets?  Anything specific we should look at?   It would be nice to have two good ones back to back!  Thanks! - Anonymous

 

Anon:  I'm glad you enjoy the column.  Thanks, and on to your question . . .

 

Have I seen this problem before?  Yes, I have seen the same thing in virtually every radio market for the past 30+ years.  The "bouncing" ratings are due to Arbitron's practice of using a different sample in each ratings period.  Different samples mean different people and different sampling error.

 

I'll repeat . . . The numbers of virtually every radio station in the United States are likely to "bounce" around from one book to another because a different sample is used for each ratings period.  The only way to stabilize the numbers is to use the same respondents (a panel study design) over several books or several years.  This is what Nielsen does for its metered sample, and the data are very stable.
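
To see how much bounce fresh samples alone can produce, here is a minimal simulation, assuming a station whose true share never moves. The in-tab size and the share are hypothetical.

```python
import random

random.seed(7)

TRUE_SHARE = 0.05  # the station's "real" share of listening never changes
IN_TAB = 400       # hypothetical in-tab diaries per book

# Eight "books," each drawn from a fresh, independent sample.
for book in range(1, 9):
    mentions = sum(random.random() < TRUE_SHARE for _ in range(IN_TAB))
    print(f"Book {book}: {mentions / IN_TAB * 100:.1f} share")

# The printed shares wander around 5.0 even though nothing at the
# station changed. That wander is sampling error.
```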

 

Arbitron knows the problem exists, and it has been around since Arbitron started in 1949.  However, the company claims that it will use a panel study approach for its Portable People Meter.

 

Two major characteristics change when different samples are used for each book:  (1) The number of people involved in the book (in-tab diaries); and (2) The demographic composition of the sample.  You can check the stability of your samples by looking at the weighting used for each book.  I guarantee that the weighting numbers in your books are dramatically different.

 

Because different samples are used, you'll always have different numbers of males, females, and people in each age cell participating.  If there is a shortfall in any particular cell, the diaries are weighted to compensate for the shortfall.
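
The weighting arithmetic, in a minimal sketch with hypothetical demo cells: each cell's diaries get a weight equal to the cell's share of the population divided by its share of the in-tab.

```python
# Hypothetical market population and in-tab diary counts by demo cell.
population = {"Men 18-34": 30_000, "Women 18-34": 30_000,
              "Men 35-64": 20_000, "Women 35-64": 20_000}
in_tab     = {"Men 18-34": 60,     "Women 18-34": 140,
              "Men 35-64": 80,     "Women 35-64": 120}

total_pop = sum(population.values())
total_tab = sum(in_tab.values())

# Weight = (cell's share of the population) / (cell's share of the in-tab).
for cell in population:
    weight = (population[cell] / total_pop) / (in_tab[cell] / total_tab)
    print(f"{cell}: weight {weight:.2f}")

# Men 18-34 are short in this sample (weight 2.00), so each of their
# diaries counts double; Women 18-34 are over-sampled (weight 0.86).
```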

 

Finally, since Arbitron does use different samples in its diary methodology, it isn't scientifically legitimate to compare one book to another without converting the ratings and shares to Z-scores.  Virtually everyone involved in radio treats Arbitron ratings and shares as real numbers, but they aren't real—they are only estimates that must be interpreted with sampling error.  This is never done, but that's life.

 

What can you do about it?  You can't do anything about the methodology, but you can do something about how the data are interpreted, and that involves using Z-scores.  But you, and everyone else in the radio industry, probably won't do that.
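
For the rare reader who will, here is one common way to do it, sketched as a two-proportion z-test on the AQH ratings from two books. The ratings and in-tab counts below are hypothetical, and treating diary-based ratings as simple proportions ignores weighting and design effects, so take this as an illustration, not Arbitron's math.

```python
from math import sqrt

# Hypothetical AQH ratings (as proportions) and in-tab diary counts.
p1, n1 = 0.048, 950  # Spring book: 4.8 rating, 950 in-tab diaries
p2, n2 = 0.041, 900  # Summer book: 4.1 rating, 900 in-tab diaries

# Pooled two-proportion z-test: is the book-to-book change bigger than
# two independent samples would produce by chance?
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

print(f"z = {z:.2f}")  # about 0.73, well inside +/- 1.96
print("Real change at the 95% level" if abs(z) > 1.96
      else "Within ordinary sampling error")
```

In this example, a 4.8 falling to a 4.1 looks dramatic in a sales meeting, but the z-score says the two books are statistically indistinguishable.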


Condensed Market

Hey Doc: What is the difference between a condensed and a regular rated Arbitron market? - Anonymous

 

Anon:  I sent your question to Bob Michaels, VP Radio Programming Services for Arbitron, so I wouldn't mess up the definition.  He said (thanks to Bob for the help):

 

A Condensed Market is a small market that, normally for cost reasons, wants a lower price and a smaller sample size for its Arbitron report.  Some "smaller" markets have, over time, requested "upgrades" to a Standard market, which uses a bigger sample, is higher priced, and has a bigger book, meaning many more pages and more demo and daypart breakouts than a Condensed market.  Plus, the Standard book has other data pages that a Condensed book doesn't have.

 

Arbitron’s explanation of a ratings book shows the data included in the Standard and Condensed book—click here for a PDF file.


County Numbers

Hey Doc.  Are Arbitron county numbers available anywhere for viewing, much like the bigger markets on the All Access website?  Numbers for our county were released last week, but since we didn’t purchase the results, I’m having a helluva time locating this info.  Any ideas? - Anonymous

 

Anon:  I sent your question to Bob Michaels, Arbitron’s VP/Radio Programming Services.  He wrote and said, “I'm sorry to say that we do not publicize our County Coverage data in any public forum.  These data are only for the private use of our subscribing stations.”


 

Click Here for Additional Arbitron Questions

 

