Validity - 5
Darn it! Once again, I find it impossible to disagree with you. I suppose I should leave New York and return to the Mid-West where ethics, rationality and sanity apparently still exist.
[However] . . . there are some managers who manage only by the research and they do not perform as well (you'll probably challenge this) as those who go beyond the research.
This is what I'm whining about in radio. I'm finding the vast majority of managers, at all levels, are going only by the research or making purely statistical decisions. They don't seem to be going beyond this—therefore, my complaint about sameness and mediocrity.
Of course, my ex-wife thinks I'm wrong 100% of the time. My eight year old son thinks I'm mostly right, but often finds it necessary to explain the world to me. - Michael
Michael: You’ll notice that I edited your post. I try to keep the questions and answers short. I hope you don’t mind . . . but I understand the examples you used about managers and research.
Here are my last comments on what you wrote:
Yes, I will challenge your statement about managers. You say that you are finding that the "vast majority of managers, at all levels, are going only by the research or making purely statistical decisions." I don’t know how many people are included in your sample of "vast majority." I always apply the "logic brakes" when I hear this type of phrase used when someone explains something to me (also "everybody," "all," "most," and a few others). How many is a "vast majority?" 5? 100? 4,000? I don’t know how many managers you know to determine what a vast majority means.
I also can’t accurately assess what you mean when you say that these people are making decisions only by research and statistics. If these people conduct research to find out what the listeners want so they [the managers] can give it to them, I guess I would have to agree. However, I’m also careful here and would apply the "logic brakes" once again because I’m not with managers 24 hours a day and don’t know how they arrive at every decision.
During my years in the business, the problems I have seen related to people using research almost always relate to one of two things: (1) a lack of understanding of the research process; or (2) poor explanations of the data by the researcher. If these two problems are corrected, managers will use research as a guide, not a bible.
(A version of this article appeared in Radio & Records in 2000)
My Research Doctor column on AllAccess.com has given me an opportunity to understand how much some radio people know about research. Although there are variations, the people who submit questions fall into one of three basic groups:
Those who know nothing about research, admit it, and are willing to learn.
Those who think they know a lot about research but don’t since most of their information is urban legend, myth, and inaccuracies that have been passed down from other people.
Those who know a fair amount about research.
After receiving and answering thousands of questions for my Research Doctor column, my guess is that there are probably an equal number of people in each category. However, what I find is that the people in Group 2 are the most argumentative and the least likely to accept the realities of research. But that’s OK. I’m not criticizing anyone here, just pointing out a fact.
However, the people in Group 2 are the most responsible for creating problems with research, both in research design and uses of research. When asked an opinion about research, these people usually begin their comment with, “Well, it seems like . . .” The “it seems like” comment is the problem because the comment is based on opinion, not fact—and opinions mean nothing when it comes to research. (I discussed the problems with “it seems like” comments in another All Access article and will not pursue it here.)
The people in Group 2 have all the answers about every element of research—uses of research, advantages and disadvantages of research methodologies, sampling procedures, screener and questionnaire design, data analysis, univariate and multivariate statistics, and interpretation of results. As I mentioned, however, I have found that most of the comments (and misleading questions) from the people in Group 2 are wrong.
From the comments I receive, I know that many All Access subscribers read the column every day. Because my space in the column is somewhat limited, I wanted to take this opportunity to expand on one area that has been discussed briefly in the column—validity, the umbrella under which research operates.
When most people talk about research, they usually use two terms—reliability and validity. In most cases, these terms are thrown around loosely and most people don’t really know what they mean. So let’s start with that.
Reliability. Reliability in research refers to whether a research study or methodology produces consistent results (not the same, but consistent). For example, if you conduct music tests with your listeners using a 1-7 rating scale and the tests consistently tell you which songs the respondents like and which they do not like, then your method is reliable. If you get results that bounce all over the place from one study to the next, then your method may be unreliable. (Although there may be other causes for the differences in song scores.)
Validity. There are two types of validity—internal and external. Internal validity refers to whether you are measuring what you think you are measuring. For example, if you conduct a music test to gather respondents’ ratings of songs you play for them but after further investigation you find that the test actually collects respondents’ ratings of music tempo, then your method is invalid.
External validity refers to whether your research results can be generalized to respondents outside your sample. If you conduct a research study and find that your results relate only to your sample and no one else, then you have a problem with external validity. The goal of most research is to select a sample of people from a population, conduct a research study, and then generalize the results to the population. If you can’t do that, then your research will be limited in use.
The remainder of this article concentrates on internal and external validity. Because of copyright laws, I need to state that this information draws heavily from Mass Media Research: An Introduction, 9th Edition (Cengage, 2010), the college textbook I wrote with Joe Dominick.
I realize there are many strange sounding words and phrases in this discussion, but you need to learn these things to get a better understanding of research. Learning the language of research is a significant step in the process of understanding what research can and cannot do. If you get confused with anything, please let me know.
Conducting research involves control over the situation. If researchers don’t control the entire process, there is no way to know if the results are “real” or the result of some unknown entity. This is referred to as “ruling out plausible but incorrect explanations of results.” The example I used earlier about music tests relates here. You must be sure that your music test actually collects respondents’ perceptions of songs they hear, and nothing else.
The variables that create possible (plausible) but incorrect explanations of results are called artifacts (or extraneous variables or confounding variables). The presence of one or more artifacts in a research study indicates a lack of internal validity and your study failed to investigate what it was supposed to investigate.
Artifacts in research may arise in many ways. Some of the artifacts that can affect a study include:
1. History. Events that happen during a study may affect the respondents’ attitudes, opinions, and behavior. For example, let’s assume that you conduct callout research for your currents and it takes two weeks to collect the data. Many things can happen between the first day of your callout and the last day that may affect your scores: an artist may be featured on TV, an artist may be arrested for drug possession, and so on. The time when a respondent listens to and rates your hooks may affect the person’s ratings.
History may also affect a telephone perceptual study in the same way. That’s why it’s important to collect the responses as quickly as possible. If the data collection process takes a long time (probably more than two weeks), then the respondents should be coded in reference to when they participated in the survey. Banner points (column headings in tables) can be used to separate the respondents according to the time when they participated. The point to keep in mind is that the potential to confound a study is increased as the time increases between when the first respondent is tested (or asked questions) and when the last person is tested.
2. Maturation. A respondent’s biological and psychological characteristics change during the course of a study. Even getting tired or hungry may influence how a respondent responds in a research study. A good example of this situation is music tests where some research companies test 600+ songs in one session. It’s often easy to spot respondents who are bored with the testing process, and this boredom may affect their scores.
Another example of maturation is in focus groups. If the moderator does not conduct the group properly, respondents will often display signs of boredom or anxiety. In these cases, their responses may not be legitimate.
3. Testing. Testing itself may be an artifact. Although not used frequently in radio, research using pretests and posttests can cause problems—a pretest may sensitize subjects to the material and improve their posttest scores regardless of the type of experimental treatment given to them.
For example, assume that you select a sample of your listeners and give them a test that asks them questions about your radio station. You then show the respondents a few TV spots to find out if the spots are effective in communicating information about your radio station. After viewing the TV spots, you give the respondents the same test they took before seeing the spots.
Let’s say that the test results show that the TV spots do increase your listeners’ knowledge of your radio station. However, this may not be correct. It may be that the respondents learned how to answer the questions when they first took the test and the TV spots had nothing to do with the increase in understanding of your radio station.
4. Instrumentation. This is also known as instrument decay and refers to the deterioration of research instruments or methods during a study—equipment may wear out, hooks may be prepared differently from the beginning to the end, and respondents may become more casual in recording their responses.
Another example of instrument decay relates to perceptual studies, whether they are conducted on the phone, on the Internet, or some other way. To be most useful, a questionnaire must be uniform in its approach. You can encounter instrument decay in perceptual research if your questionnaire uses a variety of ratings scales, includes ambiguous, misleading, or double-barreled questions, and many other things. The design of a questionnaire is very important and it’s not as easy as most people think it is.
5. Statistical regression. This artifact may be present in a variety of ways. It basically refers to the fact that items, concepts, or anything else rated either very high or very low tends to regress toward (move closer to) the mean (average) of the group of items when the test or measurement is conducted another time. This is evident in music tests, where a high-scoring song in one test may be rated lower (closer to the mean) in another test. The regression toward the mean phenomenon has recently been introduced into the analysis of stocks. It is common now to hear stock market analysts discuss the idea that leading stocks tend to fall (toward the mean) and under-performing stocks tend to rise (unless there are extenuating circumstances that create the rise or fall).
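The mechanism behind this is easy to demonstrate with a small simulation (my own illustration, not from the article): give each "song" a fixed true appeal, add random noise to every test, and the songs that scored highest in the first test will, on average, score lower in the second.

```python
# Simulation of regression toward the mean. Each song has a fixed true
# appeal; every test score is true appeal plus random noise. Songs that
# topped test 1 were often helped by noise, so their test 2 average drops
# back toward the group mean.

import random

random.seed(42)  # fixed seed so the illustration is repeatable

n_songs = 1000
true_appeal = [random.gauss(4.0, 1.0) for _ in range(n_songs)]  # 1-7 scale
test_1 = [a + random.gauss(0, 0.8) for a in true_appeal]
test_2 = [a + random.gauss(0, 0.8) for a in true_appeal]

# Take the top 10% of songs from the first test
top = sorted(range(n_songs), key=lambda i: test_1[i], reverse=True)[:100]

avg_1 = sum(test_1[i] for i in top) / len(top)
avg_2 = sum(test_2[i] for i in top) / len(top)
print(f"Top songs, test 1 average: {avg_1:.2f}")
print(f"Same songs, test 2 average: {avg_2:.2f}")
```

Nothing about the songs changed between tests; the drop for the top group is purely a statistical artifact of selecting items by their (noisy) first-test scores.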
6. Experimental mortality. While any research project faces the possibility that subjects will drop out for one reason or another, the problem is compounded in long-term (longitudinal, panel, or tracking) studies. This artifact will become more important in radio research as more radio stations use tracking studies and panel studies on their web sites.
If you ever plan to follow the same respondents for any length of time, you must consider that some people will drop out of your study. If you want to track 100 listeners, then you’ll have to recruit 120 or more at the start of the study to account for those who will drop out.
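The recruiting math behind that "120 or more" is simple, and worth doing explicitly for whatever dropout rate you expect. Here is a back-of-the-envelope sketch (my own, not from the article; the 17% dropout rate is an assumed figure):

```python
# Recruiting math for experimental mortality: if you expect a given
# dropout rate over the life of a panel or tracking study, recruit enough
# extra people so the final panel still meets your target size.

import math

def recruits_needed(target_panel: int, expected_dropout_rate: float) -> int:
    """People to recruit so that, after dropout, at least target_panel remain."""
    return math.ceil(target_panel / (1 - expected_dropout_rate))

# Tracking 100 listeners with an assumed ~17% dropout rate
print(recruits_needed(100, 0.17))  # → 121
```

The steeper your expected dropout, the larger the cushion: at 20% dropout you would need 125 recruits to end with 100.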
7. Sample selection. The type of people included in a research project is obviously very important. In most cases, it is necessary to ensure that the respondents are homogeneous (similar) in many respects. For example, it wouldn’t be very wise to include people who prefer hard rock music in a music test for a soft AC radio station (unless there was a specific reason). Screeners for music tests and focus groups and screener questions for telephone studies usually are designed to ensure that the sample is somewhat homogeneous.
A continuing sampling problem I see (in both radio and non-radio research) is that clients demand unrealistic samples—the screening requirements make it almost impossible to find qualified respondents. For example, a PD or consultant (or someone) asks for females 25-29 who are P1s to WAAA, listen to WAAA’s weekday morning show most often, cume WBBB’s morning show, select a specific music montage, participate in contests, and listen to the radio at least 4 hours a day (and so on). These multi-level screeners define very small populations and the clients get upset when the research company can’t find qualified respondents.
Remember that you limit your potential sample with every screener requirement you include in your screener or questionnaire. What you don’t want to do is “screen” yourself out of an audience. If you make changes on your radio station based on the results of unrealistic samples, you will surely fade away in Arbitron. Radio is a MASS medium, not a medium designed to entertain a handful of people.
8. Demand characteristics. This relates to a respondent’s reactions to a testing or data collection situation and is also referred to as “prestige bias.” A respondent’s awareness of the testing or data collection procedure may influence how the person responds to questions. For example, it is known that some respondents who recognize the purpose of a study may produce only "good" data for researchers. Some respondents don’t want to appear uninformed or dumb, so they will provide answers they think the researcher wants to hear or see—the research situation “demanded” answers and the respondents will provide them.
9. Experimenter bias. Researchers can (knowingly or unknowingly) influence the results of a project by mistakes in observation, data recording, math computations, and interpretation. Focus group moderators are particularly susceptible to influencing the responses of the people in the group. (One way to identify a good moderator is to see how the person responds to respondents’ comments. Good moderators are always neutral in their reactions—nothing affects them.)
Bias can also enter into any phase of a research project if the researcher is influenced by a client who wants a research study to produce specific results (this does happen). The best thing a researcher can do is to ask the client not to discuss the intent of a research project beyond what information is needed to design the study and collect the data.
10. Evaluation apprehension. This is similar to demand characteristics, but emphasizes the point that respondents are usually afraid or hesitant about being measured or tested. It is important for a researcher to do everything possible to ensure that the respondents are comfortable with the situation and not afraid to answer truthfully. Sometimes this isn’t easy to do.
11. Causal time order. The organization of a research project may affect respondents’ answers and interpretation of the data. For example, in a focus group to test various types of direct mail, the respondents’ answers may vary if they are first shown several direct mail pieces and asked to rate them, or if the process is reversed and they discuss the good and bad points about direct mail before they rate sample pieces.
12. Diffusion or imitation of treatments. In situations where respondents participate at different times during one day or over several days, or where groups of respondents are studied one after another, respondents may have the opportunity to discuss the project with someone else and contaminate the research project. This is a special problem with focus groups when one group leaves the focus room at the same time a new group enters.
These are some of the main sources that affect internal validity. As you can see, designing and conducting a research project isn’t as simple as asking a few people some questions and then trying to figure out what they said. It takes more than that.
Keep in mind that all scientific research is subject to error. It is better to know this and attempt to reduce error than to be ignorant about it and conceal the errors.
External validity refers to how well the results of a study can be generalized to the population from which a sample was selected. In other words, a study that lacks external validity cannot be projected to other situations; it is valid only for the sample tested. Results from a music test with 100 respondents wouldn’t be very useful if the results couldn’t be generalized to other listeners.
There are four primary ways to help ensure external validity:
*Use random samples.
*Use heterogeneous samples.
*Select a sample that is representative of the group to which the results will be generalized.
*Repeat the study several times.
You should consider external validity in every phase of your research project, from initial discussions to the presentation of the results. Always ask yourself something like, “Can I generalize these results beyond the sample?” If your answer is “no,” then you need to redesign the project.
As I mentioned at the beginning of this manuscript, research involves an understanding of many things in order to ensure that a study is valid and reliable. There are many items to consider in project design, screener and questionnaire development, sampling, data collection, and data analysis. If you don’t understand something about research, ask; don’t just rely on someone who says “It seems like . . .” Ask for facts, not opinions.
Vanity License Plate
Doc: This isn’t a real research question, but I need your help. I am amused by vanity license plates and always try to figure them out. I saw one today that has me totally stumped. The plate is: IMYY4U. Do you know what this means? Also, if you figure it out, can you tell me how you arrived at your answer? - Anonymous
Anon: I’m not sure if I have ever been asked to explain how I arrived at an answer. That’s an interesting question.
First, I need to tell you that I have seen the same license plate. Unless you live in Colorado, it appears that there is some copying of ideas going around. But anyway….Yes, I know what it means and this is how I arrived at the answer. I’ll use words to replace the letters and number:
I first read it as, “I’m why why four you.” That makes no sense.
Then I said, “I am why why for you.” No sense again.
I stared at it and thought about the two “Ys,” which, when I said it aloud, gave me the clue: “Too wise.”
The answer was then clear: “I am too wise for you.”
There’s your PL8.
Vasectomy Symbol / Design
Doc: As a student of yours at the University of Georgia many years ago, I remember that you used to give a t-shirt with a design you created on it to the person with the highest score on your tests. I can't remember the design, but I remember that it was funny. Do you still have the design? - Mark
Mark: Nice to hear from you. Yes, I remember the design. It's my depiction of a vasectomy using the international "no" symbol (red circle with diagonal line) and a little "swimmer." Here it is:
As I explained in another answer on my column, one of my hobbies is painting, but I'm not talented enough to paint real things, so I only paint abstract things. I always try to paint something that means something to me, rather than just a bunch of lines and blotches. I painted the vasectomy design in the mid-1970s and the symbol is copyrighted. (The photo is a scan of the sheet I sent to the Copyright office when I submitted my application, not a photo of the actual painting.)
This symbol became a lesson for me in reference to learning about individual differences and how people see, perceive, and understand things. After my copyright was granted, I had a few hundred t-shirts printed with the vasectomy design that I thought would be purchased only by men. I thought this because, in my opinion, the symbol means—None can get OUT (the little swimmer depicted in black). My surprise was that mostly women liked the t-shirt because from a female perspective the symbol can also mean—I don't want any of them IN (the little swimmer depicted in black).
All things being equal (down payment, monthly payments, APR, estimated annual usage) which would you recommend, leasing a vehicle for 3 years or financing a vehicle over 60 months for purchase? What if you estimate using the vehicle for more mileage than the lease allows? - Anonymous
Anon: This isn’t an easy question to answer because of the number of variables involved. However, in your example, you would be saving a few dollars with the lease, but you would not have an asset at the end of the payments.
An auto lease is good for a person who doesn’t like to keep cars for a long time and doesn’t care about owning the vehicle. However, remember that if you purchase a vehicle, you are purchasing an asset that decreases in value every year. You probably know that most vehicles are not investments.
I set up a Google search for you to find calculators that help to determine if you should lease or purchase. Just click here: Lease/Purchase.
Viewer Discretion Advised
Long time no hear or in this case, see. If a TV channel posts a message saying, "Viewer Discretion Advised," can it air pretty much what it wants? I have seen unedited versions of the movie "Saving Private Ryan" on networks. If so, why can't we have disclaimers like that on the air and air what we want? - Nikki
Nikki: The TV Ratings System (click here for explanations of the terms), also known as the TV Parental Guidelines, started in 1997 and was established by the National Association of Broadcasters, the National Cable Television Association, and the Motion Picture Association of America. The ratings are displayed on the television screen for the first 15 seconds of rated programming and, in conjunction with the V-Chip, permit parents to block programming with a certain rating from coming into their home. (Edited from NBC.)
By the way, the "viewer discretion" warnings have been used on prime time movies since September, 1987.
The FCC has a lot of information about inappropriate programming and language on television, and you can read all about it by clicking here. In particular, scroll down to the section titled, "The Law," and you'll understand a lot more about inappropriate materials.
If you read the FCC website materials, you'll see that the Commission has established what they call a "safe harbor" for adult material, which runs from 10:00 p.m. to 6:00 a.m. During this time, a TV station or network can air almost anything within the confines of the FCC programming rules. For example, the FCC says that, "Obscene material is entitled to no First Amendment protection, and may not be broadcast at any time."
The time from 6:00 a.m. to 10:00 p.m. is considered the time when children usually watch TV, and during this time, a TV station or network must air a "Viewer Discretion" message if the material may be considered too "mature" for young children. However, TV stations and networks try to avoid any complaints from viewers, so to avoid problems, you'll see the "Viewer Discretion" warning on many TV shows, even such programs as "Cops," which may be considered violent or profane by some viewers.
You asked if TV stations and networks could air just about anything they want. The answer is "no." They can air most things, but not programs that violate FCC rules and regulations.
Addendum: I think you might be interested in this article about the networks taking the FCC to court in reference to the FCC's rules about profanity and obscenity—click here.
Voice - Caffeine and Smoking
Hi, Doc: I love your column. Hope all is copasetic in Denver. I'm in radio and of course, my voice is very important to me. Two questions, please.
1. A radio buddy told me caffeine is bad for the voice? Is that true? I go to Starbucks daily, so I am interested in your response.
2. Of course, smoking is really injurious to one's health. But putting that aside for one moment, does it "deepen" one's voice? Again, I wouldn't smoke and even if it "improves" one's voice it's not worth it. But, does it, because a radio friend said he only smokes because it improves his voice? Any truth to this?
Thanks again for your column. I learn something new from it daily. Thanks. - Anonymous
Anon: I’m glad you enjoy the column and I’m happy to hear that you learn things. Thanks. Here are your answers (Disclaimer: I’m not a medical doctor, so do not interpret this as medical advice):
1. I can’t find any evidence that a “normal” amount of caffeine is bad for the human voice, or anything suggesting that your daily load of Starbucks coffee will harm you. However, if you are concerned, see your doctor.
2. Your buddy smokes only because it “improves” his voice? Oh, please. Your buddy is hosing you. Your buddy smokes because he’s addicted to the nicotine.
Should you (or anyone else) smoke to give you a “better” voice? Yeah, sure. Go ahead. And along with your neat-sounding gravelly, mucus-laden baritone/bass voice, you’ll also be privileged to experience things like coughing and hacking, upper respiratory problems, variations in blood pressure, mouth and lung cancer, and a bunch of other cool things.
I have heard of some wild reasons why people smoke, but a “better voice” is a new one for me.
Thank you for the great column! It is a great asset. I have two questions:
1. Radio is flooded with voice tracking. I’m a jock who never gets the pleasure of doing a live show. There are sweepers that tell the listener what station they are on and promos that inform them about our contests, web page, etc. I find myself wondering what my purpose is on the air. It is as if I am just another commercial since I can’t play requests or answer questions. So, my question is: Why would a station keep a voice tracker when it seems that there is no real need for one? It seems like the only necessary time slots are those that have giveaways. Do you think that the jock slots after 7 p.m. will become obsolete? Or do listeners really care to know the artist and album, making the jock necessary? And how do I refrain from sounding like another commercial?
2. Because I am a rookie, I am starving for all the direction I can get, especially from other talent I can look up to and learn from. In my world, most on-air stuff is voice tracked, so all those breaks are saved into the computer system, and I would like to record some breaks as examples for my own personal improvement. Is this ethical? Do those voice tracks belong to anyone? Should I ask permission, and if so, from whom—the talent or the PD? Thanks! - Anonymous
Anon: You’re welcome for the column. I’m glad you enjoy it. To your questions:
1. While the emphasis in the early years of radio was on making money, there was an almost equal (or greater) concern for providing quality programming. In many cases, the shift now is to an emphasis only on making money (profits…oh, and a concern for "shareholder equity").
The shift in emphasis has caused management to look at any way to cut expenses. Remember that a company’s profit is a function of two items: increased sales and/or decreased expenses. If a company cannot generate increases in sales, expenses must be cut. One expense is with on-air personalities. If one person can do, via voice tracking, two, three, or four shows, there is no need to hire two, three, or four jocks. That’s the reality of today’s radio.
Do listeners care? My research shows that listeners do care if a jock is live or voice tracked. But that doesn’t make any difference. It’s not as important to management as cutting expenses. So on-air people are stuck. If they want to stay in the business, they must accept voice tracking. The only way I can see this changing is if listeners tune out because they want live personalities. Then some genius will determine that radio needs to go back to the basics and will suggest adding live jocks.
You may not agree with this approach, but your opinion doesn’t matter to the people who own and operate radio stations. The radio station or company’s profit is the most important element in running a radio station. Those who are in charge will do anything, including eliminating all live jocks, to increase their bottom line.
2. I’m not an attorney, but I can’t see anything wrong with you taping or recording breaks as long as you don’t sell the information to anyone else. People record things from radio all the time. Do it from home.
Voice Tracking Defined
Can you let me know what voice tracking is? Does it pay well? - Anonymous
Anon: If you have been reading this column for a while, you'll know that I don't like to reinvent the information wheel (so to speak). With that in mind, I'd first like you to read this page from the Internet.
In addition, you might also want to look at some of the references in this search.
Let me know if you have any other questions after you read some of those articles/references.
Does voice-tracking pay well? I have heard many stories, but most seem to lean toward the idea that unless a person has a lot of radio stations, voice-tracking merely pays the bills.
Voices in Demo Tapes
What is the policy if you want to send out a production demo and some of your really good work has another person’s voice (maybe even a local personality) in the spot, bit, or whatever? Is there a policy on it?
I am asking because I want to apply for a position, but if a position opens up in the market I am in now, will I get in any trouble for using material with other people's voices? Thanks. - Anonymous
Anon: I’m not an attorney, so I sent your question to my twin brother, Rick, who is an attorney in Chicago. I’ll paraphrase what he said.
Rick’s educated guess is that you’re probably safe in sending out the tape unless the other voice is a readily recognizable personality. However, he said that it isn’t very likely that anyone would consider this type of use a big deal even if he/she is well known. The only problem would be if the other person says that your use of his/her voice placed him/her "in a false light" or made him/her appear in a demeaning way.
Rick suggested that you check with a local lawyer regarding your state’s laws on the use of another person’s voice or character for commercial gain. His guess is that job applications are probably not considered use for commercial gain and, again, as long as it isn’t disparaging to the person, it’s not likely to lead to anything. To be on the safe side, check with a local attorney.
(Harry) Volkman and Old-Time Video Effects
Howdy Doc: When I was a kid in the 60s, a TV weatherman named Harry Volkman (on WMAQ-TV in Chicago) used a special effect on his maps that I never saw anywhere else. This pre-dated computer-generated images by many years. The maps were absolutely tangible, physical.
A map might show several graphic features, such as the sun, a black cloud with lightning, a large arrow, a rainstorm. Each of these features, which could be removed from and then replaced on the map, were somehow animated. The rays would appear to move outward from the sun. The bolt of lightning would flash. The rain would appear to fall from the storm cloud. The line demarcating a cold front would pulsate and undulate.
I might add that these days, Mr. Volkman's son Eddie, is one-half of the longtime morning show on WBBM-FM in Chicago.
My question is, how did these video effects work? Chroma key did not seem to be involved. Thanks! - Geno
Geno: As I mentioned in my first post, I was pursuing your answer and would post the information as soon as I received an answer. Well, I have received your answer, and here it is.
I sent your note to Eddie Volkman at B96.3 in Chicago (WBBM-FM). Eddie forwarded your question to his dad, Harry, and he provided this explanation:
The process was called technimation, which I believe was a made-up commercialization title. The symbols, and I still have some of them, were made with shiny plastic strips that would reflect light from a spotlight that had a spinning disk in front of it. The disks would polarize the light so that it would vary in intensity, and as the light would reflect off the symbols they would appear to be rotating or flashing. We were the only station in Chicago to use this procedure. It was exclusive to one station in each market.
If Mr. Wimmer would like to see one of these symbols, maybe I could arrange to meet with him somewhere.
There ya go, Geno . . . straight from Harry Volkman himself, and I'd like to thank both Eddie and Harry for their help.
Hey Doc, what the heck is a Vulpes Fulva? You mentioned that in your 4/16 quote to Anonymous . . . "It is as plain as the tail on a Vulpes Fulva." - Dano
Dano: Several years ago, my twin brother saw a photograph of an animal hanging on an office wall. At the bottom of the picture was the name "Vulpes Fulva." He thought the name was odd for what it described and he told me about it. I thought it was funny too . . . to find out that the scientific name for a Red Fox is Vulpes Fulva. Do a search on the Internet for the term . . . you’ll see a picture of a Red Fox.
I explained this story to my neighbor. Our houses back up to an area of about 1,000 acres of open space where, coincidentally, several families of Vulpes Fulva reside. Every time one of the little beasts walks or runs through our yards, I run outside and yell (very loudly), "Vulpes Fulva! Vulpes Fulva! Vulpes Fulva!" All of my neighbors love me and they wait in eager anticipation for the next Vulpes Fulva visit. (I just constantly amuse myself.)
All Content © 2015 - Wimmer Research All Rights Reserved