Long before the official campaign period started, local opinion polling firms had been coming out with survey results purporting to show voters’ preferences among declared and undeclared candidates for the presidency, the vice-presidency and the Senate. Because of their frequency and number, these polls have shaped perceptions, here at home and abroad, of the relative standings and chances of the candidates.
Candidates, would-be candidates and political parties have taken these survey results at face value as scientific, accurate and totally above-board. The mass media have passed them on freely without any critical analysis, and no small portion of the public appears willing to accept them as gospel truth. Public discussion of the merits of the candidates and their political platforms, such as it is, has thus been thrust aside in favor of this undivided attention to the surveys.
This gives the polling firms an excessive and unaccountable power they have not earned. There is no guarantee that this power will not be used against the public interest; there is, in fact, some doubt that the common good ever figured in the recent surveys. Based on the existing trade literature, not only is the methodology of the surveys fatally flawed, but the pollsters have also failed to rise to the high professional and ethical standards of opinion polling in the more advanced countries, notably the United States. Thus, while claiming to serve the public interest, the surveys may in fact have served only some special political and commercial interests.
We submit:
First, that in these surveys local pollsters have used methodologies and techniques that are flawed and discredited and have long been discarded in the United States, where public opinion polling was invented and is now a billion-dollar industry.
Second, that local pollsters have ignored the strict ethical and professional standards that polling associations and professional pollsters elsewhere regard as sacred and inviolate.
Third and finally, that in reporting the survey results, the media have become unwitting purveyors of false findings to the detriment of the public and the electoral process. Unsuspecting journalists have failed to ask the necessary questions that in the US and other countries are standard prior to publishing survey results.
The business of public opinion polling assumes that by getting the opinions and preferences of some 1,500 to 2,500 citizens, pollsters can accurately capture the opinions and preferences of 94 million people, or 48 million voters.
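That assumption rests on the standard margin-of-error arithmetic for probability samples. A minimal sketch of the calculation follows; it is our own illustration, and it assumes an ideal simple random sample:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error at 95% confidence for a simple random
    sample of size n; p = 0.5 maximizes the variance p * (1 - p)."""
    return z * math.sqrt(p * (1 - p) / n)

# The sample sizes cited for national surveys
for n in (1500, 2500):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f} percentage points")
# n = 1500: +/- 2.5 percentage points
# n = 2500: +/- 2.0 percentage points
```

This small error bound holds only when every voter has a known chance of being selected; it says nothing about the errors introduced by the interviewing and sampling methods discussed below.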
But “there are no perfect surveys. Every survey has its imperfections. The world is not ideally suited to our work,” says Reuben Cohen, former president of the American Association for Public Opinion Research (AAPOR). Thus, not even the best pollsters dare to claim that opinion polling is a science.
Given its limitations, every opinion survey must be done with extreme care. This is the duty of every pollster, and it is our right, the nation’s right, to demand the highest professional and ethical standards from the polling firms.
Sad to say, these standards have not been observed at all in the recent surveys.
We put on exhibit first the practice of face-to-face interviewing in the surveys. This is the standard method used by local pollsters for eliciting responses from survey participants. They say so in their own reports.
In this survey method, respondents are tracked door-to-door and interviewed by the pollsters’ personnel in the field. They are asked to respond to the pre-set questionnaire and shown pictures of candidates as appropriate.
In the past, face-to-face interviewing was viewed by US opinion research experts as an appropriate method for conducting opinion surveys. It ostensibly allowed them to select the “right” respondent to be interviewed. After major failures, however, notably the erroneous forecast of Thomas Dewey’s victory over Harry Truman in the 1948 US presidential election, this survey method fell out of favor, so much so that reputable pollsters in the US have now discarded it altogether.
Why was this? We invite some experts to tell us why. Chava Frankfort-Nachmias and David Nachmias, in Research Methods in the Social Sciences, write: “The very flexibility that is the interviewer’s chief advantage leaves room for the interviewer’s personal influence and bias.”
The pollster Kenneth Warren, in his book In Defense of Public Opinion Polling, says: “The cons of door-to-door interviews far outweigh the pros… Because of the sensitivity or personal nature of some questions, interviewers, because they were placed in face-to-face situations, have admitted that they sometimes guessed or fudged responses… These problems are a major source of bias in personal interviews, causing significant contamination of the poll data.”
These methodological and practical problems, according to Warren, doomed face-to-face interviews forever. By 1980, nobody in the US wanted to pay for these “fatally flawed and grossly inaccurate” surveys.
This, however, seems to have had no persuasive effect on our local pollsters.
A second glaring weakness is the extensive and general use of quota sampling to create “a representative sample” of the Philippine population. In quota sampling, survey respondents are picked from different types of people (e.g., by age, sex, religion, income) and from various predetermined areas (e.g., by region of the country, and by urban or rural location).
This method is the most familiar form of non-probability sampling. It is supposed to mirror the proportions of the targeted survey population, but doesn’t. And it proved to be an earth-shaking failure in 1948, after three leading US pollsters (Gallup, Roper and Crossley) erroneously called the US presidential election in favor of Dewey instead of Truman. In the United Kingdom, where it persisted, it was blamed for the failure of the pollsters to predict Prime Minister John Major’s victory in 1992.
“Quota sampling could never work in practice,” says Professor Warren. “Not only could pollsters not know the exact demographics so they could pick a representative sample that actually reflected the proper demographical proportions, but it was naïve to think that the interviewer could manage to interview the precise people needed to fill each quota.”
Thus today, reputable US pollsters rely almost exclusively on probability random sampling to create a “representative sample,” says Warren.
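The difference between the two approaches can be made concrete. The sketch below is our own illustration, using a hypothetical voter list and invented quota cells, not any firm’s actual procedure:

```python
import random

def probability_sample(voter_frame, n):
    """Simple random sampling: every voter in the frame has an equal,
    known chance of selection, so the sampling error can be quantified."""
    return random.sample(voter_frame, n)

def quota_sample(population, quotas):
    """Quota sampling: predetermined cells (here by sex and area) are
    filled with whoever is found first; selection within each cell is
    left to the interviewer, so the error cannot be quantified."""
    sample = []
    for (sex, area), size in quotas.items():
        matches = [p for p in population
                   if p["sex"] == sex and p["area"] == area]
        sample.extend(matches[:size])  # first found fills the cell: the flaw
    return sample

# Hypothetical usage, with invented cell sizes:
# quotas = {("F", "urban"): 375, ("M", "urban"): 375,
#           ("F", "rural"): 375, ("M", "rural"): 375}
# sample = quota_sample(voter_list, quotas)
```

In the quota version, taking the first respondents who match each cell stands in for the interviewer’s discretion in the field; because selection within a cell is not random, no formula can say how far the result strays from the true population proportions.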
Why then do local pollsters continue to use quota sampling and face-to-face interviewing for their surveys? Why haven’t they adopted probability random sampling, which has protected US opinion polls from using contaminated data?
Of course, the same methodology is also still used in Eastern Europe, Africa and Latin America. But that is no excuse, given the high claim our local pollsters make for the supposedly advanced state of knowledge in their trade.
The situation would not have been so bad were the surveys meant simply and solely for the private consumption of clients. But as opinion polls have become a hot commodity and the stakes and rewards have gone up, pollsters have been led to make bigger and bigger claims for their products and to throw standards out the window.
Professional standards are virtually non-existent in the local opinion polling industry. There is no law regulating the conduct of opinion polling, and there is no professional association of pollsters to set and enforce standards of conduct and of disclosure and to ensure “the reliability and validity of survey results.”
There is a professional association of market research firms, MORI (Market Opinion Research Inc.), but market research is markedly different from public opinion research. Consequently, opinion polling firms can pretty much do as they please. They set their own standards and parameters for the conduct of their polls, and they release findings virtually at will.
In the US, public opinion polling is not regulated by law but is self-regulated through the National Council on Public Polls (NCPP) and the American Association for Public Opinion Research (AAPOR).
Both associations provide the principles and standards of disclosure, which include, among others, the following:
1. Who sponsored the survey, and who conducted it.
2. The sampling method used.
3. The population that was sampled.
4. The size and description of the sample that serves as the primary basis of the survey report.
5. The exact wording of the questions asked, the order in which they were asked, and the text of any instruction or explanation to the interviewer or respondent that might reasonably affect the response.
6. A discussion of the precision of the findings, including estimates of sampling error and a description of any weighting or estimating procedures used.
7. Which results are based on parts of the sample rather than the total sample, and the size of such parts.
8. The method, location and dates of data collection.
If we had counterparts to these private associations, we would not be seeing the extravagant claims for opinion surveys and the excesses by pollsters that we see today. We would have polling firms that are a lot more modest about their work, and a lot more careful about their pronouncements regarding the opinions and sentiments of our 94 million people.
And we would not be searching in vain on their websites for their survey samples and how they were created, the names of politicians they had invited to participate in the survey at P100,00 for every “rider question” about themselves, who accepted the invitation, and what “rider” questions were thrown in.
We cannot complete this presentation without discussing briefly the unwitting part the media have played in allowing opinion poll results to dominate public perceptions of the campaign. This would not have been possible had dubious opinion polls not been reported so energetically in the media, without an iota of analysis. The public would have had a better appreciation and understanding of public opinion polling had the media been a little more critical and vigilant.
In the US, the media normally ask the following 20 questions before publishing the results of any opinion poll:
1. Who did the poll?
2. Who paid for the poll and why was it done?
3. How many people were interviewed for the survey?
4. How were those people chosen?
5. What area (nation, state or region) or what group (teachers, lawyers, Democratic voters, etc.) were these people chosen from?
6. Are the results based on the answers of all the people interviewed?
7. Who should have been interviewed and was not? Or do response rates matter?
8. When was the poll done?
9. How were the interviews conducted?
10. What about polls on the Internet or World Wide Web?
11. What is the sampling error for the poll results?
12. Who’s on first?
13. What other kinds of factors can skew poll results?
14. What questions were asked?
15. In what order were the questions asked?
16. What about “push polls”?
17. What other polls have been done on this topic? Did they say the same thing? If they are different, why are they different?
18. What about exit polls?
19. What else needs to be included in the report of the poll?
20. So I’ve asked the questions. The answers sound good. Should we report the results?
The reason for asking these questions is plain enough: there are good polls and bad polls, and every poll should be judged guilty until proven otherwise. Some polls are, of course, more reliable than others, but any reliable poll requires the following key ingredients:
· Adherence to Professional Standards and Ethics
· A Well-developed, Intelligent, Yet Doable Research Design
· A Carefully Drawn and Used Representative Sample
· A Well-designed Questionnaire
· Well-trained and Professional Interviewers
· Careful Coding and Tabulation of Raw Poll Data
· Thorough and Insightful Analysis
Sadly, these ingredients are absent in local polling. Local pollsters have sacrificed rigor to the demands of clients and interest groups or to other considerations. Whether it is the polling firms or their clients who are responsible, we are disturbed by the effort to systematically and deliberately manipulate public perceptions of the candidates in order to narrow down the people’s choice to a few candidates, even before the start of the race.
We are shocked that survey methodologies, techniques and practices that have failed and been completely discarded in the United States and other advanced countries are being used in local opinion surveys without any mention of their limitations.
We deplore the use of these questionable survey findings to condition the minds of the Filipino public and the media who have grown to trust opinion polls, largely because of hype.
To gain a full understanding of the deficiencies of local opinion polls, of the damage they have inflicted and of how they can be corrected, we shall now invite one or two well-known US survey experts to critique local polling practices and submit appropriate recommendations.
In the meantime we are calling on the media to desist from giving further uncritical publicity to these dubious local polls and make an earnest effort on their own to correct or at least mitigate the harm already done. We invite the media to follow the actual campaign of the various candidates and see for themselves how it is being received on the ground.
Finally, in order to come up with correctives and determine culpability, if any, for the harm done to the public by irresponsible opinion polling, we are asking a panel of legal experts to propose the appropriate legal and professional remedies.
Thank you for your attention.
*** This paper was originally presented at the Fernandina Media Forum in Club Filipino, San Juan, on 17 February 2010.