Most people have never participated in an opinion survey, nor have they seen one being conducted. This may sometimes make them wonder if the opinion surveys reported in media are real. I can assure skeptics that they are real. But the probability of being picked as one of the 1,200 respondents in a typical nationwide survey is rather slim. It might be easier to win on a single jueteng bet.
The science that goes into drawing a small sample to represent a large population is well established. Survey “respondents” do not come forward to volunteer an opinion, as people sometimes do when they call in or register a vote in a radio or TV program. Respondents are meticulously selected at random from a set of individuals or households falling under a given category or sector.
Having said that, I want to raise some issues about the kind of public opinion that is expressed in recent opinion surveys. This is not meant to diminish the value of surveys, or denigrate the competence of survey firms, or attack the intelligence of their respondents. It is rather a plea for survey organizations to exert a little more effort to capture and interpret public opinion more faithfully, and to account for its movement over time.
I believe that, whether we like it or not, opinion surveys are going to play an increasingly weighty role in our political processes. For better or worse, they are likely to overshadow conventional commentary, such as that found in the op-ed pages of newspapers, as shapers of public opinion or as predictors of public behavior in crisis situations. Thus, more than ever, there is a need to understand precisely what the numbers they churn out represent.
The sociologist Pierre Bourdieu once said: “One of the most pernicious effects of opinion surveys is to put people in a position where they must answer a question they have never thought about.” I can personally attest to that as a sociologist who has had to face journalists asking general questions about a topic they have not bothered to read up on. For example, they may ask: “What is your opinion of the peace process with the Moro Islamic Liberation Front?” To such a question, I would often reply, “What exactly do you mean?” It is how I would probably respond to many of the questions that Pulse Asia Research and Social Weather Stations have asked in countless surveys. Even when I know the subject, I have always been hesitant to offer a ready-made answer to a question that conflates a variety of complex issues into a single proposition.
I doubt if this reluctance is just an idiosyncrasy of professors. It seems to come out in many Pulse Asia and SWS surveys, but is expressed in a number of ways—as “no response,” “don’t know,” “undecided,” “neutral,” “somewhat this or somewhat that,” or any combination of words that allows a respondent to say something on a subject that he may, in fact, be completely ignorant about.
Last week, SWS released the results of a March 2015 survey on “Opinion on Government’s actions regarding the tension in the Scarborough Shoal.” The question asked was: “In general, do you approve or disapprove of our government’s actions regarding the tension between China and the Philippines in the Scarborough Shoal?”
I wonder if any of the respondents had the audacity to ask, which government actions? The filing of the case against China before the international tribunal on the law of the sea? The Philippine rejection of bilateral talks with China? The campaign to get the support of the Asean and Western powers for the Philippine position? The move to build closer military cooperation with the United States and Japan? President Aquino’s warning against a Hitler-style annexation of territory in these disputed waters?
Unfazed by the complexity of the general question put to them, 16 percent of the respondents expressed “strong approval” of the government’s actions, and 19 percent registered “strong disapproval.” I was pleasantly surprised to see that only 5 percent admitted having “not enough knowledge” on the subject.
But, in the report of findings, the percentage of those who “approve” rose to 49 percent, after the 33 percent who “somewhat approve” were added to those who “strongly approve.” By the same token, those who “disapprove” went up to 46 percent after the addition of the 27 percent who “somewhat disapprove.”
I am not sure what reasoning lies behind this mixing of two different responses. To me, the qualifier “medyo” or “somewhat” has a closer affinity to indecision than to a strong sentiment of approval or disapproval. This makes it more reasonable to lump the “somewhats” with those who did not answer due to “not enough knowledge.” Collectively, they would be the “undecided”—and they constituted 65 percent of all respondents! On so important a national issue, only 35 percent had any categorical opinion to share.
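The two ways of grouping the same figures can be laid out explicitly. Here is a minimal sketch in Python using the percentages quoted above; the 33 percent for “somewhat approve” is inferred from the reported 49 percent total minus the 16 percent who strongly approve, and the category labels are mine, not SWS’s:

```python
# Percentages of respondents in the March 2015 SWS survey, as cited
# in the column ("somewhat approve" inferred as 49 - 16 = 33).
responses = {
    "strongly approve": 16,
    "somewhat approve": 33,
    "somewhat disapprove": 27,
    "strongly disapprove": 19,
    "not enough knowledge": 5,
}

# The reported grouping: fold the "somewhats" into approval/disapproval.
approve = responses["strongly approve"] + responses["somewhat approve"]
disapprove = responses["strongly disapprove"] + responses["somewhat disapprove"]

# The alternative grouping argued for here: treat the "somewhats" and
# "not enough knowledge" as undecided, leaving only categorical opinions.
undecided = (responses["somewhat approve"]
             + responses["somewhat disapprove"]
             + responses["not enough knowledge"])
categorical = responses["strongly approve"] + responses["strongly disapprove"]

print(approve, disapprove)    # 49 46
print(undecided, categorical) # 65 35
```

The same five numbers yield either a near-even 49–46 split or a picture in which 65 percent of respondents are, in effect, undecided, depending entirely on how the middle categories are assigned.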
Indeed, Bourdieu cautioned against regarding data of this nature as public opinion—i.e. as “something that can be formulated in discourse with some claim to coherence.” At best, he says, they offer a glimpse of people’s “profoundly unconscious system of dispositions that orients their choices in extremely different areas ranging from aesthetics to everyday economic decisions.”
If it is public opinion we are after, he argues, it would be better to simulate the situation in real life by asking people to take positions in relation to explicit opinions that have already been formulated and put forward by various groups in the public sphere.
* * *