The sociology of opinion surveys

As a sociologist, I am sometimes asked what I think of the approval ratings politicians and government officials get in opinion surveys. The interest, typically, is in the plausible reasons for the “very high” or “very low” ratings that are reported (particularly when these appear to defy expectations), and not so much in the conditions that may have shaped these results.

Quite often, I find myself offering explanations for results whose actual production I could only imagine, and trust. I warn that what I can give is no more than a guess based on figures whose meanings I do not take as self-evident.

Commenting specifically on the nature of opinion surveys, the French sociologist Pierre Bourdieu put it this way: “What circulates between the science and the non-specialists… is, at best, the results, but never the operations. You are never taken into the backrooms, the kitchens of science.” Knowledge of what goes on in these backrooms is essential to the rational communication of social science findings, says Bourdieu.

I have great respect for the science that goes into survey research. It is most evident in the meticulous effort invested in the drawing of a very small sample that can reasonably represent an entire population with clearly defined characteristics.

For this reason, I do not dismiss survey findings as mere fictions conjured by hired propagandists. There are survey organizations that are run by professionals who take their public function seriously, just as there are fly-by-night poll operators that conduct surveys for the vilest reasons. Polling firms like Social Weather Stations and Pulse Asia Research, which have been around for a long time, have built a creditable record worthy of the public attention they get.

For field research, however, my choice of method is ethnography, which typically requires immersion in the life of a community. Here, the aim is to understand, through participant observation, the way people define their world and their everyday situations—as a condition for interpreting their actions and their communications. One realizes that words and actions are not always what they seem. But the method has obvious limitations: It is time-consuming, and the findings are not easy to generalize.

In contrast, the field work in surveys can be completed within a much shorter period—no more than a week in the case of SWS and Pulse Asia. Data encoding and analysis may take another 2-3 weeks. Using a sample of about 1,200 respondents, a well-constructed survey can claim, with some degree of confidence, that the findings apply to an entire country’s adult population.
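The confidence attached to a sample of about 1,200 can be made concrete with standard sampling theory. A minimal sketch, assuming simple random sampling (actual polls like those of SWS and Pulse Asia use multistage stratified designs, which this only approximates):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion.

    Uses the worst case p = 0.5, which maximizes the variance
    p * (1 - p) and so gives the most conservative figure.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A national sample of 1,200 respondents
moe = margin_of_error(1200)
print(f"margin of error: ±{moe * 100:.1f} percentage points")  # ±2.8
```

This is why a well-run survey of roughly this size can report national results with a margin of error of about plus or minus three percentage points.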

What is gained in speed and generalizability, however, often comes at the expense of assuming too much of the situation of survey respondents. Surveys assume, Bourdieu argues—“at the risk of offending a naively democratic sentiment”—that (1) giving an opinion “is something available to all,” (2) “that all opinions are of equal value,” and (3) that “putting the same question to everyone assumes that there is… agreement on the questions worth asking.”

It is obvious from everyday experience that people don’t always have an opinion on every issue. But they may give one if they are asked to do so, especially if they are made to pick from a menu of options. (Filipinos are particularly known for their pleasantness as interview subjects.) Clearly, such “opinions” do not carry the same weight as those from respondents who have given an issue much thought. Still, their responses are treated equally in the summing up and production of “public opinion.”

Beyond the assumptions Bourdieu mentions, there are other crucial factors worth considering when one attempts to interpret survey results. Aaron Cicourel, in his book “Method and Measurement in Sociology,” points to what he calls “intrusions which impinge upon survey researchers.” Among these is “the problem of controversial subject matter.”

The ideal is that a survey must ask subjects for their opinions on topics they consider important, on which they are reasonably informed, and on which they feel free to express themselves. Anything that interferes with this ideal impinges on the research process. Any show of hesitation on the part of the respondent, or any reservation, has to be noted, evaluated, and considered in the interpretation of the data. This requires of the interviewer a sensitivity and concern for the integrity of responses that may not always be there in the actual field work.

Most importantly, on something as controversial as, say, President Duterte’s performance and his handling of the COVID-19 pandemic, interviewees must be assured there are no “right” or “wrong” answers. Indeed, a guarantee of strict confidentiality is routinely given in almost all opinion surveys. But that guarantee may not mean much to people who cannot tell the difference between a legitimate private survey and one conducted by a government agency. Even if they can, what are the chances that they would not be deterred in their responses by the simple thought that the “kapitan” might know how they answered?

One need not go to the country’s remotest barangays to find people who would readily give “safe” answers rather than say something that could expose them to unwanted drug raids or to being denied “ayuda.” To people who have felt vulnerable and powerless all their lives—and they are the majority in our country—nothing could be more dangerous than expressing their true opinion about their leaders at the wrong time.