Apart from Britain’s Labour Party, the biggest loser in the May 7 general election was the British polling industry. About a dozen survey firms, which together tracked public opinion monthly for the five years preceding the 2015 election, failed to predict a decisive Conservative Party win. They have, as a result, become the butt of cruel jokes. An American commentator wryly observed, “British polls predict today’s election results by tomorrow’s surveys.”
So serious was the 2015 polling debacle that the British Polling Council has launched an independent inquiry into the causes of this ignominious failure. The review will look into the “quota variables” used in drawing samples, and the current methods for assigning weights to responses. The inquiry will ask what changes are needed to ensure that the small samples pollsters work with accurately reflect the changing profile of the British voting population.
The value of opinion surveys as a mirror of public sentiment is incalculable. The periodic snapshots they take of the state of popular thinking on a wide range of issues are a crucial source of lessons for policymakers and project managers. Preelection polls give the media a picture of where political parties stand in the public’s estimation at any given moment. From them, parties might derive insights on where they need to exert greater campaign effort, or how to craft their messages more sharply.
British pollsters do not expect to make much money from running surveys. Neither are they in the business of conditioning public opinion by projecting, for example, who is “winnable” and who is not in the run-up to an election. Any firm that does so is likely to be exposed by a community that thrives on transparency. Rather, they take immense pride in the survey instruments and sampling tools they develop, particularly as Internet polling becomes increasingly popular. They draw their ultimate satisfaction from being able to predict electoral outcomes to the last percentage point.
The one thing that mature democracies will not do is to allow surveys to determine suitability for public office. Modern societies take recruitment for political leadership very seriously. They differentiate between those who show great promise for a political vocation, and those who achieve great things in other spheres. The former go through a long and painstaking process of ideological formation and mentoring, usually under the watchful supervision of a party’s political institute. They know that the complex work of modern governance belongs to a team, and cannot be made to fall exclusively on the shoulders of one individual.
It is, of course, not the fault of surveys that, in societies like ours, the survey rankings of potential candidates have become the most important factor in the selection of the nation’s leaders. Our political parties—not to mention the political kingmakers and the voters themselves—seem to pay no heed to any real measure of preparedness and competence in deciding their electoral choices.
The simple explanation for this, I surmise, is that politics in the Philippines has not been able to fully differentiate itself from the other spheres of society. Thus, Filipinos show their admiration for the boxing icon Manny Pacquiao by rewarding him with a political position, as though politics were merely auxiliary to sports or entertainment. They do not seem to mind that Congressman Pacquiao of the lone district of Sarangani was absent 90 percent of the time from the sessions of the last Congress.
If Manny decides to run for the Senate in 2016, chances are he will top the election. The political support for him appears to cut across social classes, religious affiliations, and ethnolinguistic groups. Whether this level of support will endure long enough to carry him ultimately to the presidency is another question.
I think it is here that preelection surveys, if done conscientiously, can be a force for raising political awareness instead of being insidiously harnessed to political bandwagons. What do I mean?
Take a look at the way British survey firms do their polls, and forget for a moment their recent embarrassment. They gather their data from fieldwork interviews, online questionnaires, and landline telephone calls. Conscious that their sample is supposed to represent the British voting population, they weight the responses according to a mix of variables that include respondents’ gender, age, social class, household tenure, number of cars owned, whether they took a foreign holiday in the last three years, how they voted in the previous election, etc. Informants are further asked to rate, on a scale of 1-10, how likely they are to cast their vote on Election Day. A score of less than four means the response is completely discounted. Five is assigned a half point, and 10 means the response is counted as one whole response.
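The arithmetic of this likelihood-to-vote weighting can be sketched in a few lines of code. This is a minimal illustration, not any firm’s actual method: the text gives only three anchor points (below four counts for nothing, five counts for half, 10 counts as a whole response), so the linear mapping used for the other scores is an assumption, and the party names are hypothetical.

```python
def vote_weight(likelihood: int) -> float:
    """Map a 1-10 likelihood-to-vote score to a response weight.

    Scores below 4 are discounted entirely; otherwise the weight is
    assumed to scale linearly, so 5 -> 0.5 and 10 -> 1.0.
    """
    if likelihood < 4:
        return 0.0  # response completely discounted
    return likelihood / 10


def weighted_shares(responses):
    """Turn (party, likelihood) pairs into weighted vote shares."""
    totals = {}
    for party, score in responses:
        totals[party] = totals.get(party, 0.0) + vote_weight(score)
    grand_total = sum(totals.values())
    return {party: t / grand_total for party, t in totals.items()}
```

Under this sketch, a respondent who names a party but rates their likelihood of voting at 3 simply vanishes from the tally, while a certain voter counts in full; the published percentages are then shares of the weighted, not the raw, responses.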
This system of weighting allows the pollster to make important differentiations that are otherwise lost when disparate responses are uncritically lumped together. I imagine, for example, an informant being asked, after naming his/her choice, how much thought he/she has given to the selection—on a scale of 1-10. It is reasonable to argue that responses that have little or no saliency to the respondents themselves ought to be discounted.
This is one way of overriding a fundamental weakness of all opinion polls—that not everyone who is asked for an opinion necessarily has an opinion, and that not all opinions carry the same weight for the people giving them.
* * *