
More than 50 years ago, Seymour Lipset summarized the big discrepancies in the polls about domestic issues, Presidential candidates, and foreign policies.  The issues pollsters looked into were eerily similar to those to the fore now.  In November 1975, pollsters asked whether people favored giving New York City federal funds to rescue it.  The answers differed significantly depending on whether pollsters got them from face-to-face interviews or over the phone.  In January 1975, Gallup and Harris asked people to choose between two methods of reducing gasoline consumption: one rationing, the other raising prices.  The wording of the questions was different, and so were the replies: in one poll, 37 percent wanted rationing; in the other, 60 percent.

Such differences were typical of every topic pollsters raised, whether in the polls Lipset examined or in those he conducted himself with colleagues.  Even when polls used similar methods of selecting respondents and were conducted at the same time, differences in wording produced drastically different results.  Here are two examples, relevant now too, showing how Gallup and Harris, two of the largest polling companies at the time, posed questions, though we do not know how their staff weighted the answers or handled the complexities of analyzing the data.  Google's recent failure to answer simple, straightforward questions about historical figures, which stemmed from the way its employees (mis)organized massive data and wrote biased algorithms, reflects similar issues that polling companies had to deal with too, but which were more easily hidden by the veil of words.

Gallup asked whether the U.S. should stay in or get out of Korea, with the question including the information that “Communist China had forces far outnumbering the U.S.’s.”  In 1974, Harris asked, “In general, with the Russians arming Syria and Egypt, do you think the U.S. is right or wrong to send Israel the military supplies it needs?”   With such wording, in both cases two thirds of respondents preferred the alternative the wording pointed to.

To show just how much tendentious wording impacts published poll numbers, here are some figures that Lipset summarized in a section entirely dedicated to “The Polls and the Middle East.”  They show that not much has changed over the last 50 years, neither in methods of polling nor in tendentious wording.

In December 1974, the New York Times published a Harris result stating that respondents favored, 66 to 24 percent, sending Israel all the military equipment it needs. One month later, a Yankelovich poll reported that 57 percent were against doing so. In the same survey, however, 45 percent were in favor of “military aid,” but only 28 percent when the wording of the question was different. That same month, Gallup came out with just 16 percent supporting specific military aid, with 55 percent saying to stay out of the conflict.  A few months later, Gallup published that 54 percent supported sending either military supplies or even troops.

The wording and framing of the questions alone make sense of such an unreliable cacophony, although there may be other reasons too, related to methods of sampling and ways of drawing statistical inferences, more on which below.  The Harris poll asked, “As you know, the U.S. has sent planes, tanks, artillery and other weapons to Israel.  The Russians have sent similar military supplies to Egypt and Syria. In general, [then], do you think the U.S. is right or wrong to send Israel the military supplies it needs?”  The Yankelovich polls’ questions were something very different.  One poll asked, “The U.S. sends arms and military equipment to a number of foreign countries. Do you personally feel that the U.S. should or should not send arms to Israel?”   There is nothing new about judging by “feelings” rather than by what people might actually know.

There were many more ill-defined questions in the polls, with answers ranging from 66 percent to 16 percent in favor of sending arms and/or troops to aid Israel.  However, Lipset found one consistently wide discrepancy about the Middle East, between academia and the rest of the public, when asking whether the U.S. should pursue a more neutral policy there and pressure Israel into giving in to Arab demands: 74 percent of academics were in favor of the first, and more than 50 percent of the latter.  The more things change, the more they stay the same.  Polls replicating all the above are now again in the news, though without mentioning the reservations the polling industry is well aware of about their reliability, especially when they disproportionately poll the younger generations, as the numbers have revealed for a long time.

The wild fluctuations in polls are due not only to the way questions are framed and the way "editors" in polling companies retain or discard answers, but also to the ways the polls are carried out, which footnotes in the media never reveal. There is a sampling method called "opt-in," which uses self-reported information from potential participants to select panelists for surveys. Then there is "probability sampling," a method of selecting units for observation such that each unit in the population has a known, positive probability of selection.   Although the opt-in method is known to have much larger errors than the other, it is cheaper.
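The difference between the two methods can be illustrated with a toy simulation (a hypothetical sketch, not any polling firm's actual procedure). Assume a population in which 40 percent support some policy, and assume, for illustration, that supporters are twice as likely to volunteer for an opt-in panel:

```python
import random

random.seed(42)

# Hypothetical population of 100,000: true support is exactly 40%.
population = [1] * 40_000 + [0] * 60_000

# Probability sample: every unit has the same known chance of selection.
prob_sample = random.sample(population, 1_000)

# Opt-in pool: supporters (x == 1) volunteer at twice the rate of
# non-supporters -- the assumed self-selection skew.
opt_in_pool = [x for x in population if random.random() < (0.02 if x else 0.01)]
opt_in_sample = random.sample(opt_in_pool, 1_000)

print("true support:       40.0%")
print(f"probability sample: {100 * sum(prob_sample) / len(prob_sample):.1f}%")
print(f"opt-in sample:      {100 * sum(opt_in_sample) / len(opt_in_sample):.1f}%")
```

Under these assumptions the probability sample lands near the true 40 percent, while the opt-in sample drifts toward roughly 57 percent (the share of supporters among volunteers), and no weighting footnote in a news story would reveal that.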

In September 2023, the Pew Research Center published a detailed study of polls.  It turns out that on 25 variables for which subgroup-level benchmarks were available, the online opt-in samples averaged 11.2 percentage points of error for 18- to 29-year-olds and 10.8 points for Hispanic adults, each about 5 points higher than for U.S. adults overall.

In the online opt-in samples, an average of 8% of all adults, 15% of 18- to 29-year-olds and 19% of Hispanic adults answered “Yes” on at least 10 of 16 Yes/No questions that were asked of every respondent. The corresponding shares on the probability-based panels were between 1% and 2% for each group. Similarly large discrepancies were reported on, among other topics, receipt of government benefits (Social Security, food stamps, unemployment compensation or workers’ compensation).  Pew Research concluded that this consistent pattern of discrepancies suggests that much of the error in the online opt-in samples is due to respondents who either do not answer questions truthfully or answer thoughtlessly.

Indeed, it has long been known that while opt-in, self-administered sampling methods do have advantages, including cost, timeliness, and convenience for respondents, they are not the right methodological fit for all studies.  Studies related to political questions in particular, domestic or foreign, need "to maintain trends with long histories of interviewer-administered methodologies, such as the Gallup Poll Social Series (GPSS)," as Gallup's own site admits. But because of cost and time considerations, this is rarely done.

With the media endlessly, breathlessly reporting on poll after poll, without once noting reservations about what the numbers may actually mean, or why the standard footnote of a 3-4 percent margin of error may be entirely meaningless (much depends on how and where the "standard" 1,000 people self-selected to answer), perhaps it is time for journalists citing polls to become a bit knowledgeable about statistics. If this cannot be done, perhaps the media should require polling companies to publish their own confidence in their 3-4 percent error figure, and be held liable if the numbers prove inaccurate because they do not match the sampling method.  And polling companies should be blunt about the strong limitations of the self-administered sampling method in many cases.
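That footnoted figure is nothing more than the textbook margin of error for a simple random sample, 1.96 standard errors of a proportion; the formula assumes probability sampling and says nothing about a self-selected panel. A minimal sketch of where the "standard" number comes from:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) with the "standard" 1,000 respondents:
print(f"{100 * margin_of_error(0.5, 1_000):.1f} points")  # prints "3.1 points"
```

The familiar ±3 points is simply this worst-case value for n = 1,000; applying it to an opt-in sample borrows the credibility of a calculation whose assumptions were never met.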

The best alternative would be to let betting markets deepen, subject to CFTC oversight (as "betting on ideas" is not "gambling" but complements existing financial markets), a case now being debated in the courts.  Putting money where your thoughts are carries far more credibility than answering polls, which has no personal consequences: people may answer superficially or thoughtlessly, since they have nothing at stake.

This piece is just a brief reminder of that.  Only putting money where your thoughts are produces information, though it can produce disinformation too, when there is monetary and political interest in such disinformation and when betting markets are shallow.

The article draws on Brenner’s Force of Finance, and “Toward a New Bretton Woods Agreement” (2017). 


 


