For Investors Trying to Price Risk, AI Polling Is Long Overdue

In August 1955, Isaac Asimov published a short story called “Franchise” that described the 2008 presidential election. By then, a supercomputer called Multivac had made actual voting obsolete. Instead of polling millions of citizens, Multivac selected one perfectly representative American — Norman Muller, a department store clerk from Bloomington, Indiana — interrogated him about the price of eggs and other seemingly trivial matters while monitoring his physiological responses, and computed the national result. Norman never voted for any candidate. He didn’t need to. The machine already knew.

Asimov was inspired by a real event: UNIVAC’s correct prediction of the 1952 presidential election from early returns, which stunned a television audience expecting a long night of counting. The logical endpoint of that capability, Asimov thought, was a computer that needed only one data point. His story is usually read as a warning. What it actually is, I think, is a prophecy that arrived early and in the wrong form.

The current version of this drama involves companies with names like Aaru, which recently achieved a $1 billion valuation by doing what they call synthetic sampling: using large language models to simulate how human respondents would answer survey questions. Give the AI a demographic profile — white, college-educated, $70,000 a year, Utah — and ask it how that person would vote. Repeat a few thousand times. Skip the actual humans entirely. Critics, including competing pollsters and forecasters, have pointed out that these AI “polls” are not polls at all. They collect no new data. They are models predicting what a poll would show, not measurements of what people think.
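Aaru's actual prompts and models are not public, so the loop described above can only be sketched schematically. The stub below stands in for an LLM call, drawing answers from invented per-profile probabilities; every name and number in it is a hypothetical placeholder, not Aaru's method.

```python
import random
from collections import Counter

# Hypothetical stand-in for an LLM interview: these response probabilities
# are fabricated purely to make the aggregation loop runnable.
SUPPORT_RATES = {
    ("college", "UT"): 0.38,
    ("no_college", "UT"): 0.55,
}

def simulate_respondent(profile, rng):
    """Stand-in for asking an LLM how one simulated person would vote."""
    p = SUPPORT_RATES[(profile["education"], profile["state"])]
    return "candidate_a" if rng.random() < p else "candidate_b"

def synthetic_poll(profiles, n_per_profile=1000, seed=0):
    """Repeat the simulated interview many times and tally the answers."""
    rng = random.Random(seed)
    tally = Counter()
    for profile in profiles:
        for _ in range(n_per_profile):
            tally[simulate_respondent(profile, rng)] += 1
    total = sum(tally.values())
    return {choice: count / total for choice, count in tally.items()}

profiles = [
    {"education": "college", "state": "UT", "income": 70_000},
    {"education": "no_college", "state": "UT", "income": 45_000},
]
result = synthetic_poll(profiles)
```

The structural point survives the toy numbers: no human is contacted, and the output is entirely a function of what the model already believes about each demographic cell.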

That criticism is correct. It is also, in the most important application of this technology, beside the point.

The straw man in the room

The conventional critique of AI polling implicitly assumes that the purpose of opinion research is to forecast election results, and that traditional polls are the gold standard against which synthetic sampling must be measured. Neither assumption holds up.

Consider what polls are actually measuring. Response rates for telephone polls have collapsed from around 35% in the 1990s to roughly 6% today. The people who answer are not a random sample of the population; they are a specific kind of person who answers unknown phone numbers and stays on the line for fifteen minutes. Pollsters compensate with increasingly heroic weighting assumptions. The result is a model masquerading as measurement — useful, often insightful, but nowhere near the direct readout of public opinion it is taken to be.
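To see how much heavy lifting those weighting assumptions do, here is a minimal sketch of post-stratification, the standard adjustment when a sample's composition diverges from the population's. All the cell shares and support rates below are invented for illustration.

```python
# Invented shares: seniors answer the phone far more often than young adults.
population_share = {"age_18_34": 0.30, "age_35_64": 0.50, "age_65_plus": 0.20}
sample_share     = {"age_18_34": 0.10, "age_35_64": 0.40, "age_65_plus": 0.50}

# Each respondent's weight is how under- or over-represented their cell is.
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}
# Young respondents count 3x (0.30 / 0.10); seniors count 0.4x (0.20 / 0.50).

def weighted_estimate(cell_support, weights, sample_share):
    """Weighted average of support across cells, normalized by total weight."""
    num = sum(cell_support[c] * weights[c] * sample_share[c] for c in weights)
    den = sum(weights[c] * sample_share[c] for c in weights)
    return num / den

# Invented per-cell support rates for some proposition.
cell_support = {"age_18_34": 0.60, "age_35_64": 0.50, "age_65_plus": 0.40}
raw = sum(cell_support[c] * sample_share[c] for c in sample_share)  # 0.46
adjusted = weighted_estimate(cell_support, weights, sample_share)   # 0.51
```

The five-point swing between the raw and adjusted figures comes entirely from the weighting model, not from any new measurement. That is the sense in which a modern poll is a model wearing the costume of a measurement, and the costume gets thinner as response rates fall.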

Elections are worse. We treat them as the definitive expression of the popular will, but they are a 19th-century technology for aggregating preferences, and a remarkably imperfect one. The secret ballot wasn’t adopted in Australia until 1856, in Britain until 1872, or across American states until the 1880s and 1890s. Before that, voting was typically oral, public, and easily purchased or coerced. In the United States, anything approaching universal suffrage in practice did not arrive until the Voting Rights Act of 1965 — a decade after Asimov’s story was written, and longer still in enforcement.

When critics of synthetic polling ask “but is it as accurate as a real election?” they are implicitly assuming that real elections reliably capture the will of the people. The left has always argued they don’t — suppression, disinformation, false consciousness. The right has always argued they don’t — silent majorities, coastal elites, rigged counts. They are not both entirely wrong.

The spokespeople problem

There is a domestic version of this same epistemic failure, and it is pervasive. Democratic societies are full of people who claim to speak for constituencies far larger than their actual support. Spokespeople for “the people,” “the proletariat,” “Real Americans,” or “the moral majority” have a way of polling in the single digits when they appear on actual ballots. Advocates who claim to represent LGBTQ+ Americans, people of color, veterans, union members, or teachers routinely find, when independent polls are conducted, that their positions command nothing like the unanimous support within their claimed constituency that their rhetoric implies. The gap between the spokesperson and the spoken-for is one of the most reliable features of democratic politics, and it is almost never honestly acknowledged by the spokesperson.

This is not primarily a problem of bad faith, though bad faith is sometimes present. It is a problem of incentives. In the absence of objective information about what a constituency actually thinks, anyone with a platform and a plausible identity claim can fill the vacuum. The incentive is to assert that the group thinks what you think, or what your donors think, or what your theory of history says they ought to think. The assertion costs nothing and is almost impossible to disprove to an audience already sympathetic to the speaker.

AI polls are nowhere near perfect. They have their own biases and blind spots, built from training data that reflects historical patterns and, in some cases, the assumptions of the engineers who built the underlying models. A synthetic sample of “union members” will reflect what union members have said and done in contexts that were observed and recorded — which may not capture what they actually think about a novel question today. These are real limitations, and anyone using AI polling results should weigh them seriously.

But AI polls are not deliberately constructed to fit a pre-existing narrative. They are not funded by advocacy organizations with a stake in the answer. They cannot be satisfied by selecting a convenient panel of sympathetic voices and presenting them as representative. The same cannot be said for the alternative — which, in most cases, is not a rigorous independent poll but simply the loudest available claim about what some group believes.

The comparison that matters

Here is where I part ways with the conventional critique of AI polling. The question is not whether AI polls accurately predict election results in places where we already hold high-quality elections and conduct thousands of polls. That is like asking whether Copernican astronomy accurately predicted Ptolemaic epicycles. The point of a better framework is not to reproduce the outputs of the old one.

In the United States and other advanced democracies, opinion data is genuinely abundant. We have exit polls, favorability trackers, prediction markets, careful academic surveys, and — at the end — actual election returns, all concentrated on the same narrow set of questions about the same relatively small set of offices. If you want to know what American voters think about a presidential candidate or a major ballot initiative, you are not operating in an information vacuum. AI-generated synthetic responses can add value at the edges — especially for constituencies that are hard to reach or even to define — but critics are right to be skeptical of claims that AI polls will make other opinion measurements and election forecasts, much less elections themselves, obsolete.

The more important application of AI polling is elsewhere, in places where the information vacuum is real: countries that do not hold fair elections, or any elections at all.

Before the United States and Israel struck Iran, a very relevant question was whether the Iranian people would vote for regime change if given the chance. We had no reliable way to answer that. Polling inside authoritarian states is possible but constrained by fear of the regime, limited in reach, and subject to strategic misrepresentation by respondents who have learned that honest answers can be dangerous. AI systems trained on Iranian social media, diaspora communications, samizdat-style internet traffic, consumer signals, and carefully designed sampling of the reachable population could provide an estimate — imperfect, but better than nothing. And “nothing” was the previous answer.

The same logic applies to dozens of other countries where the official election result tells you everything about who controls the counting and nothing about what the people want. North Korea holds elections. So does Belarus. So did Venezuela. So does Russia, where Vladimir Putin reportedly received 87% of the vote in March 2024. These numbers are measurements of regime control, not public preference. For these populations — for the hundreds of millions of people whose genuine political preferences are unknown and perhaps unknowable through any conventional survey method — synthetic AI polling is not a competitor to the existing toolkit. It is the only toolkit.

The market dimension

This matters to investors in ways that are underappreciated. Geopolitical risk assessment has always been hampered by the same information deficit. Country risk models rely on official statistics, expert surveys, and qualitative political assessments — all of which share the same weakness when the subject is a country whose government controls information flows. The question of whether a regime is stable, whether a population would support or resist a transition, whether a successor government would honor contracts — these are fundamentally questions about individual human preferences in aggregate, and we have historically had almost no way to answer them.

It is worth being precise about what makes AI polling different from other uses of LLMs in political risk analysis. The latter typically work top-down: feed aggregate data — news flows, social media sentiment, trade statistics — into a model and forecast where things are heading. That is useful, but it is the logic of trend extrapolation, not democratic legitimacy. It answers the question a technocrat asks, not the question a democrat asks.

AI polling, done properly, works bottom-up. It simulates individual people — their circumstances, constraints, and preferences — and aggregates the results. This is structurally identical to what an election does. The franchise is not a measurement of social trends; it is a summation of individual choices. A synthetic poll that models each person and counts the results derives its legitimacy from the same source an election does, even if the mechanism is different. One might call it synthetic democratic legitimacy — and in places where no real democratic process is available, synthetic is far better than nothing.

The difference is not merely philosophical. Top-down LLM analysis of authoritarian states will tend to reflect whatever signals the regime allows to surface, weighted by their visibility in the training data. Bottom-up individual simulation, grounded in demographic and behavioral profiles, is harder to manipulate at the source. A government can suppress a protest; it cannot easily falsify the ten thousand small decisions that reveal what its population actually wants.

What Asimov got right

Beneath the surface of his dystopian framing, Asimov understood something important. Multivac was not trying to predict elections. Multivac was trying to find the genuine will of the people — efficiently, rigorously, without the corruption and exclusion that characterized every actual election of his era. The machine asked Norman Muller about the price of eggs not because the answer was important but because his reaction, his hesitation, his physiological response, were data about who he was and what he wanted.

Human survey responses are useful data — probably still the best data we have for many purposes — but they are inputs to a model, not a direct readout of preference. Humans confabulate. They give socially desirable answers. They don’t know their own preferences on complex tradeoffs. A single question about a single candidate captures almost nothing about what a person actually wants from their government. AI, fed not just survey responses but behavioral data, purchasing patterns, social media, and hundreds of other signals, is not trying to predict what someone would say if asked. It is trying to model what they actually prefer. Those are very different problems, and the second is the right one.

Norman Muller left the Multivac facility feeling proud, the story tells us, that through him, Americans had “exercised once again their free, untrammeled franchise.” The irony is that he never voted for anything. Maybe that was the point. The franchise — the right to have your preferences count — doesn’t require the specific 19th-century machinery we’ve inherited. It requires that someone, or something, is actually trying to find out what you think.

We are early in building that something. AI polls applied to American presidential elections are solving a problem that, for all their flaws, traditional polls and prediction markets already handle reasonably well. But applied to the populations of authoritarian states, ungoverned territories, and places where honest preferences cannot be safely expressed — there, AI polling is not a parlor trick. It is asking the right question for the first time. And for investors trying to price political risk in those markets, the first time is long overdue.

Aaron Brown is the author of many books, including The Poker Face of Wall Street. He's a long-time risk manager in the hedge fund space.

