When it comes to skiing, if you’re not falling you’re not progressing. Whether on snow or water, falling is your friend. It’s a sign you’re learning while taking risks.
Business is no different. That’s why investors rush away from businesses resting on their laurels, or otherwise remaining in a stationary state. Whatever is in a holding pattern born of a lack of risk-taking will soon enough be obsolete.
This is something to keep in mind given the tens of billions invested in AI concepts in 2023 alone. The vast majority of those investments will go belly up. More realistically, belly-up is the business model of venture capitalists. They enter all-new fields well aware that most of what they commit capital to will die a quick death, only for the very successful investments to more than pay for all the mistaken ones. It’s no insight to say that in technology investing, a lack of wipeouts is the surest sign that your investing style is way too conservative to author any kind of substantial progress.
It’s worth remembering as Stanford’s Fei-Fei Li and John Etchemendy describe 2023 in the Wall Street Journal as “the year that Congress failed to act” on the rise of artificial intelligence. Li and Etchemendy aren’t alone. Subsequent to their Journal piece, Burkean conservative Yuval Levin penned an op-ed for the same publication asserting that “Establishing frameworks for AI policy is important.” Ok, but to quote Levin quoting one of technologist Jim Manzi’s long-ago professors at MIT, “If it works, it’s not AI.”
While it’s certainly true that 100 people would read Manzi’s line 100 different ways, it’s no reach to say that while “AI actually works” (Levin) now, no one really knows what it will be, or what it will accomplish. How we know this can be found in the tens of billions that have found and continue to find their way to Silicon Valley. These capital commitments aren’t a signal that the future is clear, but that it isn’t. And because it isn’t, it’s useful to offer a parallel to the Manzi quote: “If it’s an AI investment, it won’t work.”
This isn’t meant as a knock on AI. Quite the opposite. Anything that will do for humans and think for humans is intensely laudable. Think of it this way: the discovery of coal as a source of power was said to be the equivalent of giving every working human 10-20 full-time assistants in a productivity sense. In which case, imagine how productive and rich we humans will be if increasingly sophisticated machines are doing and thinking for us.
At the same time, what’s transformative or has the potential to be transformative is logically going to be defined by stupendous failure at beginning, middle, and end. As Levin puts it, “The potential of such technology is immense.” But cautious conservative that he is, Levin keeps attaching caveats to his optimism, caveats that sadly involve more government. In his words, the “regulatory challenge” of AI “should suggest an approach that begins with what regulators know, not what they don’t.” Ok, but if investors are much less than sure of what’s ahead (see once again all the investment in search of tangible knowledge that doesn’t yet exist), what could regulators well outside the proverbial arena know, and what could they add?
Sadly, Li and Etchemendy make Levin appear anarchistic by comparison. It’s not just that they want “deep dialogue and partnership” with Washington, D.C.; they oddly lament a “growing gap in the capabilities” between profit-motivated AI activity and government, and more comically, they’re concerned that “academia and the public sector lack the computing power and the resources necessary to achieve cutting-edge breakthroughs in the application of AI.” Not explained by Li and Etchemendy is what “deep dialogue” about an unknowable future could achieve, not to mention why any individual with ambitions of shaping that unknowable future would pursue them inside an entity (government) constrained by the known.
Indeed, it’s too easily forgotten that governments are conservative not in an ideological sense, but in the sense that their vision is generally limited to what is already visible. Yet as the belly-up future of the vast majority of AI investment taking place now reminds us, and should remind Li and Etchemendy, the successful capital commitments of the AI variety will invent a future that something north of 99.99999% of us can’t see, which means the investment is directed at the impossible. The problem is that the impossible is frequently rendered ridiculous, and ridiculous is bad for optics in politics. Not so in technology, which explains the capability disparities.
Which is why Levin’s argument is arguably more worrisome than Li and Etchemendy’s. That the founders of the Stanford Institute for Human-Centered Artificial Intelligence are literally calling for a government-engineered AI “moonshot” means they’re both so far removed from reason that they arguably can’t do any damage. In Levin’s case, while he keeps acknowledging in various ways that it would be a “fool’s game” to create a government agency empowered to shape the AI future, the two-handed policy thinker from AEI uses his other hand to make a case for “existing” government agencies to do what a newly created one can’t. That’s a mistake.
The simple truth is that we don’t know what’s ahead, and neither do Levin, Li and Etchemendy, or the investors with billions’ worth of skin in the game. What we do know is that actual market forces will ruthlessly sort what makes sense from what doesn’t.
All we know is that if investors can’t expertly divine the future, think-tank types and government regulators certainly can’t, nor can they establish “frameworks” for what they surely don’t get. Which means the only real answer is the one that opinion writers like Li et al. loathe: do nothing. How about we, for once, just give anarchy a chance?