Every annual gathering of the technology glitterati at George Gilder’s COSM summit gives us a glimpse of a future that’s exciting for many, even if scary for some. It’s an optimistic vision of the future, quite different from what is so often proposed by myopic forecasters. The one thing, though, that won’t be different in the future, regardless of the technology vision, is the central animating role of energy.
The physics of energy ties everything together. That’s also true for the themes at this year’s COSM summit: the unleashing of AI “into the wild,” the prospects for graphene as an entirely new and revolutionary class of material, and the role of China on the world stage.
It may be obvious, but it’s worth stating: not just society but life as we know it, and indeed the very universe, would not exist but for energy. Not to wax philosophical—but at a Gilder COSM it’s impossible to avoid—all possible futures happen at the intersection of the three core features of reality: information, atoms, and energy. As George has said before, the only thing different about our time and that of the Neanderthals is what we know. The building blocks of everything we have existed back then too. Today we ‘just’ have far more information about exactly the same atoms and forces that have always existed.
It is our ability to pursue an unlimited increase in the amount of information about how to rearrange nature’s atoms in unique, even magical, ways that enables humanity to create all present and future products and services. But acquiring and processing that information, as well as rearranging the atoms using it, always requires putting energy to work. Energy is consumed by every invention, product, and, derivatively, service that makes life interesting, safe, convenient, enjoyable, even beautiful.
And, over all of history, innovators have found vastly more ways to invent things that consume energy than ways to produce it. The invention of materials like alloys, polymers, pharmaceuticals, or single-crystal silicon led to new energy demands for their fabrication. Similarly, the invention of machines built from those materials, like the car, the airplane, and the computer, led to new energy demands.
The ineluctable energetics of nature are epitomized by the fact that our information machines themselves have become monstrously energy hungry. All software, even virtual reality itself, requires the reality of electricity-consuming machines to instantiate logic. That may sound self-evident, but the consequence is that today, roughly speaking, the global cloud uses as much energy as global aviation. And the former is growing far faster than the latter.
Which brings us to the AI at the epicenter of this Summit: an entirely new way to put silicon engines to work. While AI has been around for quite a while, November 30, 2022, the day ChatGPT went live to the public, will be remembered as the date AI was released into the wild.
AI is the most energy-intensive use of silicon ever devised. It’s analogous, in energy terms, to going from the age of steamships to jet aircraft. The latter unleashed global personal travel because it was so much better, more convenient, and thus more productive, which is to say it preserved the most precious commodity in the universe: our time. Of course, flying is a dramatically more energy-intense way to move anything. So too AI.
AI of course entails both the training, or so-called machine-learning, phase and then the inference phase, which puts the learned knowledge to use. Both training and inference consume quantities of energy that are scandalous, at least for those obsessed with reducing society’s energy use. For example, just one trivial machine-learning algorithm that, a few years ago, was trained to solve one puzzle, Rubik’s cube, consumed enough electricity to drive a Tesla a million miles. And training, for many real-world tasks, is not a one-time occurrence. Then, once trained, comes the inference phase which, while operationally less energy-intensive than training, is performed far more often, sometimes continuously, and leads to more total energy use than the training itself. Think of that as comparing the energy needed to fabricate the aluminum and build an aircraft with the fuel needed to fly it.
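A back-of-envelope sketch puts that training example in familiar units. The per-mile figure below is an assumed round number (roughly a quarter kilowatt-hour per mile for a Tesla-class EV), not a number from the text; only the “million miles” comes from the example above.

```python
# Rough scale check for the Rubik's-cube training example: convert
# "a million EV miles" into megawatt-hours of electricity.
# KWH_PER_EV_MILE is an assumption, not a figure from the text.

KWH_PER_EV_MILE = 0.25   # assumed EV consumption per mile
EV_MILES = 1_000_000     # "a million miles," per the text

training_energy_kwh = KWH_PER_EV_MILE * EV_MILES
training_energy_mwh = training_energy_kwh / 1_000

print(f"~{training_energy_mwh:,.0f} MWh to train one puzzle-solving model")
```

Under these assumptions, that one toy training run works out to a few hundred megawatt-hours, roughly a household-neighborhood’s annual electricity use, for a single puzzle.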
The full energy-consuming impact of AI has yet to be seen because it’s early days for the software and hardware. We’re at a time equivalent to conventional computing, say, circa 1980. The dystopians fear AI, but it’s an exciting and consequential new tool, one that will unleash all manner of innovations, not just self-driving cars and robots, but also both greater productivity and new means of basic discovery, as well as many things unimagined. And the infrastructure that will be built and powered to democratize AI and to make it useful will eat the grid, to adopt a phrase from Andreessen Horowitz.
At a recent gathering of electric utility executives, Elon Musk gently chastised them for underestimating the magnitude of the electricity demand that is coming. He wasn’t talking about powering EVs but, in the main, about powering AI. For context, today’s global cloud consumes 10-fold more electricity than all the world’s EVs combined. Even if EV adoption expands at the rate the bulls expect, the cloud will still outpace that electricity demand, by a lot, especially now that AI hardware is being added rapidly to cloud infrastructure.
The standard rejoinder to observations about the energy appetite of computers, and now especially AI, is to assert that innovators will make silicon technology more efficient. Of course they will. But efficiency doesn’t slow energy demand growth, it propels it. That reality is called Jevons Paradox. Information systems in general are the most deliciously clear example of that so-called paradox.
Consider that over the past 60 years, the energy efficiency of logic engines has improved by over a billionfold. And that’s precisely why there are billions of smartphones and thousands of warehouse-scale datacenters today. At the computing energy efficiency of the 1980s, one smartphone would use more electricity than the building we’re in right now, and a single datacenter would require the entire US grid. In other words, if it were not for the astonishing gains in compute energy efficiency, neither the smartphone era nor the cloud would have happened.
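It’s worth seeing what “a billionfold” means in practice. The modern-smartphone draw below (~0.01 kWh per day) is an assumed round number for illustration, not a figure from the text; only the billionfold gain is.

```python
# What reversing the billionfold efficiency gain would imply: run a
# modern smartphone workload on 1980s-efficiency hardware.
# PHONE_KWH_PER_DAY is an assumed round number, not from the text.

EFFICIENCY_GAIN = 1e9        # ~billionfold over 60 years, per the text
PHONE_KWH_PER_DAY = 0.01     # assumed daily draw of a modern smartphone

kwh_per_day_1980s = PHONE_KWH_PER_DAY * EFFICIENCY_GAIN
mw_continuous = kwh_per_day_1980s / 24 / 1_000   # kWh/day -> MW continuous

print(f"One smartphone at 1980s efficiency: ~{mw_continuous:,.0f} MW of continuous demand")
```

With those round inputs, a single phone’s workload would demand hundreds of megawatts continuously, which is to say the output of a power plant, far more than any one building.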
Now comes something truly remarkable, discovered only two decades ago by a happy accident: an entirely new, game-changing class of material called graphene, made of nearly unimaginably thin sheets of pure carbon just a few layers of atoms thick and possessing seemingly magical properties. Graphene is beginning to see application in a handful of commercial products. One possibility would use graphene as a radically more efficient base material to replace silicon in computer chips. Count on that to accelerate Jevons Paradox.
Graphene has a suite of other surreal properties relevant to structures and to biology. In one specific form it is vastly stronger than steel. In another formulation it’s biocompatible and can facilitate what was formerly impossible: the regrowth of nerves. And graphene is only one, though possibly the most remarkable, of an array of new classes of materials emerging from research labs.
But back to our theme: fabricating all materials entails energy. In the centuries that predate modernity, nearly everything was built from a small suite of materials, mainly stone, wood, and animal parts. The materials of our age require, on average, more than an order of magnitude more energy per kilogram to produce. Move from wood to polymers—the latter ubiquitous in medical domains and far more useful than wood there—and there’s a 10-fold increase in the energy cost per kilogram produced. Use aluminum instead of polymers and there’s another 10-fold rise in the energy cost per kilogram. And semiconductor-grade silicon is 30-fold more energy-intense than aluminum; a kilogram of silicon takes over 100 times more energy to produce than a kilogram of steel. And the world produces silicon by the kiloton—in energy terms, equal to producing megatons of steel—not just for computer chips but also for solar cells.
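The per-kilogram ladder described above can be laid out explicitly, with wood normalized to 1. The multipliers (10x, 10x, 30x) are taken directly from the text.

```python
# The relative per-kilogram energy cost of materials, using the
# multipliers from the text, with wood normalized to 1.

energy_per_kg = {"wood": 1}
energy_per_kg["polymer"] = energy_per_kg["wood"] * 10       # ~10x wood
energy_per_kg["aluminum"] = energy_per_kg["polymer"] * 10   # ~10x polymers
energy_per_kg["silicon"] = energy_per_kg["aluminum"] * 30   # ~30x aluminum

for material, multiple in energy_per_kg.items():
    print(f"{material:>8}: {multiple:>5,}x the energy per kg of wood")
```

On this ladder, semiconductor-grade silicon lands at roughly 3,000 times wood; combined with the text’s separate claim that silicon takes over 100 times the energy of steel, steel would sit somewhere around 30x wood on the same scale.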
As for fabricating graphene, we’re in the early days of figuring out how to produce it at scale. George Gilder has proposed that graphene fabrication may be in sight of an “aluminum moment”: the point in history, back in 1886, when innovators found a feasible high-volume means to affordably produce what had been an intriguing but impossibly expensive material. Prior to that, pure aluminum was more expensive than gold.
But, as the technical literature shows, the energetics of producing graphene are more similar to silicon than to aluminum. So I’d propose that graphene is on the cusp not of an “aluminum moment” but of a “Czochralski moment.” The Polish metallurgist Jan Czochralski discovered by accident, in 1916, how to fabricate single-crystal silicon from a molten vat. That discovery led directly to the commercial process used today, perfected by Bell Labs in 1949, 33 years after Czochralski’s accidental discovery. Without single-crystal silicon, there’d be no silicon computer age. If graphene takes the same amount of time from accidental discovery to a viable commercial process, we’ve another decade to wait. But today, in a virtuous circle of materials, machines, and information, the AI-infused supercomputers of our era will likely help radically compress that timeline.
The company, and the country, that first gets to commercially viable graphene at scale will have some real advantages. Which brings us to the energy relevance of China, the third of the three themes at COSM 2023.
Consider the state of affairs for a variety of the key materials that are energy-intensive to produce and that are, collaterally, critical to both energy-producing and energy-using machines.
China produces over 60% of the world’s aluminum; refines over half of the world’s copper, the element that is the keystone of 90% of all things electrical; refines 90% of the world’s rare earth elements, vital for many electric motors and generators and irreplaceable in many high-tech applications, including solar cells and wind generators; refines 90% of the globe’s gallium, the element that makes possible the magical semiconductor gallium arsenide used to make many tech things, not least lasers and LEDs; refines 60% of the world’s lithium and 80% of the world’s graphite used in all lithium batteries; and supplies 50% to 90% of many key chemical formulations and polymer parts needed to fabricate lithium batteries. There’s more; but you get the point.
China is not afraid of energy-intensive materials-fabricating industries and two decades ago set out to become a dominant supplier of such. That leadership emerged at the intersection of three kinds of policies: first, those that encourage and provide incentives for engineers to study old-fashioned basic chemical, electrical, and material sciences, which are, at best, second- or third-tier priorities here; second, policies that favor and accelerate, rather than oppose and impede as we do in the US, industries’ ability to build massive chemical-intensive and energy-intensive facilities; and third, policies that ensure a reliable supply of low-cost energy to power those industrial facilities. The latter, in China, means a grid that is two-thirds coal-fired.
Now comes the U.S. with its Inflation Reduction Act, which constitutes the biggest package of industrial-policy spending in America’s history. There’s no secret as to the specific purpose of most of the IRA’s spending: it’s in service of reducing this nation’s emissions of carbon dioxide by stimulating an energy transition away from the use of hydrocarbons. Regardless of what you think you know about climate change and carbon dioxide, there are two sets of facts to keep in mind at the intersection of technology, policy, and energy.
The first fact set:
The roughly $2 trillion yet to be deployed by the IRA will, if the government’s forecasts are right, reduce U.S. CO2 emissions by 1 gigaton a year. Set aside the inflationary nature of that spending; relevant to that theoretical emissions outcome is the practical matter that China is still building yet more coal plants, and President Xi has made it clear those will be built. That means China will continue to enjoy an industrial energy-cost advantage in energy-intensive industries for decades yet. It also means that the completion of just those additional coal plants will increase that nation’s already highest-in-the-world CO2 emissions by nearly another 2 gigatons a year. Meanwhile, to theoretically eliminate 1 gigaton here, a big share of US taxpayers’ $2 trillion will be used to buy from China the critical energy materials needed to build the wind, solar, and battery hardware envisioned by the IRA’s goals.
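The net arithmetic implied by that paragraph is stark enough to write out. Both figures below come from the text itself.

```python
# Net global emissions arithmetic: the IRA's forecast 1 Gt/yr US
# reduction set against ~2 Gt/yr of added Chinese coal emissions.
# Both figures are taken from the text.

US_REDUCTION_GT = 1.0     # theoretical annual US reduction, per the text
CHINA_ADDITION_GT = 2.0   # added annual emissions from new coal plants

net_change_gt = CHINA_ADDITION_GT - US_REDUCTION_GT
print(f"Net global change: +{net_change_gt:.0f} Gt of CO2 per year")
```

Even if the forecast reduction is fully realized, global emissions on these two line items alone still rise by about a gigaton a year.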
And the second fact set to keep in mind:
Since China and energy materials are unavoidably at the epicenter of energy-transition goals, it bears noting where the world is today, after two decades and at least $5 trillion of global spending on wind, solar, and similar efforts to avoid the hydrocarbons: coal, oil, and natural gas.
That spending did decrease the share of energy supplied by hydrocarbons, but by just two percentage points. Today hydrocarbons still supply 82% of global energy. And the combined contribution from solar and wind hardware today supplies under 4% of global energy. For context: burning wood still supplies 10% of global energy. Meanwhile, the absolute quantity, not share, of hydrocarbons consumed by the world over the past two decades has increased by an amount equal, in energy-equivalent terms, to adding six Saudi Arabias’ worth of oil output.
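The apparent tension between a falling share and rising absolute use is simple arithmetic when total demand grows. The global-energy totals and the Saudi oil-output figure below are assumed round numbers (in exajoules), not from the text; only the two-point share shift from 84% to 82% is.

```python
# How a two-point share decline coexists with large absolute growth.
# GLOBAL_EJ_* and SAUDI_OIL_EJ are assumed round figures for illustration;
# only the 84% -> 82% hydrocarbon share shift comes from the text.

GLOBAL_EJ_THEN = 430    # assumed world energy use two decades ago
GLOBAL_EJ_NOW = 620     # assumed world energy use today
SHARE_THEN = 0.84       # hydrocarbon share, two points above today's
SHARE_NOW = 0.82        # hydrocarbon share today, per the text

growth_ej = GLOBAL_EJ_NOW * SHARE_NOW - GLOBAL_EJ_THEN * SHARE_THEN
SAUDI_OIL_EJ = 21       # assumed: ~10 million barrels per day of oil output

print(f"Hydrocarbon growth: ~{growth_ej:.0f} EJ/yr, roughly "
      f"{growth_ej / SAUDI_OIL_EJ:.0f} Saudi Arabias of oil output")
```

With these assumed round inputs the result lands in the neighborhood of the text’s “six Saudi Arabias”: the share fell, yet absolute hydrocarbon use grew by well over a hundred exajoules a year.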
That “energy transition” spending has yielded anemic outcomes so far is self-evident in the data. That we could, and likely will, spend a lot more to deploy non-hydrocarbon energy machines is self-evident in our politics. But one cannot avoid either the physics of energy materials or the fact of China as a dominant supplier of those materials.
As the COSM 2023 Summit has made clear, there are some truly astounding, transformational innovations emerging in both computer and materials domains. But those revolutions epitomize, again, the proliferation of more innovations that consume energy, not those that produce energy.
Consequential changes in the nature and scale of machines that produce energy await still unknowable breakthroughs, epiphanies, or future accidental discoveries. Such progress is inevitable, but, to quote Bill Gates, that kind of revolutionary progress has “no predictor function.”
But one can predict, with confidence, that the AI infrastructure will proliferate, and that someone will someday soon figure out how to produce graphene at scale. One might also predict—perhaps it’s more of a hope—some semblance of sanity returning to industrial policy domains given the state of geopolitics.