It is easy to take for granted that the three-point seatbelt, now standard in every automobile, was invented as recently as 1959. More remarkable still, that innovation came some five decades after Henry Ford introduced the Model T, the first automobile mass-produced for the average American. Strange as it seems today, cars once existed without seatbelts.
It is estimated that the modern seatbelt, designed by Volvo engineer Nils Bohlin, has saved over one million lives. Not surprisingly, traveling by car is significantly safer today as a result of his invention. Following a similar pattern, innovation in air travel has made flying so safe that commercial airline crashes are now vanishingly rare. This is a remarkable tale of human progress, but it is important to remember that it didn't happen overnight.
This brings us to AI. Some alarmists, both in the Biden administration, as exemplified by its recent executive order on AI, and in the private sector, as exemplified by the recent firing of Sam Altman, are attempting to regulate away perceived "risk" in this space.
The paradox here is that the government cannot make AI any safer than it could the automobile. Just as the government could never have dreamed up the Model T, it never could have imagined the three-point seatbelt either. One suspects that if today's AI doomsayers had been around in 1908, they would have been pushing for the government to ban cars "until they are safe."
The lesson is that new technology inherently comes with risk. Through trial and error, however, humans make progress and safety improves. Those expecting the government to eliminate every risk associated with the development of AI are attributing to it a level of wisdom it simply does not have.
What the government can do, however, is slow down the innovation that will create AI's figurative "seatbelt." The Biden administration's recent executive order all but guarantees this outcome by empowering federal bureaucracies to write the rules of the road, forcing companies to devote time and money to placating regulators over perceived risks instead of pursuing novel solutions that actually improve safety.
Of course, the ultimate tragedy we risk by heeding those who wish to slam the brakes on AI is the loss of countless lives to the delayed development of this technology. While federal regulators collect six-figure salaries dreaming up "what ifs" to protect us from, AI could be aiding the development of new medical treatments, improving transportation safety and efficiency, and much more. AI will quite literally be a lifesaver, and on a scale that dwarfs the seatbelt.
This is the future policymakers should focus on. In the meantime, we should embrace the risk that is inevitable with any new technology. The juice is worth the squeeze; don't fear human progress.