
Apocalyptic Predictions About AI Aren’t Based in Reality


Silicon Valley is a place of warped expectations. Founders spin hyperbolic tales about how their companies will change the world. Facebook will bring global harmony. Bitcoin will replace the U.S. dollar. OpenAI will build our new god. Most institutional investors know to take the wildest claims with a grain of salt.

But some of these fantasies have taken on a life of their own, especially the idea that artificial intelligence is racing toward either a utopia or human extinction. A well-funded network of activists and nonprofits, known as the AI safety community, has turned this narrative into a policy agenda, pushing for sweeping regulations—such as licensing regimes and compute restrictions—based on speculative projections.

In truth, the trajectory of AI development is far more mundane. Progress is slowing, not accelerating, and the greater risk may lie in regulating AI as if science fiction were science fact.

Consider, in this context, the AI safety community’s latest marketing project, AI 2027. The report, produced by the AI Futures Project—a nonprofit research group promoted by the AI safety community and funded by megadonor Jaan Tallinn—is a bracing read, full of graphs and dashboards, and written in the style of a doomsday thriller. Their story reads more like confessions from a psychiatric ward than a realistic assessment of data. “What if AI undermines nuclear deterrence? What if it’s so skilled at cyberwarfare that a six-month AI lead is enough to render an opponent blind and defenseless? What if it could orchestrate propaganda campaigns that beat intelligence agencies at their own game? What if some AIs ‘go rogue?’” They’re “just asking questions.”

The problem is that the AI Futures Project is trying to sell the inevitability of these hyperbolic scenarios to the public in order to justify a heavy-handed regulatory agenda. The projections in AI 2027 rest on the assumption that progress in AI will keep compounding at a fixed or faster rate, without interruption. The report’s authors even suggest that, without global controls on AI research, AI systems could cause human extinction: “Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g., preppers in bunkers, sailors on submarines) are mopped up by drones.” It’s a forecast more grounded in severe mental illness than in empirical reality.

This isn’t just an effort to drum up publicity. AI safety advocates have backed up their words with action, spending hundreds of millions of dollars on campaign donations, influencing the Biden Executive Order on AI, and supporting a failed attempt at a sweeping new regulatory agency in California.

The history of innovation offers reason to doubt the safetyists. Every major research field in history has eventually reached a point of diminishing returns, where useful discoveries become increasingly rare. The pattern holds across disciplines—physics, chemistry, and economics—and is visible at a more granular scale, too, from antibiotics to railroads to CPUs. Initial bursts of progress taper off as fields mature, following an economic pattern known as the “S-Curve.”

Despite gains in computing power and economies of scale, fundamental AI research is getting harder, not easier. The core problem is that the remaining unsolved challenges are growing more complex faster than our tools are improving. As economist Robin Hanson explains, “The degree of computing power in hardware and software for each task is distributed in a log normal way. . . . As computing power increases exponentially, you’re basically moving through that log normal distribution in a linear manner.” In other words, we’re progressing through an uneven mix of easier and harder problems and may now be entering the phase in which the remaining problems are the toughest ones. Preliminary evidence suggests that we may have already crossed that threshold—not that progress is completely over or has “hit a wall,” but that it is gradually slowing down.
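As a rough illustration of Hanson’s point, the sketch below uses made-up numbers: a hypothetical lognormal distribution of the compute each task requires and an assumed annual doubling of available compute. None of the parameters come from Hanson or from AI 2027; the code only shows the shape of the dynamic, in which early doublings unlock large shares of tasks while later doublings unlock progressively fewer.

```python
import math

# Hypothetical parameters, chosen for illustration only: the (natural-log)
# mean and spread of compute required per task, a starting compute budget,
# and an assumed doubling of available compute each year.
MU, SIGMA = 10.0, 4.0
START_COMPUTE = math.e ** 4
GROWTH_PER_YEAR = 2.0

def share_of_tasks_solvable(compute: float) -> float:
    """Fraction of tasks whose required compute falls below the current budget
    (the lognormal CDF, written in terms of the error function)."""
    z = (math.log(compute) - MU) / (SIGMA * math.sqrt(2))
    return 0.5 * (1 + math.erf(z))

for year in range(0, 31, 5):
    compute = START_COMPUTE * GROWTH_PER_YEAR ** year
    print(f"year {year:2d}: {share_of_tasks_solvable(compute):5.1%} of tasks within reach")
```

Under these assumptions the solvable share climbs steadily through the middle of the distribution and then flattens, which is exactly the gradual slowdown the lognormal framing predicts.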

The practical consequences of this slowdown are significant. Will Rinehart, an economist at the American Enterprise Institute, estimates that compliance costs for existing AI regulation already add up to $4 billion on the low end and $177 billion on the high end. These estimates do not include the extreme measures to restrict AI research proposed by the AI safety community. Current laws in California and Europe govern technology and data, not AI specifically.

In practice, these laws work as jobs programs for lawyers. Rinehart found that up to 1,140 compliance hours per business may be necessary for a single regulation, the BIS reporting rule. Dozens of these regulations, applying to tens of thousands of companies, compound into billion-dollar compliance industries. The regulations function as an annual tax of hundreds of thousands of dollars, if not millions. They raise fixed costs, preventing new startups from competing and benefiting incumbents.
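To see how per-rule hours turn into billion-dollar totals, here is a back-of-envelope sketch. Only the 1,140-hour figure comes from Rinehart’s analysis; the hourly rate, the number of overlapping rules, and the number of covered firms are illustrative assumptions, not reported figures.

```python
# Back-of-envelope compliance arithmetic; see the assumptions noted below.
HOURS_PER_RULE = 1_140    # Rinehart's high-end estimate for the BIS reporting rule
HOURLY_RATE = 300         # assumed blended rate for compliance staff and counsel, USD
RULES_PER_FIRM = 24       # assumed "dozens" of overlapping regulations
AFFECTED_FIRMS = 20_000   # assumed "tens of thousands" of covered companies

per_firm_annual_cost = HOURS_PER_RULE * HOURLY_RATE * RULES_PER_FIRM
industry_total = per_firm_annual_cost * AFFECTED_FIRMS

print(f"per firm:  ${per_firm_annual_cost:,.0f} per year")   # roughly $8.2 million
print(f"industry:  ${industry_total:,.0f} per year")          # roughly $164 billion
```

With these assumed inputs the total lands near the top of Rinehart’s $4 billion to $177 billion range; changing the assumptions moves the number, but the fixed-cost structure that favors incumbents over startups does not change.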

Still, there are reasons for optimism. One is the emerging bipartisan shift toward a more evidence-based view of AI. The AI safety community suffered a major setback in California when Governor Gavin Newsom vetoed SB 1047, a bill that it had strongly backed. Democrats on the U.S. House Committee on Science, Space, and Technology criticized its proposed regulations as “without a sound evidentiary basis,” and former Speaker Nancy Pelosi later endorsed this view. Bipartisan House and Senate reports are appropriately focused on a more balanced mix of AI opportunities and threats.

The United States is beginning to see through the wild exaggerations about AI, both utopian and apocalyptic. That’s a welcome development. Science fiction is no foundation for serious policymaking.
