
Criminals Are Outpacing the Police in the Tech Arms Race


On New Year’s morning, American citizens awoke to news of another mass-casualty terror attack. Using a rented pick-up truck, a man plowed through revelers on New Orleans’s Bourbon Street, killing 14 and wounding at least 57 others.

This was no crude act of vehicular terrorism: the attacker employed smart glasses to reconnoiter the area before striking. He had wired his vehicle with improvised explosive devices (IEDs), and strategically planted additional bombs across the neighborhood. The materials used were not black-market contraband but commercially available technology, repurposed for destruction.


The New Orleans attack exemplifies a disturbing trend: the increasing weaponization of everyday technologies. Law-enforcement agencies, historically reactive, are struggling to keep pace with these new threats. To avoid falling behind, they must embrace foresight, reform their decision-making processes, and build new partnerships with researchers and tech firms.

Criminal ingenuity thrives on adaptation, turning benign technological advancements into tools of subversion. This phenomenon—what scholars call “malevolent innovation,” or “the use of creativity for destructive ends”—is not new. From the fertilizer bombs Timothy McVeigh used in Oklahoma City to the pressure-cooker IEDs of the Boston Marathon bombing, widely available resources can be harnessed for terror.

Yet, today’s technological landscape offers an unprecedented level of access and lethality. Consider generative artificial intelligence (AI), a field advancing at breakneck speed. By 2025, estimates suggest that 77 percent of personal devices will incorporate AI in some form, with annual market growth exceeding 120 percent for the foreseeable future.

These innovations—designed to boost productivity—are now available to anyone with an Internet connection. AI use cases span content creation, automation, and countless business and industrial applications, limited only by a user’s imagination. As such, AI is a disruptive technology that shakes up the marketplace by creating new consumers and launching radically new services and trends.

But the same technology that can revolutionize industries can also empower bad actors in ways that defy traditional law-enforcement strategies. A simple prompt to ChatGPT, Gemini, Claude, or another large language model can generate hyper-realistic phishing emails, fabricate deepfake images for blackmail or political sabotage, or even program autonomous drones for illicit surveillance or chemical dispersal. Jailbroken AIs can even help create chemical weapons.

AI makes these criminal tools widely available. As American University’s Audrey Cronin warns in her book Power to the People, “disruptive technology such as AI can often democratize access to lethal capabilities, shifting unprecedented power into the hands of civilians.”

Cronin’s “Lethal Empowerment Theory” identifies 12 characteristics that make technologies especially vulnerable to criminal exploitation: accessibility, affordability, ease of use, portability, concealability, effectiveness, multipurpose functionality, open-market availability, adaptability for unintended uses, integration with other technologies, symbolic resonance, and lack of regulatory oversight. The New Orleans attacker’s method—a combination of vehicular assault, reconnaissance via smart glasses, and IED deployment—embodies this model. If a lone individual can inflict such damage using readily available tools and evade detection, what might a coordinated, motivated group be capable of?

The response from public-safety institutions to these threats remains sluggish. Law-enforcement agencies, traditionally conservative in approach, struggle to embrace innovation. Studies confirm that policing institutions are often resistant to change, burdened by bureaucratic inertia and slow adoption cycles. This inertia, combined with a lack of motivation, capabilities, and resources to fund new police units to combat emerging threats, means that technology is empowering criminals faster than it is empowering law enforcement.

So, what can be done? To counteract malevolent innovation, policing must become anticipatory rather than reactionary. Foresight exercises—simulating future crime trends—should become standard practice. Law enforcement must forge closer partnerships with tech firms, to mitigate emerging threats; and with researchers, to predict, forecast, and identify practices and policies to address them. Legislative bodies must consider proactive regulation of technologies prone to misuse. And crucially, police forces must accelerate their decision-making processes.

Eradicating crime and terror is an unrealistic goal, but mitigating them is not. The challenge lies in shedding institutional inertia and recognizing that innovation is neither inherently good nor inherently bad—only the application matters.

If policing remains reactive, society will continue to be blindsided by malevolent innovation. But if law enforcement evolves alongside the technologies it seeks to regulate, it may yet gain the edge in this ever-escalating arms race. With a greater awareness of the dark side of emerging technology and a commitment by public-safety professionals to adapt, we can inoculate society against the worst scenarios that bad actors can dream up.

Photo by MATTHEW HINTON/AFP via Getty Images

City Journal is a publication of the Manhattan Institute for Policy Research (MI), a free-market think tank.
