President Trump’s historic dismantling of the Iranian regime is unfolding at blinding speed, showcasing the next generation of AI warfare. Far from replacing human judgment, the US military’s use of AI in Iran has focused in large measure on solving one of war’s oldest and most vexing problems: processing vast troves of intelligence. AI is helping commanders sharpen target selection, sift intercepted communications, conduct battle-damage assessments and shorten the time needed to identify and eliminate terrorist targets, all while reducing collateral damage.

In the exclusive excerpt below from the new book “Code Red: The Left, the Right, China, and the Race to Control AI” (HarperCollins), author Wynton Hall reveals how AI warfare and autonomous weapons are strengthening America’s ability to achieve peace through strength in ways that are reshaping warfare in the AI era.

In March 2020, on a Libyan battlefield, civilization may have crossed an ominous threshold. Turkish-made autonomous drones reportedly “hunted down and . . . engaged” retreating forces loyal to General Khalifa Haftar with no human guidance.

According to a UN-commissioned report, those lethal autonomous weapons were “programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.” 

That was no theoretical scenario devised by military analysts or ethicists. Nor was it a scene from a Hollywood sci-fi thriller about rogue killer robots. It was a real occurrence, one in which machines selected and engaged human targets independently.

The weapon in question was not some shoddy hobby drone with a duct-taped camera; it was the Kargu-2, a quadcopter loitering munition manufactured by the Turkish defense firm STM. Kargu-2 supports multiple warhead configurations, offering precision strikes via autonomous navigation and flight control. It also features an automatic target recognition system with day-and-night capabilities.

In the words of West Point researchers, it is “designed to be an anti-personnel weapon capable of selecting and engaging human targets based on machine-learning object classification.” 

According to STM’s CEO, Murat İkinci, the Kargu-2 is equipped with facial recognition technology and can operate in swarms of up to twenty for coordinated attacks.

While it remains unclear whether the Libya engagement claimed any lives, drone warfare expert Zachary Kallenborn, writing in the Bulletin of the Atomic Scientists, suggested that the UN report “heavily implies” that it did. If so, he said, it marks “a new chapter in autonomous weapons, one in which they are used to fight and kill human beings based on artificial intelligence.” 

If the Libyan case offered a glimpse of autonomous warfare’s potential, Israel’s response following the October 7, 2023, mass slaughter by Hamas of 1,200 innocents demonstrated the real-world, near-future capabilities of AI on the battlefield. The Israel Defense Forces (IDF) deployed three AI systems — The Gospel, Lavender, and Where’s Daddy? — that collectively identified terrorist targets for expedited elimination.

The Gospel assembled lists of likely terrorist buildings. Lavender sifted a mountain of surveillance data, such as images and phone records, to build and rank the kill list. The menacingly named Where’s Daddy? used cellphone signals to track enemies to their homes as a way to confirm their identity before aerial strikes pulverized them.

Together, these three AI systems dramatically accelerated target acquisition and kill chain protocols. As former IDF legal adviser Tal Mimran put it, previously “you needed a team of around 20 intelligence officers to work for around 250 days to gather something between 200 to 250 targets. Today, the AI will do that in a week.” 

This reality underscores another fundamental disconnect between the Left and the Right. Because the Left leans toward materialism and utopianism, left-wingers often assume that conflicts are best resolved by communication, harmony, and disarmament.

If we all just try a little harder, we can make Heaven on Earth. If there ever is real evil, the Left tends to think that it’s our own fault.

The Right assumes the opposite, maintaining a constant skepticism toward powerful people because right-wingers believe that evil exists and won’t be resolved by human means. Despite this, we also think that America represents the good guys. Because we do.

Throughout this chapter we’ll see how the Left’s constant instinct to see us as the bad guys and downplay enemy threats could hobble American readiness in the coming world of AI-empowered terrorism.

THE NEW AI BATTLEFIELD: CONTEXT AND STAKES

The AI revolution’s all-encompassing effects will reach beyond everyday concerns such as jobs and education; they will also shape how the United States wages war and maintains its national security.

The United States has always depended on cutting-edge military technology to defeat adversaries and defend its citizens. As weapons evolve, so must our military and intelligence operations, both to beat hostile nations armed with AI weaponry and to build next-generation systems that will strengthen American superiority on the battlefields of the future.

Recent AI spending increases underscore the urgency. In a single year, federal AI-related contracts rocketed by nearly 1,200 percent, from $355 million in 2022 to $4.6 billion in 2023.

The spike was overwhelmingly driven by increased Department of Defense (DOD) spending. Pentagon AI contracts alone more than doubled to over $550 million in the same period.

This surge doesn’t mean that traditional weaponry such as tanks, fighter jets, and naval destroyers will be scrapped. Rather, it reveals how AI is being integrated into current and future defense programs to maintain battlefield dominance. As the Israel example reveals, some uses of AI and machine learning are focused on helping soldiers quickly sift through massive amounts of data and information to find intelligence needles in the proverbial haystack. Other uses directly pertain to autonomous weaponry.

With AI’s rapid adoption around the world, US defense planners know that our enemies are gaining access to deadly AI weapons and surveillance systems. And as with most technologies, the financial costs continue to drop, giving rogue nations and terrorists increasingly affordable and unprecedented lethality.

Take the Bullfrog, for instance, an AI-enabled autonomous robotic gun system with a 7.62-mm M240 machine gun mounted on a smart turret.

The AI machine gun can engage drone targets with small-arms firepower and accuracy superior to those of the average service member. Another benefit: its relatively low cost. Yet that same affordability means that similar AI systems will increasingly fall into the wrong hands.

Bottom line: The democratization of lethal AI weaponry means that technology that was once the exclusive domain of superpowers will increasingly be available to a host of actors, both state and nonstate.

Leaders on both sides of the aisle seem to grasp the gravity of the moment. Senator Mark Warner (D-VA) warned that the proliferation of such technologies has “dramatically lowered the barrier of entry for foreign governments to apply these tools to their own military and intelligence domains.” Even more concerning, he notes, many of these AI innovations are developed and released by American companies, only to be repurposed by foreign militaries and intelligence services. Reverse engineering of US weaponry is hardly new. But as AI-powered systems become cheaper and more deadly, the cost in human carnage could rise significantly.

This reality means that AI will dramatically affect how we defend our nation and how we fight and win wars. Vladimir Putin, hardly a friend of US interests, openly declared, “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” 

Conservatives have always understood that peace comes through strength, not weakness. From President Ronald Reagan to Trump, this principle has guided national security policy for a reason: it works.

As Reagan put it, “We know only too well that war comes not when the forces of freedom are strong, but when they are weak. It is then that tyrants are tempted.”

This wisdom applies perfectly to the AI threat vectors we now face. As former UK prime minister Margaret Thatcher reminded the world in her eulogy for Reagan, the Gipper’s decision to rebuild the US military gave our nation the technological superiority required to win the Cold War “without firing a shot.”

President Trump similarly emphasized military strength as the pathway to peace. As he said in his first farewell address, “I am especially proud to be the first president in decades who has started no new wars.”

The lesson is clear: Equipping our soldiers, sailors, airmen, marines, and guardians with world-class training and weapons redounds to peace.

We must apply that same determination in adopting AI for gathering intelligence, bolstering cybersecurity, increasing battlefield readiness, and combating enemy AI weapons attacks as these systems become cheaper, more powerful, and widely accessible. Specifically, US leaders must confront at least four core national security challenges in the AI age:

1. The autonomous weapons race

2. The rise of AI-powered terrorism

3. The dangerous gap between Silicon Valley innovation and our national security needs

4. The AI alignment problem and containment risk

These are hardly the only threat vectors AI poses. But how we handle them will enormously influence our ability to maintain the strength that produces peace. If we lose our military edge, we will invite a chaotic threat matrix marked by low-cost, high-carnage AI-powered attacks.

Excerpted from “CODE RED: The Left, the Right, China, and the Race to Control AI” by Wynton Hall. Copyright 2026 by Wynton Hall. Published with permission from Broadside Books and HarperCollins Publishers.
