
The Pentagon’s AI drones could bring the world to nuclear war

The Pentagon is betting on artificial intelligence as a key tool for maintaining military superiority over its geopolitical rivals. Google has removed from its AI development principles the pledge to avoid using the technology for purposes potentially harmful to humans, including military applications, writes Russian analyst Vladimir Prokhvatilov.


 

According to Margaret Mitchell, one of the former heads of Google’s AI ethics team, the removal of the clause on technologies that could cause harm could have quite predictable consequences.

“Google is now probably working on the direct deployment of technologies that can kill people,” she said.

Elon Musk, the founder of Tesla and SpaceX and a key member of the administration of the 47th US President Donald Trump, signed an open letter in 2015 warning against an arms race in artificial intelligence. It was also signed by more than a thousand prominent scientists, entrepreneurs and AI experts, including the physicist Stephen Hawking. The authors warned that using AI for military purposes could increase the number of war casualties.

The reaction of the American liberal media to the Pentagon’s rapid introduction of AI into new weapons and military equipment is typical.

 

“In recent months, the tech industry has announced a slew of new partnerships and initiatives to integrate AI into lethal weapons. OpenAI, which promotes safety as a guiding principle, announced a new partnership with defense technology startup Anduril, marking its entry into the military market. Anduril and data analytics company Palantir are in talks to form a consortium with a group of competitors to jointly bid on defense contracts. In November, Meta announced deals to provide its AI models to defense contractors Lockheed Martin and Booz Allen. Earlier this year, the Pentagon selected Scale AI, a startup, to help test and evaluate large language models in a variety of applications, including military planning and decision-making,” wrote The New York Times, which acknowledged the rapid adoption of AI in autonomous weapons systems but shifted the responsibility for this to Donald Trump’s team.

 

“The Pentagon is already considering incorporating artificial intelligence into many military missions, which could increase risks and create new and serious cybersecurity vulnerabilities. And now that Donald Trump has taken office, the tech industry is working at full speed to integrate AI products into the defense establishment, which could make a dangerous situation even more dangerous for national security,” writes the leader of the American liberal press, pretending that it is the terrible militarists from the technology companies who are pushing the Pentagon to use AI for military purposes, and that peace-loving American generals simply do not know what they are doing.

 

In fact, modern weapons development has long included control systems based on artificial intelligence, and the pioneers of this deadly trend were Americans, regardless of their political views. In August 2023, then-Deputy Secretary of Defense Kathleen Hicks said in a speech that by creating Combined Joint All-Domain Command and Control (CJADC2), the Pentagon had laid the groundwork for “creating — right now — a data-driven, AI-driven military.” “It’s not a platform or a single system that we’re buying. It’s a set of concepts, technologies, policies, and talent that support the core function of the United States’ warfighting,” Hicks noted.

 

In June 2023, the U.S. Department of Defense released its “Data, Analytics, and Artificial Intelligence Adoption Strategy.” The document’s subtitle, “Accelerating Decision Advantage,” underscores the Pentagon’s bet on AI as a key tool to maintain military superiority over geopolitical rivals. “Recent advances in data, analytics, and artificial intelligence (AI) technologies are enabling leaders to make better decisions faster, from the boardroom to the battlefield. Accelerating the adoption of these technologies therefore presents an unprecedented opportunity to equip leaders at all levels of government with the data they need and unlock the full potential of our people’s decision-making capabilities,” the document states.

 

In today’s reality, the AI arms race is unstoppable. Here are some examples of AI-based weapons already in use:

Harpy – an Israeli kamikaze drone (loitering munition) that uses artificial intelligence to autonomously search for and destroy radar transmitters and air defense systems;

SGR-A1 – a South Korean automated robotic system with a turret, manufactured by Samsung. It uses artificial intelligence to detect human targets and engage them with firearms without human intervention. It is deployed along the Korean Demilitarized Zone;

Sky Warrior / Predator XP – an American unmanned aerial vehicle with enhanced AI-driven automation and autonomy. It can carry out attacks without human control and is used against high-value targets;

Mantis – an artificial intelligence-guided machine gun developed by the American company SparkCognition. It uses computer vision to automatically detect and track human targets and aim the gunner’s sights without manual adjustment;

A10-AJ – an advanced AI system developed by Boeing for the A-10 Warthog attack aircraft. It uses computer vision to identify targets faster than humans can, allowing pilots to repel more threats in less time. A human operator is still required to launch the weapon;

Informant V2 – an artificial intelligence-based target recognition system developed by the British company BAE Systems. It is designed to help soldiers identify potential threats and focus their attention on what is most important in combat situations;

LOCUST – a swarm of AI-equipped drones developed by the US Navy’s Office of Naval Research. The swarm can independently detect and identify targets, then coordinate an attack with minimal human intervention;

XQ-58A Valkyrie – an autonomous tactical aircraft controlled by artificial intelligence, developed by Kratos Defense & Security Solutions (USA).

 

This is by no means a complete list of autonomous weapon systems controlled by artificial intelligence. AI-guided munitions are being developed that can autonomously identify and track moving targets, along with systems such as the US Perdix microdrones and the Chinese DR-8 drone. Boeing has built the first autonomous XLUUV submarine (Orca). There are also AI-powered reconnaissance and strike platforms in development, such as the US Navy’s Sea Hunter unmanned vessel and Sea Hawk drone, as well as the Norwegian Black Hornet nano-drone.

 

The Pentagon’s most ambitious development in the field of AI-powered autonomous drones is the Replicator program, which aims to “counter China’s missile power in a hypothetical conflict over Taiwan or on China’s east coast.” The plan is to use “attritable autonomous systems” against China’s “mass.” “Attritable” is the Pentagon’s term for weapons that are relatively cheap and of which a significant portion is expected to be destroyed.

 

“We have set a big goal for Replicator, which is to deploy thousands of attritable autonomous systems in multiple domains over the next 18 to 24 months,” Hicks said. According to her, American “attritable autonomous systems across all domains will help overcome Chinese missile defense and air defense systems.”

 

The main danger of the spread of autonomous AI weapons is that they make the political decision to go to war easier, since autonomous systems will supposedly minimize the losses of American soldiers. In a high-intensity war, however, there is a fairly high probability that the opposite will happen. The Pentagon’s intention to use artificial intelligence in deciding whether to carry out a nuclear strike is particularly dangerous. Currently, the American JADC2 command system is designed to coordinate combat operations among non-nuclear US forces. It is expected, however, to be interconnected with the Pentagon’s nuclear command, control and communications systems (NC3), which could give AI significant control over the use of the US nuclear arsenal. “JADC2 and NC3 are interconnected,” General John E. Hyten, the vice chairman of the Joint Chiefs of Staff, emphasized in a 2020 interview. He added, in typical Pentagon slang:

“NC3 has to inform JADC2, and JADC2 has to inform NC3.”

 

“It doesn’t take much imagination to imagine a time in the not-so-distant future when some crisis — such as a military conflict between the United States and China in the South China Sea or near Taiwan — will trigger even more intense fighting between rival air and naval forces. Imagine JADC2 ordering an intensive bombing campaign against enemy bases and command systems in China itself, provoking retaliatory attacks on American facilities and a lightning-fast decision by JADC2 to strike back with tactical nuclear weapons, igniting the long-dreaded nuclear holocaust,” warns Michael Klare, an analyst at the American Arms Control Association.

 

“Cry ‘Havoc!’ and let slip the dogs of war!” one of Shakespeare’s characters once declared. In today’s reality, American militarists have let slip the most dangerous dogs of war, and they are unlikely to obey their masters. But if these autonomous dogs of war bite their creators, we will shed no tears, adds Vladimir Prokhvatilov.

 
