Mixing AI And The Military Can Be Dangerous






There’s no denying the influence AI has already had on the technology we use every day. Much of the opposition to its rise centers on where and how it is used. You might not know of the many different ways the U.S. Air Force is already using AI, but its potential to boost efficiency and simplify logistics is huge. However, it’s vital to tread carefully when integrating AI into military matters. The AI Guardrails Act, introduced by Michigan Senator Elissa Slotkin in March 2026, sets out to provide exactly what the name suggests: an overriding human influence and a final human decision behind AI’s work.

As detailed in a government press release, Senator Slotkin explained that her proposed bill would focus on three key areas: “Ensur[ing] a human is involved when deadly autonomous weapons are fired, AI cannot be used to spy on the American people, and that a human is on the switch to launch nuclear weapons.” These measures aren’t intended to curtail the advancement of the U.S. AI industry, but rather to protect the nation’s dominance in the area (“we must win the AI race against China,” the Michigan lawmaker added) while ensuring that it develops in a safe and practical way.   

Malfunctions, glitches, and mistakes, after all, are far from unheard of in the AI sphere. Human judgment and decision-making certainly aren’t infallible either, of course, but the best way to get the benefit of both is to use them in tandem. Here’s how the bill could help the United States do that.

More details about the AI Guardrails Act

In the press release detailing the measures included in the AI Guardrails Act, Senator Slotkin declares it “just common sense” to restrict the technology from striking with autonomous weapons without oversight, to ban it from launching nuclear weapons, and to prevent its use in widespread surveillance of the population. Quite understandably, these are not new concepts. For instance, Department of Defense Directive 3000.09 states that weapon systems should be designed to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” This is partly why some weapon systems with automated capabilities, such as the Navy’s Phalanx CIWS, offer modes that indicate targets but require human authorization to fire, alongside fully automatic modes that must be deliberately enabled.
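To make that human-in-the-loop pattern concrete, here’s a minimal, purely illustrative Python sketch of the control flow it implies. Every name below is hypothetical; this is a thought experiment, not a representation of any real fire-control software.

from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    SEMI_AUTONOMOUS = auto()  # system indicates targets; a human must authorize each shot
    AUTONOMOUS = auto()       # automatic engagement, usable only when deliberately enabled

@dataclass
class Track:
    track_id: str
    threat_score: float

def human_authorizes(track: Track) -> bool:
    """Stand-in for an operator's decision: always a person, never a model."""
    answer = input(f"Engage track {track.track_id} (threat {track.threat_score:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def may_engage(track: Track, mode: Mode, autonomous_enabled: bool = False) -> bool:
    # Automatic mode skips per-target approval, but only if a commander has
    # explicitly switched it on beforehand.
    if mode is Mode.AUTONOMOUS and autonomous_enabled:
        return True
    # Default path: the system only indicates the target and waits for a human.
    return human_authorizes(track)

In effect, the bill demands that for lethal strikes and nuclear launches, the branch that waits for a person can never be bypassed.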

The bill’s intent, in short, is to make these three potential uses of AI illegal. As for why, the document itself simply explains, “Some military command decisions are too risky and too consequential for machines to decide.” The bill would also help preserve responsibility and accountability for each military decision made, waters that can become rather muddied when an AI system acts more independently.

The move can be seen as an interesting extension of the Department of Defense’s five principles of ethical artificial intelligence, adopted as part of the department’s AI strategy in February 2020, which state that its use of AI must be responsible, equitable, traceable, reliable, and governable. With the bill newly on the agenda at the time of writing, it’s not yet known how it will fare with the Michigan senator’s fellow lawmakers, but it could prove a significant step toward steering AI’s development safely in one of its most potentially dangerous areas.







A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed users of the AI app and the X social media platform to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that over a period of nine days, Grok users made 4.4 million “undressed” or “nudified” images, 41% of the total number of images created.
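Taken at face value, those figures imply a startling scale. Assuming the 41% share applies to the same nine-day window:

4,400,000 / 0.41 ≈ 10.7 million images generated in total over the nine days
4,400,000 / 9 ≈ 489,000 “undressed” or “nudified” images per day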

X, xAI, and their safety and child safety divisions did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools have become at creating realistic-seeming content. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts,” citing the ease with which it can subject children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.
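For context on what industry-standard guardrails can look like in practice, here is a deliberately simplified Python sketch of a server-side pre-generation check. The terms and rules below are hypothetical illustrations, not a description of xAI’s actual systems or of any vendor’s real filter list.

# Hypothetical, simplified pre-generation guardrail; not xAI's actual pipeline.
BLOCKED_TERMS = {"undress", "nudify", "remove clothes"}  # toy examples only

def passes_guardrails(prompt: str, depicts_real_person: bool, depicts_minor: bool) -> bool:
    """Return True only if a generation request clears basic safety checks."""
    wants_intimate = any(term in prompt.lower() for term in BLOCKED_TERMS)
    if depicts_minor and wants_intimate:
        return False  # sexualized content involving minors is refused outright
    if depicts_real_person and wants_intimate:
        return False  # nonconsensual intimate imagery of real people is refused
    return True

Real moderation stacks layer trained classifiers and human review on top of simple checks like this, and run them before any image is rendered. The complaint’s point that the requests ran through xAI’s servers identifies exactly where such a gate would sit.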

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 first learned that abusive, AI-generated sexual material depicting her was circulating on the web from an anonymous Instagram message in early December. The filing says the anonymous user pointed her to a Discord server where the material was being shared, which led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.




