French Nuclear Sub Makes History With Its First Successful Navy Drone Recovery







The U.S. and France have pulled off an impressive demonstration of military hardware interoperability. A French nuclear-powered attack submarine launched and recovered an autonomous underwater drone for the first time, all while remaining completely submerged. The trials took place over a few days in March off Toulon, France's primary naval base. Deploying and retrieving your own country's drone from your own sub is already complicated enough; this was pulled off with hardware from two different nations.

Specifically, the operation involved one of France's newer Suffren-class subs working with a U.S.-built Razorback drone. The Razorback looks like a miniature submarine and is essentially a military version of a civilian underwater vehicle called the REMUS 620. While relatively small, it's too large to simply toss overboard: it's roughly 3.2 meters long and weighs about 240 kg. To get it in and out of the water, the sub used a Dry Deck Shelter, a removable compartment bolted onto the back of the sub that's normally used for deploying combat swimmers and their gear.

The U.S. Navy has run similar drone tests before with its Virginia-class submarine USS Delaware, one of the deadliest attack submarines in the fleet, but those involved launching the vehicles out of torpedo tubes. The problem with tubes is that they limit the size and shape of the drone that can be deployed. The shelter approach France used offers more flexibility in what you can load up. The one downside is the complexity of handling, especially during recovery: since the drone cannot yet simply swim back into the compartment on its own, specialist divers are needed to retrieve it.

Why the operation matters

The operation was made possible by an agreement the two navies signed back in December 2021. Called the Strategic Interoperability Framework, it's designed to help the two forces work together on high-end operations, meaning the most complex and demanding military scenarios. As for the point of launching a drone from a sub, there are a number of use cases. In this instance, once the Razorback was deployed, it not only acted as a reconnaissance unit but also collected oceanographic data, which makes it useful for environmental science as well as the usual military operations. It can even be fitted with specialized sonar for detecting mines, a task where new autonomous drones could make sea mines obsolete.

The whole mission was pre-programmed, and the drone headed back to the sub for pickup when it was done. It still needs to be smart enough to handle most of its decisions on its own, because real-time remote control isn't practical underwater: communication still relies on acoustic links, which offer very limited bandwidth. Overall, the drone can stay in the water for over 70 hours, depending on the payload configuration, and it can operate at depths of around 183 meters. Interestingly, the U.S. Navy is also investing in fully autonomous subs, which shows where this trend is heading.
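To make that constraint concrete, here is a minimal, purely illustrative sketch of what a pre-programmed waypoint mission with terse acoustic status reports could look like in principle. Every name, waypoint, and number below is a hypothetical assumption for illustration; none of it is drawn from the Razorback or REMUS 620 software.

```python
# Illustrative sketch only: a toy pre-programmed waypoint mission for an
# autonomous underwater vehicle. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Waypoint:
    x_m: float      # east offset from launch point, meters
    y_m: float      # north offset from launch point, meters
    depth_m: float  # operating depth, meters

# The mission is loaded before launch; no operator input is expected underway.
MISSION = [
    Waypoint(500, 0, 50),     # transit leg
    Waypoint(500, 800, 120),  # survey leg (e.g. sonar / oceanographic sampling)
    Waypoint(0, 0, 30),       # return to rendezvous point for recovery
]

def send_acoustic_status(leg: int, ok: bool) -> None:
    """Stand-in for an acoustic modem: only a few bytes per message, so the
    vehicle reports terse status rather than streaming sensor data."""
    print(f"STATUS leg={leg} ok={int(ok)}")

def run_mission(mission: list[Waypoint]) -> None:
    for i, wp in enumerate(mission):
        # On a real vehicle, onboard guidance and navigation decide how to
        # reach each waypoint; here we simply assume the leg completed.
        reached = True
        send_acoustic_status(i, reached)
    print("Mission complete: holding at rendezvous point for recovery.")

if __name__ == "__main__":
    run_mission(MISSION)
```

The point of the sketch is the shape of the design: the decision-making lives on the vehicle, and only short status messages cross the bandwidth-limited acoustic link.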

As for next steps, a French Navy spokesperson told Naval News that the service wants to move such deployments from tests to actual missions. France also apparently wants to go its own way and has been studying a similar setup with a domestically designed drone, which would mean it won't have to rely on American hardware every time.







A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed users of the AI and of the X social media platform to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that Grok users made 4.4 million "undressed" or "nudified" images over a period of nine days, amounting to 41% of all images created.

X, xAI and its safety and child safety divisions did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that looks realistic. The new complaint compares Grok's self-proclaimed "spicy AI" generation to the "dark arts," citing the ease with which it can subject children to "any pose, however sick, however fetishized, however unlawful."

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed its tech to third-party companies abroad, which sold subscriptions that abusers used to make child sexual abuse images featuring the faces and likenesses of the victims. Because those requests ran through xAI's servers, the complaint argues, the company is liable.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 first learned that abusive, AI-generated sexual material of her was circulating on the web via an anonymous Instagram message in early December. The filing says the anonymous Instagram user pointed her to a Discord server where the material was being shared, which led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.




