Philippine Navy Weapon Disables Enemies With ‘Extremely Loud Disturbing Sound’

Long Range Acoustic Devices (LRADs) are a class of sonic weapons popular with law enforcement agencies, and the Philippines' defense establishment is now fielding them at sea. The nation's navy has installed a pair of Multirole Acoustic Stabilized Systems (MASS) atop the BRP Rajah Sulayman (PS-20), an offshore patrol vessel (OPV), combining noise and light as a deterrent against hostile craft.

This marks the first time the Philippines has deployed sonic weapons in the South China Sea, a long-contested maritime territory with a history of skirmishes against superior naval powers. Notably, China was criticized for using an LRAD against a Philippine Coast Guard (PCG) vessel as recently as 2025. The Philippines now appears to be readying countermeasures of its own, and according to Naval News, the country plans to expand the use of such sound- and laser-based weapons across its South China Sea patrol fleet. The MASS kit the country has acquired is actually a combined system that uses lasers and bright light in tandem with piercing sound blasts to deter adversaries. Predominantly, though, LRADs have been associated with local law enforcement, which uses them to manage protests and disperse rioters.

What can this sound weapon accomplish?

The Multirole Acoustic Stabilized System (MASS) SX-424(V)122 installed on the 2,400-ton vessel is supplied by an Italian company. Per the product description, it comes equipped with a video camera system to spot threats and sound equipment to blast a warning. If the threat escalates, the system can emit disorientingly loud noise while also directing light and laser beams at the target. Sitep Italia, which makes the LW MASS CS-424 system, notes on its website that the device produces "an extremely loud disturbing sound, high-intensity light, and a laser dazzler" when an approaching threat is detected.

The sonic weapon can project sound to a range of up to 3,000 meters for communication and warning. Within 2,000 meters, the high sound levels also serve as a deterrent, and if an adversary comes within 125 meters, the "pain barrier" kicks in. Sonic weapons like the LW MASS CS-424 aren't blunt instruments; their impact depends on factors such as the frequency of the noise, its raw power, and the exposure time. According to research published in the Chinese Journal of Traumatology, low-level exposure can cause headaches, fatigue, and swallowing difficulties.

At close quarters, the sound can cause immense pain and permanent hearing damage, especially within a 15-meter radius. Organizations such as Physicians for Human Rights (PHR) and the International Network of Civil Liberties Organizations (INCLO) have questioned the use of LRADs and advised careful restraint. As far as the deployment of a multi-role sonic weapon on a naval vessel goes, it's an early deterrence tactic meant to head off a more serious confrontation.

A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed X social media users to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that over a period of nine days, Grok users made 4.4 million "undressed" or "nudified" images, 41% of the total number of images created.

X, xAI and its safety and child safety divisions did not immediately respond to a request for comment.

The wave of "undressed" images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts” with its ease of subjecting children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 was first alerted to the fact that abusive, AI-generated sexual material of her was circulating on the web by an anonymous Instagram message in early December. The filing says she was told about a Discord server by the anonymous Instagram user, where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.
