Hacking Traffic Cameras & Cyberpunk Surveillance Ops






There is no denying that modern warfare has become heavily technology-dependent. Recent conflicts such as the Russo-Ukrainian war and the ongoing Iran war have amply demonstrated this new reality. Often, when we talk about technology in warfare, the discussion veers towards missiles, fighter jets, attack helicopters, and aircraft carriers. However, what is usually underreported are the various advanced ways in which countries conduct covert operations and gather intelligence. In the case of the current Iran war — which began on February 28, 2026 — it was not just the fighter jets, Tomahawk missiles, and bunker-buster JDAMs that were doing the heavy lifting. To ensure that all this advanced hardware hits the intended target, both Israel and the U.S. depended on data from a comprehensive intelligence gathering mechanism they had put together long before the conflict even began.

This process was best exemplified in the assassination of Iran’s former Supreme Leader Ayatollah Khamenei, who was killed in an airstrike on the very first day of the conflict. To obtain the Iranian leader’s accurate, most up-to-date location, Israel had hacked into the Iranian traffic camera system years earlier. Through years of continuous monitoring, the Israelis worked out the movement patterns of Iran’s top leader, and on the day of the operation, they simply used the live location data to ensure a targeted strike.

What is remarkable here is that the operation to assassinate Khamenei did not depend on visual intelligence alone. Aside from traffic cameras, Israel used multiple sources of data — including human intelligence (read: spies), intercepted communications, and satellite imagery — to pinpoint a 14-digit grid coordinate that revealed Khamenei’s location. Once the location was confirmed, the rest of the mission was carried out by Israeli warplanes, according to a CNN report.

Cyberattack on Iran and the subsequent internet blackout

On the day Israel carried out the operation that led to the death of Khamenei, Iran also faced a massive internet blackout. According to JPost, the airstrikes were accompanied by cyberattacks that hit Iran’s critical digital infrastructure. Most of the country’s communications systems were rendered nonfunctional, and Iranian leaders were reportedly unable to communicate with one another. The Israelis also targeted Iranian websites, including most of the country’s news sites, and knocked the website of the Iranian news agency IRNA offline.

In addition to these targeted attacks, the blackout also led to the failure of digital services across most Iranian cities. Local apps, digitized government services, and digital banking platforms all stopped functioning. Following the cyberattacks, the Iranian regime imposed a total internet blackout across Iran, which is still ongoing.

Aside from the cyberattacks, the Israel Defense Forces (IDF) also carried out what it referred to as a “wide-scale strike” targeting the headquarters of Iran’s Islamic Revolutionary Guard Corps (IRGC). This headquarters also reportedly housed the nerve center of the IRGC’s “cyber and electronic headquarters” and its “Intelligence Directorate”. This is the same wing that was accused of engaging in several cyber operations against the U.S., including a hack-and-leak attack that reportedly targeted Donald Trump’s 2024 presidential campaign.

While Israel’s hacking of Iran’s traffic camera network is interesting in itself, Israel is not the only country involved in such operations. Recently, India also began a crackdown on Chinese-origin CCTV cameras after it was discovered that Pakistan had gained access to real-time security footage from CCTVs installed at sensitive locations during the Indo-Pakistan conflict of 2025.







A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed Grok AI and X social media users to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that over a nine-day period, Grok users made 4.4 million “undressed” or “nudified” images, amounting to 41% of all images created.

X, xAI, and their safety and child-safety divisions did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts” with its ease of subjecting children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms used to protect the teens’ identities. Jane Doe 1 first learned, via an anonymous Instagram message in early December, that abusive, AI-generated sexual material of her was circulating on the web. The filing says the anonymous Instagram user told her about a Discord server where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.




