I Downloaded (and Deleted) the White House App So You Don’t Have To. It’s a Hot Mess


The White House has a new app, and what’s hiding in the app’s framework is a privacy and security nightmare. 

The app, literally called “The White House,” is designed to “deliver unparalleled access to the Trump administration,” according to the White House’s announcement on Friday. But it may come at the cost of your personal data, online security and privacy. 

The app is filled with data sharing and security concerns, including location tracking. Security researchers who looked under the hood reported finding lax protections and sketchy features. The White House didn’t immediately respond to a request for comment.

The app is available now for both Android and iOS users, but should you download it? I did (briefly) so you don’t have to. Here’s what’s in the app and why experts are outraged. 

What’s in The White House App?

The app opens with music and a brief collage video of President Donald Trump. It has pages on affordability, including the prices of things like eggs and milk (but not gas). There’s an overtime calculator. And there are links to articles from Trump’s favored news outlets, like Fox News and Newsmax, along with White House press releases.

The app also features livestreams and videos of press briefings, links to the White House’s social feeds and photos of the president.

Why I deleted The White House app so fast 

Behind all those tabs are hair-raising privacy and security issues that have the internet and experts alarmed. 

One X user, @Thereallo1026, decompiled the White House app and blogged about it, reporting that the Android app tracks your location as often as every 4.5 minutes and shares a lot of information, like your location, notifications and perhaps even your phone number, with a third-party server. 

Another red flag is that the code for YouTube embeds comes from a personal GitHub account. Thereallo said that if that GitHub account gets compromised, it could affect every user of the White House’s app. 

Another cybersecurity researcher, Atomic Computer Services, posted similar concerns about the iOS app. The researcher found that the app reported to the App Store that it did not collect location data, when in fact it included the capability to do GPS tracking. Whether that tracking actually happens is unclear, but the code is there, Atomic Computer said.
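The mismatch Atomic Computer described boils down to comparing what an app declares on its App Store privacy label against what its binary can actually do, which is essentially a set difference. Here is a hypothetical sketch; the category names are invented for illustration and are not the actual App Store labels or findings.

```python
# Hypothetical illustration of a privacy-label audit: compare an app's
# declared data-collection categories against capabilities found in its
# binary. Category names are invented for this sketch.

def undeclared_capabilities(declared: set[str], found_in_binary: set[str]) -> set[str]:
    """Capabilities present in the binary but missing from the privacy label."""
    return found_in_binary - declared

declared = {"identifiers", "usage data"}
found = {"identifiers", "usage data", "precise location"}
print(sorted(undeclared_capabilities(declared, found)))  # ['precise location']
```

Anything left over after the subtraction is a capability the label never mentioned, which is the pattern the researcher flagged.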

Other concerns identified by Atomic Computer included the removal of privacy consent banners from third-party content viewed in the app and minimal security protections. “We’ve audited apps for startups with three employees that had better security than this,” Atomic Computer wrote.

Government-sponsored informational apps are commonplace, but this one poses significant risks, experts said. A spokesperson for the Center for Democracy and Technology, which advocates for transparency and privacy in government technology, told CNET that “mobile apps can be a helpful tool for making government more accessible. But this administration has given people a lot of reasons to worry about their privacy, and this app only raises more questions about what the federal government is doing with our personal data.”

For me, this app is a hard pass. I deleted it 10 minutes after downloading it. 







A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed X users to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that Grok users made 4.4 million “undressed” or “nudified” images over a period of nine days, accounting for 41% of all images generated in that span. 

X, xAI and their safety and child safety divisions did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts” with its ease of subjecting children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 first learned, via an anonymous Instagram message in early December, that abusive, AI-generated sexual material of her was circulating on the web. The filing says the anonymous Instagram user told her about a Discord server where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to track down one perpetrator, who was arrested.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.




