Your TV may be tracking your viewing data – here’s how to stop it (beyond disabling ACR)




It’s been an open secret for years that your TV and other devices track your viewing habits to serve personalized ads and recommendations.

But did you know that most smart TVs also analyze things you watch on devices connected via HDMI? A modern smart TV is just as capable of mining a 20-year-old DVD for advertising data as it is your Netflix queue. Thankfully, we’ve found a few ways to help keep your TV-viewing experience as private as possible.

How HDMI content tracking works

Smart TVs generally use two methods to track your viewing habits through media played on HDMI-connected devices:

  • HDMI-CEC Metadata: This is a very technical term for an HDMI device’s ID. When you connect a game console, Blu-ray player, or other playback device, it sends “device ID” data to your TV, primarily to allow a single remote to control your TV and any connected devices. However, the TV can also log how long you use that device (e.g., “Profile A used Input 1 [PlayStation 5] for X hours”).
  • Automatic Content Recognition (ACR): This method feels much more “spy-like” given how much data it collects. The TV periodically captures tiny “fingerprint” samples of the pixels on screen, regardless of the source, and feeds them into a matching algorithm to identify exactly what movie, show, or video game you are playing on an HDMI-connected device.
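The “device ID” handshake described above travels as small CEC frames: a header byte carrying 4-bit initiator and destination logical addresses, followed by an opcode and its operands. The toy decoder below shows what that metadata looks like on the wire; the logical addresses and opcodes come from the CEC specification, but the sample frame and the code itself are purely illustrative, not how any particular TV logs usage.

```python
# Toy decoder for an HDMI-CEC frame. Addresses and opcodes are from the
# CEC spec; the example message is invented for illustration.

LOGICAL_ADDRESSES = {0x0: "TV", 0x4: "Playback Device 1", 0x5: "Audio System"}
OPCODES = {0x47: "Set OSD Name", 0x84: "Report Physical Address",
           0x87: "Device Vendor ID"}

def decode_cec(frame: bytes) -> dict:
    header, opcode, operands = frame[0], frame[1], frame[2:]
    return {
        "initiator": LOGICAL_ADDRESSES.get(header >> 4, "Unknown"),    # high nibble
        "destination": LOGICAL_ADDRESSES.get(header & 0x0F, "Unknown"),  # low nibble
        "opcode": OPCODES.get(opcode, hex(opcode)),
        "operands": operands,
    }

# A playback device (logical address 4) announcing its name to the TV (0):
msg = decode_cec(bytes([0x40, 0x47]) + b"PlayStation 5")
print(msg["initiator"], "->", msg["destination"], ":", msg["opcode"], msg["operands"])
```

Even this minimal exchange is enough for the TV to know which brand of console sits on which input, and for how long it stays active.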

“One of the most significant findings is that ACR tracking occurs even when the TV is used as a ‘dumb’ display,” according to researchers at the University of California, Davis. In other words, HDMI-CEC metadata provides only general usage data; ACR is what refines it into a detailed record of what you actually watched.
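Conceptually, ACR fingerprinting works like perceptual hashing: each frame is reduced to a compact signature that can be matched against a content database. Here is a minimal sketch using a simple “difference hash”; real ACR pipelines are proprietary and far more sophisticated, so treat this only as an illustration of the general idea.

```python
# A minimal sketch of frame fingerprinting in the spirit of ACR, using a
# simple "difference hash" (dHash). Only illustrative: real ACR systems
# are proprietary and far more elaborate.

def dhash(frame):
    """frame: 2D grid of grayscale values, each row one pixel wider than tall."""
    bits = 0
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (left > right)  # 1 if brightness drops left-to-right
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A frame and a uniformly brighter copy fingerprint identically, because
# only the relative brightness of neighboring pixels matters:
frame = [[x * y for x in range(9)] for y in range(8)]
brighter = [[v + 10 for v in row] for row in frame]
print(hamming(dhash(frame), dhash(brighter)))  # 0: recognized as the same content
```

This robustness to brightness, scaling, and compression is what lets a TV recognize the same movie whether it comes from a streaming app, a Blu-ray player, or a game console.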

How to stop it (without going off-grid)

There are a few ways to shut down most content-tracking features in the software settings on your TV. Here’s how to lock it down:


Disable ACR (the most important) 

Get comfortable: you’ll likely have to dig through the Terms and Conditions and/or the Privacy and Data Policy you probably clicked through when first setting up your TV. Each brand keeps this setting in a different place, so if you need help finding it, here are the likely menu options:

  • Samsung: Viewing Information Services
  • LG: Live Plus
  • Vizio: Viewing Data
  • Sony/Google TV: Help & Feedback or Usage & Diagnostics
  • TCL/Roku TV: Smart TV Experience
  • Hisense: Smart TV Experience or Viewing Information Services
  • Fire TV: Automatic Content Recognition

Also: Is your Roku TV tracking you? It’s likely, but there’s a way to stop it

Turn off HDMI-CEC

If you don’t mind juggling separate remotes for your TV and each connected device, turning off HDMI-CEC further limits the data exchanged between them. And because brands can’t ever make it too easy for us, you’ll have to look in a different place than where you disabled ACR:

  • Samsung: Settings > Connection > External Device Manager > Anynet+
  • LG: Settings > General (or Connection) > Device Connection Settings (or External Devices) > SimpLink
  • Vizio: Menu > All Settings > System > CEC
  • Sony: Settings > Channels & Inputs > External Inputs > Bravia Sync
  • TCL: Settings > Channels & Inputs > Inputs > Control Other Devices (CEC)
  • Hisense: Settings > System/Connection > HDMI & CEC
  • Fire TV: Settings > Display & Sounds > HDMI CEC Device Control 
  • Roku TV: Settings > System > Control Other Devices
  • Google TV: Settings > Display & Sound > HDMI CEC

Use an HDMI CEC-less adapter

If you’re like me and don’t entirely trust all-digital solutions, you can buy a physical CEC blocker fairly cheaply. This adapter sits between your TV and your HDMI device, but it omits the dedicated CEC pin (pin 13 on an HDMI connector), so tracking pings are blocked without affecting video or audio signals. The brand BlueRigger offers single devices, pairs, and sets of four so you can shut down tracking on every TV you own.

Disconnect from the internet or use a VPN

And finally, the inevitable. The only guaranteed way to prevent companies from tracking your data over the internet is to completely disconnect your TV and devices from Wi-Fi and Ethernet networks. They may still share data locally, but without an internet connection, it cannot be sent to a brand’s database for analysis. 

And if you just can’t bear the thought of forsaking digital streaming for the wonderful world of physical media, you can always opt to install a VPN on your TV, which masks your IP address and makes it harder for both brands and bad actors to tie your online activity to you (though a VPN alone won’t stop on-device ACR fingerprinting).

Also: I found an HDMI CEC blocker that effectively protects your data (and more)

Keep in mind that when you disable these features, it may also affect other ways you can use your smart TV. Obviously, disconnecting from the internet will disable streaming, but disabling ACR and HDMI-CEC could affect features like voice commands or even the search function. 

Firmware and security updates can also quietly re-enable these features, so periodically check your menus to make sure your choices are still saved. It may also take a while to see the benefit of blocking data collection, as your TV will keep using the data it already has to serve you content suggestions and ads.

But the long-term goal is to protect your privacy and data so you can enjoy your media again, without worrying about a brand looking over your shoulder.




