Meta Will Track Employees’ Keystrokes, Clicks and Mousing to Train AI


Meta will track its employees’ keystrokes, clicks and mouse movements — and even capture screenshots of what’s on their computer screens — to help train the company’s AI models. That’s according to a Reuters report on Tuesday, citing an internal memo sent to workers.

According to the memo, Meta will install a new software program called the Model Capability Initiative on the computers of US-based employees and contractors. The tracking software will operate on work-related apps and websites and is part of Meta’s plan to build AI agents that can do tasks autonomously.

The announcement, published in its entirety by Business Insider, said that monitored apps and URLs would include Gmail, GChat and Metamate, an employee AI assistant. Workers’ phones would not be included in the tracking.

Business Insider reported that Meta employees were “up in arms” about the plan to use tracking software.

On an internal communications website seen by the news outlet, one employee wrote, “This makes me super uncomfortable. How do we opt out?”

Meta CTO Andrew Bosworth responded, “There is no way to opt out on your work laptop,” prompting staff to react with shocked, crying and angry emoji, according to Business Insider.

As it invests in AI development — more than $135 billion this year — Meta continues to reduce headcount. The company plans to lay off about 8,000 employees, 10% of its workforce of 79,000, starting May 20. The company reportedly has cut 25,000 jobs since 2022.

Meta’s AI surveillance 

Meta wants to train its AI on tasks it cannot yet replicate, focusing on how people actually use their computers. This includes such actions as selecting options from dropdown menus and using keyboard shortcuts.

“This is where all Meta employees can help our models get better simply by doing their daily work,” the memo said.

Reuters said the memo was posted by an unidentified AI research scientist on Tuesday in a channel for the company’s SuperIntelligence Labs team.

According to Reuters, Bosworth told employees that the long-term vision was for AI agents to “do the work” while employees direct them and help them improve. He did not specifically say how the agents would be trained with the data, but did say that Meta would rigorously gather data “for all the types of interactions we have as we go about our work.”

Eric Null, director of the Privacy and Data Project at the digital rights organization Center for Democracy & Technology, said Meta’s plan to track employee computer interactions is one of the most “invasive” forms of workplace surveillance.

“That invasiveness underscores the need for clear privacy protections and AI guardrails,” Null told CNET. “This type of surveillance can cause real harm to people with disabilities, and workers in general chafe at this kind of tracking. Using this data for AI training in particular has the potential to replicate structural biases.”

In a statement given to CNET, a Meta spokesperson said that tracking employees is intended to give AI models “real examples” of how people interact with their computers.

“To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models,” the spokesperson said. “There are safeguards in place to protect sensitive content, and the data is not used for any other purpose.”

Meta said it would not use the collected data in performance reviews and that managers would not be able to see it.

Business Insider cited an unnamed source saying that, when hired, employees are told their work devices can be monitored by Meta.


Gemini on Android Auto

(Image: Kerry Wan/ZDNET)


ZDNET’s key takeaways

  • Gemini is now widely available in Android Auto.
  • It can integrate with Google services and other apps.
  • The AI answered both simple and complex, multi-step questions. 

Despite Google’s insistence on packing artificial intelligence into nearly every conceivable product, I haven’t really found too much day-to-day use for it. That might change now. 

Over the weekend, I noticed my Android Auto had updated to include Gemini. I decided to give it a quick test, and it deftly answered my questions. When I started to dive deeper, though, I was surprised by just how much it could do and how easily it handled what I thought were more complex asks.

Also: Your Android Auto just got 5 useful upgrades for free – and Google isn’t done

Here are some of the best ways I’m using the new Gemini integration. To get started for yourself, you can either use the mic button on your steering wheel or say “Hey Google.” 

1. Finding hours or other information about local businesses

When using my phone in the car, most of the time I’m checking hours for a local business or researching nearby restaurants or stores. I found that Gemini is perfect for quick, simple questions like, “What time does Tony’s Ice Cream close?” But it’s also great for diving a little deeper.

I’m the type of person who likes to do a lot of investigating when I’m trying to find a new restaurant. I like to know what makes each one special and what people recommend — before I decide. Gemini does very well in situations like this. 

Also: Google just gave Android Auto its most significant update yet – and we tested it on the road

I asked for the best local spots to find ice cream. Instead of just showing a list, Gemini began detailing each spot, noting that the number one recommendation was “a legendary local spot with more than 100 years of history scooping up happiness.” It went down the list, offering up recommendations about each option, and then it even asked which one I wanted to navigate to.  

2. Tracking down info deep in your email

My wife and I had tickets to a show this weekend, and while I knew where I was going, I decided to see if Gemini would help. Without mentioning the theater or the show’s name, I just asked, “What’s the address for the show tonight?” Gemini thought for a few seconds and then replied that my confirmation email didn’t mention an address before asking, “Do you want me to find that information online?” When I said I did, it quickly found the address and offered to start navigation.  

I asked Gemini several other email-specific questions like “What’s coming in the mail today?” (thanks to USPS Informed Delivery) and even some vague ones like “When is that thing I ordered from the TikTok shop arriving?” or “I remember a coupon for a haircut in my email, when does that expire?” It handled each one perfectly.

Also: How to clear your Android phone cache – and why it greatly improves performance

Instead of opening my Gmail app, scrolling to find what I need or searching, and then opening the message, I can now get this info quickly with Gemini’s help.

3. Getting answers on the go, and keeping the conversation going

I’m the type of person who immediately looks up the answers to random questions that pop into my head — things like, “Where is the Australian Shepherd dog breed from?”, “How do I make polymer clay earrings?” (my wife had seen some at a vendor fair), or “How do I make an electromagnet for an elementary school science project?”

Instead of Googling these queries, I asked Gemini. I wasn’t surprised to get a response, but I was surprised by how Gemini offered to keep things going. Every time Gemini offered an answer, it would ask if I wanted to talk more. I found myself having a conversation about my dog and why he doesn’t shed nearly as much as my other one, about the best way to present my son’s electromagnet, and even about different ways to make clay earrings and which option was best. 

4. Saving reminders and notes

I live by my Google Calendar, and if I don’t have something saved there, there’s a good chance I’ll forget it. The same goes for my reminder list in Google Keep. Quite often, while I’m driving, I’ll have a thought I want to remember later. Gemini, through Android Auto, was able to add items to my Keep lists and events to my Calendar. It also gave me a rundown of what’s on my calendar and even asked if I wanted help getting ready for a meeting tomorrow (which was actually my wife’s event on our shared calendar).

Also: The best AI chatbots: Expert tested and reviewed

5. Picking the perfect playlist

When it comes to the radio in my car, I’m constantly bouncing between podcasts, the song that got stuck in my head because it was viral on TikTok, whatever my kids request, or a huge variety of other songs. That means I’m often bouncing between Spotify, YouTube, and my XM radio. 

I often want to hear a specific song or album, and I was able to get Gemini to pull up specific songs using Spotify and YouTube and to stick to songs from that album. When I was in a more general mood, I got Gemini to tune to a specific XM station for me. 

I haven’t stumped AI yet

Overall, I’m finding that Gemini can handle at least 90% of the tasks I’d otherwise pick up my phone for, from basic questions to more in-depth, multi-step requests. It integrated not only with Google services like Gmail, but also with several other apps.

Also: Google’s Gemma 4 model goes fully open-source and unlocks powerful local AI – even on phones

The basic questions are more common, but the ones that require research are where Gemini shines. I kept trying to think up new things to ask, and I had trouble finding something that would genuinely stump the AI. If, like me, you haven’t really embraced Gemini yet, Android Auto might just be your ticket in. 
