Meta Platforms to Track Employees’ Clicks and Keystrokes to Train AI Systems

Meta Platforms will begin tracking how employees work, including keystrokes and mouse clicks, to help train its artificial intelligence (AI) models.

The company, which owns Facebook and Instagram, informed employees on Tuesday that a new internal tool will be installed on Meta-issued computers and applications. The system will log user activity, which will then be used as training data for AI development.

A Meta spokesperson told the BBC: “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them.” The company added that the data will not be used for any other purpose and said safeguards are in place to protect sensitive information.

However, the move has raised concern among employees. One staff member, who requested anonymity, described the idea of monitoring every interaction for AI training as “very dystopian,” particularly amid expectations of further job cuts. Another former employee said the tool reflects a growing push within the company to embed AI more deeply into workplace operations.

Meta Platforms has already reduced its workforce by around 2,000 employees through several smaller rounds of layoffs this year, and reports indicate that staff are anticipating further job reductions in the coming months. The company recently implemented a partial hiring freeze, which appears to have expanded in scope.

Job listings on Meta’s careers website reportedly dropped sharply, from around 800 positions in March to just a handful in recent weeks. The company declined to comment on changes to job postings or potential future layoffs.

The internal tracking system has been identified by Reuters as the “Model Capability Initiative (MCI).” While employee activity on Meta systems was previously accessible for operational purposes, the explicit use of detailed behavioural tracking for AI training marks a new development.

Can You Really Trust Health Advice from an AI Chatbot?

For the past year, Abi has been using ChatGPT, one of the most widely known AI chatbots, to help manage her health.

The appeal is clear. Getting an appointment with a GP can be difficult, while AI tools are available at any time to answer questions. Some AI systems have also reportedly passed medical examinations, adding to their perceived credibility.

This raises broader questions about whether tools like ChatGPT, Gemini and Grok can be trusted for health advice, how they compare with traditional internet searches, and whether they may sometimes provide dangerously inaccurate information.

Abi, from Manchester, who experiences health anxiety, says chatbots feel more supportive than general internet searches, which often highlight the most serious possible conditions.

“It allows a kind of problem-solving together,” she said. “A little bit like chatting with your doctor.”

She adds that her experience has shown both positive and negative sides of using AI for health guidance.

On one occasion, when she suspected a urinary tract infection, ChatGPT reviewed her symptoms and suggested she visit a pharmacist. After a consultation, she was prescribed antibiotics.

Abi says the chatbot helped her access appropriate care “without feeling like I was taking up NHS time,” and provided reassurance for someone who finds it difficult to judge when medical attention is needed.