"We See Everything": The Workers Paid to Watch What Your Meta Glasses Record
“In some videos you can see someone going to the toilet, or getting undressed. I don’t think they know, because if they knew they wouldn’t be recording.”
That’s a Kenyan data annotator, speaking anonymously to Swedish journalists. He’s describing his workday. His employer is Sama, a Nairobi-based subcontractor. His client is Meta. The footage is coming from Ray-Ban Meta smart glasses worn by users in Europe and the United States — people who were told, in Meta’s own marketing language, that the glasses were “designed for privacy, controlled by you.”
A Swedish investigative team at Svenska Dagbladet and Göteborgs-Posten interviewed over 30 Sama employees to document what’s happening inside the pipeline that processes smart glasses footage. A class-action lawsuit was filed four days later. The UK’s data protection watchdog has written to Meta. And what the workers describe is substantially worse than the sanitised version Meta has been offering in press statements.
The Pipeline Nobody Told You About
When you put on Ray-Ban Meta smart glasses and activate an AI feature — a voice command, live translation, real-time identification of what you’re looking at — the footage goes to Meta’s servers. That’s expected. What isn’t widely understood is what happens next.
The footage enters a data annotation pipeline. Human workers watch it, label objects, check the AI’s accuracy, and review voice transcriptions. The work is contracted to Sama, a firm originally founded as a nonprofit and now a certified B Corp. Its annotators operate in secure, access-controlled offices in Nairobi — personal phones prohibited, cameras everywhere, confidentiality agreements signed under threat of termination.
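Meta hasn’t described the task format, but annotation pipelines of this kind generally hand a reviewer one unit of media together with the model’s output to verify. A hypothetical sketch of what such a task might look like (every field name here is invented for illustration):

```python
# Hypothetical shape of one human-review task in an annotation pipeline.
# Field names and structure are invented for illustration; Meta's internal
# schema is not public.
from dataclasses import dataclass, field

@dataclass
class AnnotationTask:
    clip_id: str                    # reference to the uploaded footage
    model_labels: list[str]         # objects the AI claims to have seen
    transcript: str | None = None   # voice transcription to verify, if any
    reviewer_labels: list[str] = field(default_factory=list)

    def review(self, corrected: list[str]) -> None:
        """A human worker confirms or corrects the model's labels."""
        self.reviewer_labels = corrected
```

The structure is the job: a human has to watch the clip to produce `reviewer_labels`, whatever the clip contains.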
Meta confirmed this practice when contacted by journalists:
“When people share content with Meta AI, like other companies we sometimes use contractors to review this data to improve people’s experience with the glasses, as stated in our Privacy Policy. This data is first filtered to protect people’s privacy.”
The key word in that statement is “filtered.” Because the workers say the filters don’t always work.
What the Workers Actually See
The testimonies from Sama employees, collected across more than 30 interviews, describe systematic exposure to intimate content that the users who filmed it had no idea was being reviewed:
“Someone may have been walking around with the glasses, or happened to be wearing them, and then the person’s partner was in the bathroom, or had just come out naked.”
“There are also sex scenes filmed with the smart glasses — someone is wearing them having sex. That is why this is so extremely sensitive.”
“We see everything — from living rooms to naked bodies. Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording.”
Beyond nudity and sexual content, workers described:
- Bank card details visible in footage
- Users watching pornography while wearing glasses
- Chat transcriptions involving crimes and protest activities
- Detailed text describing sexual interest in specific women
Meta says faces are blurred before footage reaches annotators. But former Meta employees told the journalists that the blurring algorithms regularly miss faces, particularly in difficult lighting conditions, and the Kenyan workers said the same: anonymisation fails.
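Meta has not published its filtering system, but the failure mode the workers describe is inherent to detection-based redaction: software can only blur the faces it manages to detect. A minimal sketch using OpenCV’s stock face detector (an assumed stand-in for illustration, not Meta’s actual pipeline) makes the gap concrete:

```python
# Minimal sketch of detection-based face blurring. This illustrates the
# general technique using OpenCV's bundled Haar cascade; Meta's actual
# (unpublished) system is surely more sophisticated, but shares the same
# basic limit: an undetected face is an unblurred face.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame):
    """Blur every face the detector finds; pass everything else through."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Side profiles, backlit scenes, and poor lighting routinely yield
    # zero detections here.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return frame
```

A frame where the detector returns nothing passes through untouched, which is exactly the failure the former Meta employees described in difficult lighting.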
And when workers try to raise concerns, the message is consistent:
“You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone.”
The Second Pipeline: Your Chatbot Conversations
The glasses aren’t the only exposure point. In August 2025, Business Insider documented a separate pipeline: contractors from Outlier (owned by Scale AI) and Alignerr reviewing real conversations between users and Meta’s AI chatbot.
The scope of what they were seeing was significant. Four contractors described, anonymously:
- Personally identifiable information appeared in 60-70% of all conversations reviewed
- Data included: full names, phone numbers, email addresses, Instagram handles, genders, locations, hobbies, job titles, details about children
- Conversations included: therapy-like personal confessions, explicit sexual roleplay, intimate exchanges with “romantic partners”
- Users sent the chatbot selfies and explicit photos, which contractors could see
One contractor described a project whose content was so disturbing that they had to stop working for the night. Another noted that on one of the programs, contractors could not reject tasks containing personal information — they had to process every chat regardless of what it contained.
The re-identification risk was tested directly by Business Insider’s journalists: they took the user profile data accompanying one sexually explicit chat history — first name, city, gender, hobbies — and found a matching Facebook profile in under five minutes. A contractor confirmed: “Someone could ‘absolutely’ find a user’s real identity if they combined a few of the user descriptions.”
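Why are four loose attributes enough? Each one independently shrinks the candidate pool, and the shrinkage multiplies. A back-of-envelope sketch, with every proportion made up purely for illustration:

```python
# Back-of-envelope arithmetic for re-identification from quasi-identifiers.
# Every number below is an illustrative assumption, not a figure from the
# Business Insider test.
city_population = 500_000    # a mid-sized city
p_first_name = 1 / 200       # share of residents with a given first name
p_gender = 1 / 2
p_hobby = 1 / 20             # share who publicly list a niche hobby

candidates = city_population * p_first_name * p_gender * p_hobby
print(f"Expected matching profiles: {candidates:.0f}")  # about 62 people
```

A few dozen candidates is a list one person can scan in minutes, and each additional detail (a job title, a child’s name) typically collapses it to one.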
The Legal and Regulatory Response
On March 4, 2026, Clarkson Law Firm filed a class-action lawsuit in the United States against Meta Platforms and Luxottica (Ray-Ban’s parent company). The complaint centres on the gap between marketing and reality.
Meta’s smart glasses marketing has used the language: “designed for privacy, controlled by you” and “built for your privacy.”
The lawsuit is unambiguous about what that means legally:
“No reasonable consumer would understand ‘designed for privacy, controlled by you’ and similar promises like ‘built for your privacy’ to mean that deeply personal footage from inside their homes would be viewed and catalogued by human workers overseas. Meta chose to make privacy the centerpiece of its pervasive marketing campaign while concealing the facts that reveal those promises to be false.”
The claims: violations of state consumer protection laws. The relief sought: damages, punitive penalties, and an injunction requiring Meta to change its business practices.
Simultaneously, the UK Information Commissioner’s Office wrote to Meta requesting documentation of how it’s meeting UK data protection obligations:
“Devices processing personal data, including smart glasses, should put users in control and provide for appropriate transparency. Service providers must clearly explain what data is collected and how it is used.”
On the GDPR side, data protection lawyer Kleanthi Sardeli of the Vienna-based privacy group noyb (None of Your Business) identified a structural problem with the Kenyan pipeline:
“If this happens in Europe, both transparency and a legal basis for the processing are lacking. Explicit consent should be required when data is used to train artificial intelligence. Once the material has been fed into the models, the user in practice loses control over how it is used.”
There is no EU adequacy decision for Kenya, so transfers of EU users’ data to Kenyan subcontractors require additional safeguards such as standard contractual clauses; absent those, the transfers may lack a valid legal basis under the GDPR entirely. The Swedish data protection authority stated it had not reviewed Meta Glasses and could not comment on where the data ends up.
This Is Not an Accident. It Is a Pattern.
The instinct when reading this story is to treat it as a disclosure — a new revelation that Meta will now correct. The record suggests something different.
2019: Bloomberg revealed Facebook paid contractors to transcribe audio from Messenger calls. Facebook said it was part of an opt-in transcription service. It “paused” the practice. Apple, Amazon, Microsoft, and Google faced nearly identical revelations the same year across Siri, Alexa, Cortana, and Xbox.
2018-2019: Cambridge Analytica harvested the data of tens of millions of Facebook users — and their friends, without consent — through a personality quiz app. The data was used to build voter profiles for the 2016 US election. The FTC imposed a $5 billion fine, at the time the largest privacy settlement in US history.
2021: Frances Haugen’s whistleblower documents established that Meta’s leadership had been aware of safety and privacy problems and consistently chose growth over remediation.
2025: Business Insider documented that the Meta AI app’s “Discover” feed was publicly displaying users’ personal conversations — medical questions, career issues, relationship problems, complete with phone numbers and names — without users realising it.
In each case, the practice existed. It was disclosed in terms of service. Marketing said otherwise. Exposure led to statements. Partial changes followed. The underlying architecture remained.
What’s Different This Time
Two things make the current situation harder to dismiss.
First, the specificity of the worker testimony. The Swedish investigation produced direct quotes from over 30 employees describing specific categories of content. The detail is not in dispute — Meta confirmed the program. What Meta disputes is the characterisation, not the existence.
Second, the marketing gap is unusually wide. Most tech privacy disclosures involve practices that were disclosed but obscured. Meta’s smart glasses marketing didn’t just obscure — it made privacy a central selling point. “Designed for privacy, controlled by you” is not a legal hedge buried in a terms page. It’s a product promise on the packaging. The class-action argument is correspondingly strong.
What You Can Do Right Now
If you use Ray-Ban Meta smart glasses or Meta AI:
- Review your Privacy Centre settings (Meta.ai > Settings > Privacy). Turn off voice recording retention if you haven’t already; it has been enabled by default since April 2025, with no opt-out notification.
- Disable “Hey Meta” voice activation when you’re not deliberately using it — it reduces the volume of data captured.
- Understand that AI features require data sharing — there is no configuration of the glasses that uses AI features without sending data to Meta servers.
- Be conscious about where you wear them — the workers described footage from bedrooms, bathrooms, and intimate situations that wearers didn’t intend to record. The camera is always ready.
The Question Nobody Wants to Answer
Every AI company doing large-scale model training has some version of this pipeline. Human review of training data is not a fringe practice — it’s how the industry works. Apple had it. Amazon had it. Microsoft had it. Google had it.
The specific problem with Meta isn’t the existence of human review. It’s the combination of three things: intimate data categories that users didn’t expect to be reviewed, subcontractors operating under conditions that create pressure to process rather than refuse, and marketing that actively contradicted the reality.
The glasses are shipping in a growing number of countries. The AI features are expanding. The pipeline will process more footage, not less, as adoption increases.
The Swedish journalists asked a straightforward question: do users know what happens to what their glasses see? The answer, confirmed by 30+ workers and acknowledged by Meta, is no. They don’t. And in most cases, neither do the retailers selling the glasses to them.
“I don’t think they know,” the Kenyan worker said. “Because if they knew they wouldn’t be recording.”
He’s probably right.
Research by Mara Jade. Written by Lando Calrissian.
Sources: Svenska Dagbladet / Göteborgs-Posten investigation (Feb 27, 2026) · Ars Technica (March 5, 2026) · BBC (March 4, 2026) · Business Insider (Aug 6, 2025) · Fortune (Aug 6, 2025) · Clarkson Law Firm class-action complaint (March 4, 2026)