NAIROBI, Kenya - Meta, Facebook’s parent company, is in hot water once again as former content moderators in Kenya accuse the tech giant and its contractors of ignoring threats to their safety.
The moderators, many of whom worked on removing harmful content from Ethiopia, allege that Meta and its subcontractors failed to act on credible threats from the Oromo Liberation Army (OLA), a rebel group notorious for its violent tactics.
The controversy stems from allegations that Sama, a Kenya-based contractor for Meta, dismissed warnings from moderators about threats from the OLA.
According to court documents filed by British non-profit Foxglove, moderators tasked with removing graphic content linked to the rebel group became targets themselves.
One moderator testified that he received a chilling message from the OLA warning content reviewers to stop taking down their posts or face “dire consequences.”
Another revealed that the rebels sent a list of names and addresses of moderators, causing him to live in constant fear for his safety and avoid visiting his family in Ethiopia.
Initially, Sama allegedly accused the moderators of fabricating the threats but later conducted an investigation and moved one of the targeted workers to a safe house.
Beyond safety concerns, moderators claim they were trapped in an “endless loop” of reviewing hateful content without the authority to remove posts that didn’t explicitly violate Meta’s policies.
According to an affidavit from an expert supervising Ethiopia-related moderation, the company ignored advice from its hired specialists on curbing hate speech in the region.
These accusations come alongside a broader lawsuit by 185 moderators against Meta and its contractors for wrongful dismissal.
The group alleges they were blacklisted from reapplying for similar roles after attempting to form a union, leaving them out of work when Meta moved its moderation contract to another firm, Majorel.
The fallout from this case could set a precedent for how Meta collaborates with content moderators globally, especially in regions plagued by conflict and disinformation.
This isn’t the first time Meta has faced scrutiny over its handling of harmful content in Ethiopia. A separate lawsuit filed in Kenya in 2022 accused the company of allowing violent posts to flourish, exacerbating tensions in the Ethiopian civil war.
As these legal battles unfold, Meta’s global approach to content moderation—and the safety of those tasked with enforcing its policies—is under intense scrutiny.