NAIROBI, Kenya - TikTok’s efforts to create a safer, rule-abiding platform led to the removal of more than 60,000 Kenyan accounts between April and June this year.
The social media giant revealed in its latest Community Guidelines Enforcement Report for Kenya that the accounts were removed for violating its community guidelines.
An additional 57,262 accounts were flagged as likely belonging to users under 13, in breach of the platform’s minimum age requirement. The data reflects TikTok’s continued push to improve content moderation and reduce harmful content globally.
Globally, TikTok’s moderation efforts resulted in the removal of 178.8 million accounts during the same quarter, with 144.4 million of those deletions handled through automation.
According to TikTok, automated moderation has become a game-changer in ensuring faster, consistent removal of rule-breaking content across the platform, which sees millions of daily posts.
“With over a billion people and millions of pieces of content posted to our platform every day, we continue to prioritize and enhance TikTok’s automated moderation technology,” the company shared, underscoring its commitment to keeping harmful content at bay.
Automation is now responsible for removing around 80pc of content flagged for violations, a leap from 62pc the previous year.
This shift not only streamlines the content removal process but also reduces the burden on human moderators, shielding them from frequent exposure to offensive material.
In Kenya alone, the second quarter saw the removal of approximately 360,000 videos—representing about 0.3pc of all videos uploaded within the country during that period.
Notably, TikTok’s moderation appears largely proactive: 99.1pc of violative videos were removed before any user reported them, and 95pc were taken down within 24 hours.
This swift action reflects TikTok’s goal of minimizing user exposure to potentially harmful content, with automated systems used to predict risk and act quickly.
The report highlighted the primary reasons behind content removal: 31pc of deleted content dealt with mature themes, 27.9pc involved regulated goods, 19.1pc related to mental health, and 15.1pc fell under safety violations.
Privacy and security, as well as integrity and authenticity, also contributed to TikTok’s takedown decisions, signaling a broad crackdown across sensitive categories.
While TikTok’s aggressive stance on content violations might ruffle some feathers, it’s a clear signal of the platform’s intent to prioritize user safety.
By removing vast amounts of violative content and accounts, TikTok is setting a precedent for more stringent moderation in social media.
As platforms increasingly invest in AI-driven moderation, TikTok’s approach could serve as a model—or a warning—for how automation can manage complex content ecosystems.
With digital platforms facing pressure to prioritize user safety, TikTok’s proactive strategy hints at the future of content moderation.