Social media giants TikTok and Meta Platforms are facing renewed scrutiny after whistleblowers and media investigations alleged that both companies may have compromised user safety while competing to dominate the global social media landscape through powerful recommendation algorithms.
The allegations have sparked fresh debate about the responsibility of major technology companies to protect users from harmful content, particularly as billions of people rely on social media platforms for news, entertainment and communication.
At the centre of the controversy is the race between TikTok and Meta to develop increasingly sophisticated algorithms that determine what content appears on users’ feeds.
These systems are designed to keep people watching, scrolling and interacting for as long as possible.
Recommendation algorithms have become the backbone of modern social media platforms. These systems analyse user behaviour, including watch time, likes, comments, shares and viewing patterns, to determine which posts are most likely to keep a user engaged.
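To illustrate the principle only: the platforms' actual ranking systems are proprietary and vastly more complex, but the basic idea of scoring posts on engagement signals can be sketched in a few lines of Python. The signal names and weights below are entirely invented for this example.

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    """Hypothetical engagement signals for one post (illustrative only)."""
    watch_time_sec: float
    likes: int
    comments: int
    shares: int

def engagement_score(s: PostSignals) -> float:
    # Invented weights: each interaction type contributes to one score
    # that a feed ranker could sort on. Real systems use far more signals.
    return (0.5 * s.watch_time_sec
            + 1.0 * s.likes
            + 2.0 * s.comments
            + 3.0 * s.shares)

posts = {
    "video_a": PostSignals(watch_time_sec=40.0, likes=10, comments=2, shares=1),
    "video_b": PostSignals(watch_time_sec=12.0, likes=30, comments=8, shares=5),
}

# Rank the feed: highest predicted engagement first.
ranked = sorted(posts, key=lambda name: engagement_score(posts[name]), reverse=True)
```

In this toy version, a post with fewer views but more shares and comments outranks a longer-watched one, which is the dynamic critics point to when they say interaction-heavy content gets amplified.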
TikTok revolutionised this model with its highly personalised ‘For You’ page, which quickly learns user preferences and pushes short videos tailored to individual interests. The system has been widely credited for the platform’s explosive growth worldwide.
The success of TikTok forced rivals to adapt. Meta, which owns Facebook and Instagram, rapidly expanded its own short-form video offerings through features such as Reels in an effort to compete with TikTok’s popularity.
Technology analysts say this competition has evolved into what insiders describe as an “algorithm arms race,” where platforms continually tweak their systems to maximise user engagement.
Concerns about the risks of engagement-driven algorithms have surfaced through whistleblower accounts, investigative journalism and legal filings.
According to reports citing former employees, some internal research at major technology companies suggested that content generating strong emotional reactions, including anger, shock or outrage, often performs best within algorithmic systems.
Because recommendation engines are designed to prioritise engagement, critics say such content can be amplified more widely across platforms.
Investigations by media organisations including the BBC cited insiders who claimed engineers and safety teams sometimes raised concerns that certain features were rolled out rapidly in response to competitive pressure.
One former employee reportedly alleged that the push to replicate TikTok’s success encouraged Meta to prioritise engagement metrics when launching short-video features such as Instagram Reels.
The whistleblower suggested that this strategy sometimes allowed more controversial or borderline content to circulate because it generated stronger reactions from users.
Digital safety experts warn that algorithm-driven feeds can unintentionally promote harmful material.
Content that triggers strong emotional responses — whether positive or negative — tends to generate more comments, shares and interactions. This can encourage recommendation systems to push such posts to larger audiences.
Former workers at TikTok have also raised concerns about the complexity of the platform’s recommendation system. Some insiders reportedly described the algorithm as a “black box,” meaning even engineers and moderators may struggle to fully understand how certain videos gain widespread distribution.
One of the most significant concerns raised by child-safety advocates is the potential effect of algorithm-driven platforms on younger audiences.
Both TikTok and Meta's platforms attract millions of teenage users, many of whom spend several hours each day browsing content.
Recommendation algorithms can quickly build highly personalised feeds based on what a user watches or interacts with. Researchers say this can create feedback loops where similar types of content appear repeatedly.
In some cases, experts warn this could lead young users into “rabbit holes” of increasingly narrow or extreme content, potentially exposing them to misinformation, bullying, unhealthy beauty standards or other harmful trends.
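The feedback loop researchers describe can be demonstrated with a toy simulation (this is not any platform's actual system): a recommender that samples topics in proportion to past engagement will, purely by reinforcement, narrow a feed toward whatever the user happened to engage with early on.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

# Four topics start with equal weight, i.e. an undifferentiated feed.
weights = {"sports": 1.0, "music": 1.0, "fitness": 1.0, "news": 1.0}

def recommend(weights):
    """Pick a topic with probability proportional to its current weight."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

# Simulate 200 recommendations; each view reinforces the topic just shown.
for _ in range(200):
    topic = recommend(weights)
    weights[topic] += 1.0

top = max(weights, key=weights.get)
share = weights[top] / sum(weights.values())
```

Because every recommendation of a topic makes that topic more likely to be recommended again, the weights drift apart and one topic comes to dominate the feed, the "rabbit hole" dynamic experts warn about, here produced by nothing more than engagement-proportional sampling.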
In legislative hearings on platform safety, experts have told lawmakers that engagement-driven business models may sometimes conflict with user safety goals.
Both TikTok and Meta have rejected claims that they ignore user safety in pursuit of engagement.
The companies say they invest heavily in trust and safety teams, artificial intelligence moderation systems and content policies designed to remove harmful material from their platforms.
Meta has emphasised that its algorithms consider multiple factors, including content quality and safety signals, rather than simply promoting posts that receive the most engagement.
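The kind of multi-factor ranking Meta describes can be sketched, again purely hypothetically, as a score that blends engagement with quality and safety signals rather than sorting on raw engagement alone. The signal names, scales and weights below are invented for illustration.

```python
def blended_score(engagement: float, quality: float, safety: float) -> float:
    """Illustrative multi-factor ranker (not Meta's actual formula).

    quality and safety are assumed to be scores in [0, 1]; a low safety
    score sharply reduces a post's rank regardless of engagement.
    """
    return engagement * (0.5 + 0.5 * quality) * safety

# A highly engaging but low-safety post can rank below a moderately
# engaging, safe one.
risky = blended_score(engagement=100.0, quality=0.4, safety=0.2)
safe = blended_score(engagement=60.0, quality=0.8, safety=1.0)
```

Under this toy formula the risky post scores 14 against the safe post's 54, which is the behaviour the companies say their safety signals are meant to produce.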
TikTok has also pointed to features such as parental controls, screen-time limits and content moderation systems aimed at protecting younger users.
The companies maintain that they continuously refine their algorithms to reduce the spread of misinformation and harmful content.