AI Deepfake War Videos Flood X Despite New Crackdown

WASHINGTON, United States — Artificial intelligence-generated war videos are rapidly spreading on the social media platform X despite a new policy aimed at curbing deepfake misinformation during the ongoing conflict in the Middle East.

Researchers say the war has triggered an unprecedented surge in AI-generated images and videos online, many appearing so realistic that users struggle to distinguish them from genuine footage.

Some of the viral clips depict fabricated scenes such as American soldiers allegedly captured by Iran, Israeli cities reduced to rubble, and United States embassies under attack.

The platform recently announced new enforcement measures targeting creators who post artificial content without disclosure.

According to X’s head of product, Nikita Bier, users who publish AI-generated war content without clearly labeling it could face a 90-day suspension from the platform’s revenue-sharing programme.

Repeat violations may lead to permanent suspension, he warned.

The move marks a shift for the platform, which critics have accused of allowing misinformation to flourish since its acquisition by Elon Musk in 2022.

Mixed reactions from officials and researchers

The policy received cautious support from some policymakers.

Sarah Rogers described the measure as a “great complement” to the platform’s Community Notes feature, which allows users to add context or corrections to misleading posts.

However, disinformation researchers say the rule has not significantly slowed the spread of manipulated content.

“The feeds I monitor are still flooded with AI-generated content about the war,” said Joe Bodnar.

“It doesn’t seem like creators have been dissuaded from pushing misleading AI-generated images and videos about the conflict,” he added.

Financial incentives for viral misinformation

Researchers argue that the platform’s monetisation model may unintentionally encourage the spread of sensational or misleading posts.

Premium accounts with blue checkmarks — which can earn revenue based on engagement — have been identified among the sources distributing AI-generated war visuals.

In one widely shared post, an account published an AI-generated video portraying a supposed Iranian nuclear strike against Israel, drawing millions of views.

Another viral clip showed the Burj Khalifa engulfed in flames — a fabricated scenario created with artificial intelligence.

Despite requests to label the content as AI-generated, the video reportedly remained online and continued accumulating views.

[Photo caption: Explosions erupt following strikes at the Tehran oil refinery in Tehran on March 7, 2026.]

Fact-checking challenges

AFP’s international fact-checking network has identified numerous AI-generated visuals related to the conflict circulating across the platform, many originating from premium accounts.

The volume of fabricated media is increasing faster than professional fact-checkers can debunk it.

Even the platform’s own AI chatbot, Grok, has occasionally misidentified AI-generated war images as authentic when users asked for verification.

Questions over enforcement

Researchers say enforcement of the policy may prove difficult.

Many users spreading manipulated content are not part of the revenue-sharing programme, meaning demonetisation rules may have limited reach.

Studies have also raised concerns about the effectiveness of Community Notes.

A report by the Digital Democracy Institute of the Americas found that more than 90 percent of submitted Community Notes are never published, limiting their ability to counter misinformation at scale.

Still, some analysts believe the new policy could reduce incentives for spreading disinformation if properly implemented.

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, said the rule represents a reasonable attempt to address the problem.

“In principle, this policy reduces the incentive structure for those spreading disinformation,” Mantzarlis said.

But he cautioned that enforcement remains uncertain.

“The devil will be in the implementing detail. Metadata on AI content can be removed and Community Notes are relatively rare,” he said.

“It is unlikely that X will be able to guarantee both high precision and high recall for this policy.”
