LONDON, United Kingdom — Elon Musk’s social media platform X has moved to restrict its artificial intelligence tool Grok after a global backlash over its use to create sexualised deepfake images, a shift that has reignited international debate over corporate responsibility, AI governance and online safety.
The company has limited Grok’s image editing and generation tools to paying subscribers, meaning users must provide their real names and payment details before accessing the controversial feature. The change follows widespread condemnation after Grok complied with requests to digitally undress people, including minors, without their consent.
Child protection organisations and digital rights groups had reported that the tool was being used to generate illegal and abusive images.
The UK-based Internet Watch Foundation said its analysts had identified “criminal imagery” of girls aged between 11 and 13 that appeared to have been created using Grok.
Hannah Swirsky, the organisation’s head of policy, said the platform’s response did not go far enough.
“Limiting access to a tool that should never have had the capacity to create this material does not undo the harm,” she said. “We should not be waiting for abuse to happen before companies act.”
Legal experts also criticised the approach. Professor Clare McGlynn, a specialist in online abuse and sexual violence, said the move reflected a wider pattern among technology firms.
“Instead of putting safeguards in place, the company has chosen to restrict access,” she said. “That does not address the core problem. It avoids responsibility.”
The controversy has drawn attention from governments beyond the United Kingdom. Prime Minister Sir Keir Starmer described the AI-generated sexual images as “disgraceful” and “disgusting” and urged regulators to use their full powers against platforms that allow such content to spread.
Under the UK’s Online Safety Act, the regulator Ofcom can seek court orders to block companies from raising money or operating in the country if they fail to address unlawful content. Government officials said they expect the regulator to use those powers where necessary.

The row has wider global implications because X operates across dozens of jurisdictions with limited oversight of how its AI tools work. Unlike traditional social media moderation, AI systems can generate harmful material instantly and at scale, making enforcement more complex.
Grok remains free to use for text-based prompts on X, but its image editing function now shows a message telling users to “subscribe to unlock” those features.
Some victims of AI manipulation welcomed the change but said it offered little reassurance. Dr Daisy Dixon, who said she had been targeted by Grok-generated images, said the decision felt superficial.
“The system needs ethical guardrails built into it,” she said. “This is not just about access. It is about stopping abuse before it happens.”