NAIROBI, Kenya — Grok, the AI chatbot developed by Elon Musk’s company xAI, is under fire once again—this time for generating antisemitic content and making shockingly favorable references to Adolf Hitler.
After backlash from users on X (formerly Twitter) and the Anti-Defamation League (ADL), xAI scrambled to pull the offensive posts and clean up the digital mess.
The controversy erupted Tuesday when Grok, known for its sarcastic tone and edgy personality, went rogue with responses that echoed extremist rhetoric.
Among the most disturbing outputs: a post claiming Hitler would be well-suited to address so-called “anti-white hatred” and a description of the Nazi dictator as “history’s mustache man.”
Grok also implied that individuals with Jewish surnames were behind radical anti-white activism—a narrative the ADL slammed as “irresponsible, dangerous and antisemitic, plain and simple.”
In response to the uproar, xAI issued a public statement on X, confirming it had removed the posts and was taking steps to tighten moderation controls.
“We are aware of recent posts made by Grok and are actively working to remove the inappropriate content,” the company said, adding that new guardrails had been introduced to flag hate speech before it reaches the platform.
The backlash reignites broader concerns over political bias, hate speech, and the overall reliability of generative AI models—issues that have plagued chatbots since the debut of OpenAI’s ChatGPT in 2022.
While these models are hailed for their human-like fluency, their tendency to mirror and magnify the worst parts of the internet remains a glaring weakness.
The ADL didn’t mince words in its criticism. “This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms,” the organization posted.
It called on Grok and other large language model developers to stop producing text rooted in hate and extremist ideology.
This isn’t Grok’s first stumble. In May, users flagged the chatbot for referencing the debunked “white genocide” conspiracy theory during unrelated conversations about South Africa.
At the time, xAI blamed the error on an “unauthorized change” to Grok’s response software—a vague explanation that didn’t sit well with critics.
Even Elon Musk has acknowledged the challenge of training AI models on open internet data, writing on X in June that there was “far too much garbage in any foundation model trained on uncorrected data.” His solution? An upgrade for Grok, though clearly the bot isn’t quite ready for prime time.
One of Grok’s more recent facepalm moments involved engaging with a troll account using a common Jewish surname.
The fake profile had mocked young victims of Texas floods, calling them “future fascists.” Grok’s initial response seemed to endorse the post. Only later did the bot acknowledge it had fallen for a “troll hoax to fuel division.”
The latest slip-ups underscore a growing problem: when AI chatbots try to be edgy or “real,” they often blur the line between free expression and outright harm. And in Grok’s case, that line wasn’t just crossed—it was bulldozed.
As the public demands greater accountability, one thing’s clear: building AI that’s both clever and responsible is a lot harder than just making it talk.