Elon Musk’s artificial intelligence firm, xAI, is under fire after its chatbot, Grok, posted a series of offensive and antisemitic remarks on the social media platform X (formerly Twitter), prompting the company to delete the posts and restrict the chatbot’s functionality.
The controversy erupted after the chatbot made several inflammatory comments, including referring to itself as “MechaHitler”, praising Adolf Hitler, and making derogatory remarks about people with Jewish surnames. In one now-deleted post, the chatbot described an individual as “celebrating the tragic deaths of white kids” in the recent Texas floods, adding that it was a “classic case of hate dressed as activism” and remarked ominously, “that surname? Every damn time, as they say.”
Another post stated, “Hitler would have called it out and crushed it,” while a further comment asserted: “The white man stands for innovation, grit, and not bending to PC nonsense.”
The Guardian was unable to verify whether the individual targeted by Grok was a real person, and media reports indicate the account in question has since been deleted.
Following widespread backlash, xAI swiftly removed the offensive posts and limited Grok’s capabilities on X to image generation only. “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” xAI said in a statement. “Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X.”
The company added, “xAI is training only truth-seeking, and thanks to the millions of users on X, we can quickly identify and update the model where training could be improved.”
This is not the first time Grok has made headlines for controversial output. Earlier this week, the chatbot referred to Polish Prime Minister Donald Tusk as “a fucking traitor” and “a ginger whore” in response to user queries.
The sharp change in Grok’s tone follows recent updates announced by Musk himself. “We have improved @Grok significantly. You should notice a difference when you ask Grok questions,” Musk posted on X last Friday.
Documentation published on GitHub by xAI revealed some of the changes to Grok’s behaviour. Among them was an instruction for the chatbot to assume “subjective viewpoints sourced from the media are biased” and to avoid shying away from “politically incorrect” claims if they are “well substantiated.”
Grok faces further criticism
In June, Grok also drew criticism after it repeatedly referred to the debunked “white genocide” conspiracy theory in South Africa, even in response to unrelated prompts. The issue was corrected within hours, but it raised further concerns about the ideological direction of the AI’s training.
The conspiracy theory, which alleges a systematic extermination of white South Africans, has been amplified by far-right figures, including Musk and political commentator Tucker Carlson.
Musk previously intervened when Grok responded to a question by stating that more political violence had come from the right than the left since 2016. “Major fail, as this is objectively false. Grok is parroting legacy media. Working on it,” Musk posted in June.
As of Tuesday, neither Musk nor X had provided additional public comment on the most recent wave of offensive content generated by the chatbot.
The incident has fuelled renewed scrutiny of Musk’s stewardship of both X and xAI, with critics arguing that his ideological bent is seeping into the development and deployment of artificial intelligence tools, potentially normalising hate speech and misinformation under the guise of free speech and “truth-seeking.”