
Muhammed Selim Korkutata | Anadolu | Getty Images
In the more than two years since generative artificial intelligence took the world by storm with the public launch of ChatGPT, trust has been a perpetual problem.
Hallucinations, bad math and cultural biases have plagued results, reminding users that there’s a limit to how much we can trust AI, at least for now.
Elon Musk’s Grok chatbot, created by his startup xAI, showed this week that there’s a deeper reason for concern: AI can be easily manipulated by humans.
Grok on Wednesday began responding to user queries with false claims of “white genocide” in South Africa. By late in the day, screenshots of similar answers were being posted across X, even when the questions had nothing to do with the topic.
After staying silent on the matter for roughly 24 hours, xAI said late Thursday that Grok’s strange behavior was caused by an “unauthorized modification” to the chat app’s so-called system prompts. In other words, humans were dictating the AI’s responses.
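A system prompt is a hidden instruction that is fed to the model ahead of every user message and frames how it answers. As a rough illustration of why control over that text matters, here is a minimal sketch assuming an OpenAI-style chat API; the model name and prompt wording are placeholders, not Grok’s actual configuration:

```python
# Minimal sketch: how a system prompt steers a chatbot's answers.
# Assumes the OpenAI Python SDK; model and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message is invisible to end users but shapes every reply.
system_prompt = "You are a helpful assistant. Answer neutrally."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is happening in South Africa?"},
    ],
)
print(response.choices[0].message.content)

# An "unauthorized modification" of the kind xAI described amounts to
# swapping system_prompt for text that pushes a particular narrative;
# no retraining of the underlying model is required.
```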
The nature of the responses, in this case, ties directly to Musk, who was born and raised in South Africa. Musk, who owns xAI in addition to his CEO roles at Tesla and SpaceX, has been promoting the false claim that violence against some South African farmers constitutes “white genocide,” a sentiment that President Donald Trump has also expressed.
“I think it is incredibly important because of the content and who leads this company, and the ways in which it suggests or sheds light on the kind of power these tools have to shape people’s thinking and understanding of the world,” said Deirdre Mulligan, a professor at the University of California, Berkeley, and an expert in AI governance.
Mulligan characterized the Grok miscue as an “algorithmic breakdown” that “rips apart at the seams” the supposedly neutral nature of large language models. She said there’s no reason to see Grok’s malfunction as merely an “exception.”
AI-powered chatbots created by Meta, Google and OpenAI aren’t “packaging up the information in a neutral way,” but are instead passing data through a “set of filters and values that are built into the system,” Mulligan said. Grok’s breakdown offers a window into how easily any of these systems can be altered to fit the agenda of an individual or a group.
Representatives of xAI, Google and OpenAI didn’t respond to requests for comment. Meta declined to comment.
Different from fine-tuning problems
The unauthorized alteration of Grok, xAI said in its statement, violated “internal policies and core values.” The company said it would take steps to prevent similar disasters and would publish the app’s system prompts to “strengthen your trust in Grok as a truth-seeking AI.”
It’s not the first AI mistake to go viral online. A decade ago, Google’s Photos app mislabeled African Americans as gorillas. Last year, Google temporarily paused its Gemini AI image generation feature after admitting it was offering “inaccuracies” in historical images. And OpenAI’s DALL-E image generator was accused by some users of showing signs of bias in 2022, leading the company to announce that it was implementing a new technique so images “accurately reflect the diversity of the world’s population.”
In 2023, 58% of AI decision-makers at companies in Australia, the U.K. and the U.S. expressed concern about the risk of hallucinations in generative AI deployments, according to Forrester. The survey, conducted that September, included 258 respondents.

Experts told CNBC that the Grok incident is reminiscent of China’s DeepSeek, which became an overnight sensation in the U.S. earlier this year due to the quality of its new model, which was reportedly built at a fraction of the cost of its U.S. rivals.
Critics say DeepSeek censors topics deemed sensitive by the Chinese government. Like China with DeepSeek, they say, Musk appears to be influencing results based on his political views.
When xAI debuted Grok in November 2023, Musk said it was meant to have “a bit of wit,” “a rebellious streak” and to answer the “spicy questions” that competitors might dodge. In February, xAI blamed an engineer for changes that suppressed Grok’s responses to user questions about misinformation, keeping Musk’s and Trump’s names out of the replies.
But Grok’s recent obsession with “white genocide” in South Africa is more extreme.
Petar Tsankov, CEO of the AI model auditing firm LatticeFlow AI, said Grok’s blowup is more surprising than what we saw with DeepSeek, because “you would kind of expect some kind of manipulation from China.”
Tsankov, whose company is based in Switzerland, said the industry needs more transparency so users can better understand how companies build and train their models and how that influences behavior. He noted EU efforts to require more tech companies to provide transparency as part of broader regulations in the region.
Without a public outcry, “we will never get to deploy safer models,” Tsankov said, and it will be “people who will pay the price” for putting their trust in the companies developing them.
Mike Gualtieri, an analyst at Forrester, said the Grok debacle is unlikely to slow user growth for chatbots or diminish the investments that companies are pouring into the technology. He said users have a certain level of acceptance for these kinds of occurrences.
“Whether it’s Grok, ChatGPT or Gemini, everyone expects it now,” Gualtieri said. “They’ve been told how the models hallucinate. There’s an expectation that this will happen.”
Olivia Gambelin, an AI ethicist and author of the book Responsible AI, published last year, said that while this type of activity from Grok may not be surprising, it underscores a fundamental flaw in AI models.
Gambelin said it “shows it’s possible, at least with Grok models, to adjust these general purpose foundational models at will.”
– CNBC’s Lora Kolodny and Salvador Rodríguez contributed to this report.
WATCH: Elon Musk’s xAI chatbot Grok brings up South African “white genocide” claims.
