Elon Musk’s AI chatbot Grok appeared to experience a bug on Wednesday that caused it to reply to dozens of posts on X with information about “white genocide” in South Africa, even when the user didn’t ask anything about the subject.
The strange responses stem from the X account for Grok, which replies to users with AI-generated posts whenever a user tags @grok. When asked about unrelated topics, Grok repeatedly told users about a “white genocide,” as well as the anti-apartheid chant “kill the boer.”
Grok’s odd, unrelated replies are a reminder that AI chatbots are still a nascent technology and may not always be a reliable source of information. In recent months, AI model providers have struggled to moderate their chatbots’ responses, which has led to odd behaviors.
OpenAI was recently forced to roll back an update to ChatGPT that caused the AI chatbot to be overly sycophantic. Meanwhile, Google has faced problems with its Gemini chatbot refusing to answer questions about political topics, or giving misinformation about them.
In one example of Grok’s misbehavior, a user asked Grok about a professional baseball player’s salary, and Grok responded that “The claim of ‘white genocide’ in South Africa is highly debated.”
Several users posted on X about their confusing, odd interactions with the Grok AI chatbot on Wednesday.
It’s unclear at this time what caused Grok’s odd answers, but xAI’s chatbots have been manipulated in the past.
In February, Grok 3 appeared to have briefly censored unflattering mentions of Elon Musk and Donald Trump. At the time, xAI engineering lead Igor Babuschkin seemed to confirm that Grok was briefly instructed to do so, though the company quickly reversed the instruction after the backlash drew greater attention.
Whatever the cause of the bug may have been, Grok appears to be responding more normally to users now. A spokesperson for xAI did not immediately respond to TechCrunch’s request for comment.
Keep reading the article on TechCrunch
Elon Musk’s AI is obsessed with the South African conspiracy theory.
Elon Musk’s AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.
xAI isn’t exactly known for its strong commitments to AI safety as it’s commonly understood. A recent report found that the company’s AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing without much restraint to speak of.
Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company’s approach to AI safety. The eight-page document laid out xAI’s safety priorities and philosophy, including the company’s benchmarking protocols and AI model deployment considerations.
As The Midas Project noted in a blog post on Tuesday, however, the draft only applied to unspecified future AI models “not currently in development.” Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.
In the draft, xAI said that it planned to release a revised version of its safety policy “within three months” — by May 10. The deadline came and went without acknowledgement on xAI’s official channels.
Despite Musk’s frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its “very weak” risk management practices.
That’s not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and have been slow to publish model safety reports (or skipped publishing reports altogether). Some experts have expressed concern that the seeming deprioritization of safety efforts is coming at a time when AI is more capable — and thus potentially dangerous — than ever.
Keep reading the article on TechCrunch