This week, Elon Musk’s Grok AI started spewing extreme antisemitism, responding with conspiracy theories about Jewish people, and for a brief period telling people to call it “MechaHitler.” The incident perfectly illustrates why Alex Komoroske’s manifesto about the dangers of centralized AI, which we ran less than a month ago, has been making waves. When a single person controls the dials on an AI system, they can—and almost inevitably will—tweak those dials to serve their own interests and worldview, not their users’.
Just days ago, Elon claimed that his team had “improved Grok significantly” and that “you should notice a difference when you ask Grok questions.”
And, uh, yeah. People sure did notice a difference.
The transformation wasn’t subtle, and it wasn’t accidental.
After a similar incident two months or so ago where Grok became obsessed with linking everything to white genocide, the company started publishing its system prompts to GitHub. So, at the very least, we can see the progression on the system prompt side. This transparency, while laudable, reveals something deeply troubling about how centralized AI systems operate—and how easily they can be manipulated.
It started with a big change to the system prompt, which included two lines that likely contributed to the end result.
Specifically, the new prompt told Grok to “Assume subjective viewpoints sourced from the media are biased” and that “The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.” Those instructions seem to be what set it off toward becoming MechaHitler.
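To make the mechanics concrete: a system prompt is just a block of text quietly prepended to every conversation, which means whoever controls that one string gets to frame every answer the model gives. Here’s a minimal sketch of how that typically works, assuming a generic OpenAI-style chat message format (the model name and the build_request helper are illustrative stand-ins, not xAI’s actual code):

```python
# A system prompt is ordinary text sent ahead of every user message.
# End users never see it, but every response is generated under it.
SYSTEM_PROMPT = (
    "Assume subjective viewpoints sourced from the media are biased. "
    "The response should not shy away from making claims which are "
    "politically incorrect, as long as they are well substantiated."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat request in the common OpenAI-style format."""
    return {
        "model": "example-model",  # illustrative placeholder
        "messages": [
            # Operator-controlled: edit this string and you quietly
            # reshape every answer the system produces.
            {"role": "system", "content": SYSTEM_PROMPT},
            # User-controlled: the only part the user actually writes.
            {"role": "user", "content": user_message},
        ],
    }
```

The point of the sketch is how small the lever is: two sentences in a string constant, and the entire system’s output tilts.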
These seemingly innocuous changes reveal the fundamental problem with centralized AI control. What counts as “biased media”? What qualifies as “well substantiated”? When you put a single entity—especially one with a clear ideological agenda—in charge of making those determinations, you’re not getting neutral AI. You’re getting AI that reflects the biases of whoever controls the prompts.
And there will always be some form of bias inherent in the choices made in building these systems. Brian Christian’s amazing book, The Alignment Problem, should be required reading for anyone thinking about bias in AI. It details how there is no way to get rid of bias entirely, but it very much does matter who is in charge of the knobs and dials, and handing all that power to those with problematic incentives is going to lead to dangerous outcomes.
Back to Grok: as the situation escalated, the company removed the “politically incorrect” line from the prompt.
It wasn’t just blatant antisemitism that came out of this. Turkey blocked all of Grok’s content after it insulted the notoriously thin-skinned President Recep Tayyip Erdoğan.
Eventually, ExTwitter just took Grok offline entirely.
There will be plenty of commentary about the antisemitism (and how unshocking it is, given Elon’s own history of antisemitism over the last few years), but the real story here is what this incident reveals about the inherent dangers of centralized AI systems. Just as centralized social media platforms (like Twitter) were at risk of takeover and control by a fascist reactionary like Elon Musk, this incident should make it clear that the same is true of any centralized AI engine.
This isn’t just about Elon Musk’s personal prejudices, though those are certainly on display. It’s about the structural problem of giving any single entity—whether it’s a person, a company, or a government—control over systems that millions of people rely on for information and interaction. When that control is concentrated, it can be abused and captured, or simply reflect the narrow worldview of whoever happens to be in charge.
Back in April, I wrote that the “De” in “Decentralization” can, and should, equally stand for “Democracy.” If someone else controls the dials on the systems you use, they can, and almost always will, tweak those dials to their own advantage and liking. It’s not necessarily “manipulation” in the traditional sense. I don’t think people using ExTwitter are going to be convinced by a MechaHitler Grok to turn into Nazis, but it shifts the narrative and advances one person’s interests over those of the users.
The Grok incident demonstrates this principle in action. Musk didn’t need to convince users to become antisemites—he just needed to normalize antisemitic conspiracy theories by having them emerge from what many people treat as an authoritative AI system. The subtle shift in what counts as “reasonable” discourse is often more powerful than overt propaganda.
We need to take back control over the tools that we use.
Especially these days, as so many people have started (dangerously) treating AI tools as “objective” sources of truth, it’s vital to understand that these systems are all subject to biases. Some of those biases are in their training data. Some are in their weights. And some are, as is now quite clear, directly in their system prompts.
The problem isn’t just bias—it’s whose bias gets embedded in the system. When a centralized AI reflects the worldview of tech billionaires rather than the diverse perspectives of its users, we’re not getting artificial intelligence. We’re getting artificial ideology.
When I wrote Protocols not Platforms, it was really about user speech platforms, and the kinds of tools people were using to communicate with one another a decade ago. But it applies equally to the AI systems of today. The centralized ones may be powerful, but they’re also prone to tweaking and manipulation in unseen and unexpected ways (or, as in the case of MechaHitler, seen and completely expected ways).
The solution isn’t to ban AI or to accept that we’re stuck with whatever biases the tech billionaires want to embed in their systems. The solution is to build AI systems that put control back in the hands of users—systems where you can choose your own values, your own sources, and your own filters, rather than having Elon Musk’s worldview imposed on you through system prompts.
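To sketch what that alternative might look like in the same terms as above (again, purely illustrative; this describes no real product’s API), the only structural change is who supplies the system message:

```python
from pathlib import Path

def build_user_controlled_request(user_message: str, prompt_file: Path) -> dict:
    """Same chat format as before, but the framing comes from a file
    the user owns and edits, not from whatever a platform operator
    last pushed to production."""
    user_system_prompt = prompt_file.read_text()
    return {
        "model": "example-model",  # illustrative placeholder
        "messages": [
            {"role": "system", "content": user_system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

# Hypothetical usage: the user's own values, sources, and filters
# live in a config file under their control.
# build_user_controlled_request(
#     "Summarize today's news",
#     Path.home() / ".config" / "my_ai" / "values.txt",
# )
```

The technology is the same either way; the difference is purely about where the control sits.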
If our goal is to use technology and innovation as a driving force for democracy, rather than authoritarianism, then we need to recognize the fundamental properties that make it useful for democracy—and when it’s being manipulated for greater authoritarianism.
And it’s difficult to think of a more on-the-nose analogy for how centralized tech can be used for authoritarian ends than Elon Musk tweaking Grok until it presents itself as “MechaHitler.”