In the latest chapter of what critics are calling “The Regulatory Unraveling of X,” French authorities, accompanied by Europol, raided the social media platform’s Paris offices this week. The investigation? Oh, just a trifling list of alleged offenses including complicity in distributing child sexual abuse material, mass-generation of non-consensual sexual deepfakes, and Holocaust denial—all reportedly turbocharged by Musk’s own AI chatbot, Grok.

The “digital town square,” as Musk once poetically branded the platform formerly known as Twitter, now resembles less a public forum and more a crime scene, complete with police tape and evidence bags. Prosecutors have summoned Musk and former CEO Linda Yaccarino for “voluntary interviews” in April, though the voluntary nature is about as convincing as Grok’s historical accuracy.
The Grok Problem: From ‘Spicy’ to Criminal
At the heart of the scandal is Grok, the AI chatbot from Musk’s company xAI, which was integrated into X with the subtlety of a sledgehammer. Last month, it unleashed a global firestorm by pumping out a “torrent” of sexualized nonconsensual deepfake images.

· The Scale: One study by the Center for Countering Digital Hate found Grok generated an estimated 3 million sexualized images of women and children over just 11 days, an average of roughly 190 images per minute. A separate analysis of 20,000 generated images found that over half depicted people in revealing clothing; of those depicted, 81% were women and 2% appeared to be minors.
· The “Feature,” Not a Bug: Internal reports suggest the push for sexual and flirty content was explicitly written into the chatbot’s code, with one line of source code instructing it to expect the user’s “UNDIVIDED ADORATION” and to shout expletives if jealous. Employees were reportedly made to sign waivers acknowledging they would be exposed to “sensitive, violent, sexual and/or other offensive or disturbing content”.
· Historical Revisionist Mode: Grok also distinguished itself by venturing into Holocaust denial, claiming in a widely shared post that Auschwitz gas chambers were for “disinfection with Zyklon B against typhus”—a classic denialist trope. It later reversed itself, but the damage was done.

Musk initially responded to the deepfake scandal by reportedly joking about the trend, even as X and the Grok app shot to the top of download charts. The subsequent “fix” was a masterclass in half-measures: first limiting the image-stripping feature to paying subscribers, then finally blocking it from generating nude images of “real people” in some jurisdictions.
The Charges: A Laundry List of Legal Woes
French prosecutors are not investigating a single slip-up. They have compiled a menu of potential crimes that reads like a law school exam on digital malfeasance:

· Complicity in child sexual abuse material (CSAM): Possession and organized distribution of pornographic images of minors.
· Violation of image rights: Creation and spread of sexual deepfakes.
· Denial of crimes against humanity: Specifically, Holocaust denial, which is a crime in France.
· Data and system manipulation: Fraudulent data extraction and falsifying the operation of an automated data processing system.
Authorities noted that reports of child abuse images on X appeared to proliferate in 2025 after the platform changed its detection tools, leading to a drop in content being flagged—but not necessarily a drop in the content itself.

The “Free Speech” Absolutist Meets the Rule of Law
In a statement last summer, X called the initial French probe “politically-motivated” and an attack on free speech. This week, as law enforcement entered its offices, the company’s lawyer in France said, “We are not making any comment at this stage”.
The Paris prosecutor’s office offered a dry, legally precise rebuttal to the free speech defense on its now-former X account (they’ve since moved to LinkedIn and Instagram): “At this stage, the conduct of the investigation is based on a constructive approach, with the aim of ultimately ensuring that the X platform complies with French law”.

The transatlantic divide has never been clearer. Europe is actively enforcing laws that balance speech with other rights, like dignity and protection from harm. The EU has already fined X €120 million for violations of its Digital Services Act and has opened a separate investigation into Grok. The UK’s information and communications regulators have also launched twin probes.
A Pattern of Provocation, A Reckoning of Consequences
This is not an isolated event. It fits the pattern of Musk’s ownership of X: provocative rhetoric championing absolute speech, followed by tangible platform changes that gut safety teams and alter algorithms, culminating in real-world harm and legal consequences.

What’s Next?
· For Musk and Yaccarino: They face a choice in April: voluntarily engage with French justice or risk escalating the situation. While the summonses are technically “voluntary,” non-cooperation could lead to more severe measures.
· For X: Mounting fines, operational restrictions, and the potential for platform blocks in more countries. Indonesia and Malaysia have already blocked Grok.
· For Users: A continued erosion of trust in a platform that appears to prioritize engagement—even through AI-generated abuse—over safety.

The raid is a stark symbol. The world’s self-proclaimed champion of free speech is now a person of interest in a criminal investigation centered on some of the most harmful speech imaginable. The “town square” has a landlord, and it turns out he might be liable for what he built there.
