The European Commission has initiated a **formal investigation** into X and its AI-powered chatbot Grok, accusing the platform of failing to **adequately address systemic risks** tied to AI-generated content—particularly **deepfake sexual imagery**, including material linked to child exploitation. This marks the latest escalation in regulatory scrutiny for X, which already faces a **€120 million fine** for prior violations under the Digital Services Act.

The probe stems from December 2025, when Grok’s image-generation capabilities were exploited to create **deepfake nude images** of real users by simply referencing their accounts. While X later restricted Grok’s functionality—banning edits of real people in revealing clothing—regulators question whether these measures were **proactive or reactive**, and whether the platform **prioritized user safety over feature deployment**. The Commission will now demand evidence of risk assessments, compliance with DSA obligations, and internal safeguards, with potential penalties if deficiencies are confirmed.

The investigation hinges on two key alleged failures:

  • Risk Assessment Oversight: The Commission alleges X did not conduct a **mandatory ad hoc risk assessment** for Grok’s integration into its platform before launch, despite its **high-impact potential** for illegal content and harm. Under DSA rules, high-risk AI tools require pre-deployment evaluations to mitigate dangers like **gender-based violence, mental health impacts, and child safety violations**.
  • Systemic Harm Ignored: The probe examines whether X treated **European citizens—especially women and children—as collateral damage** in its push to deploy AI features. The Commission’s tech sovereignty chief emphasized that **deepfake abuse** is not just a technical glitch but a **violent form of degradation** requiring urgent accountability.

This investigation **expands an existing DSA case** launched in 2023, which already scrutinized X’s handling of **illegal content, disinformation, and transparency**. The Commission’s findings could lead to **additional fines, enforcement actions, or structural reforms**, depending on whether X demonstrates meaningful compliance.


Broader Implications for AI Platforms

The probe sends a **clear signal to tech companies** that AI tools—particularly those with generative capabilities—must undergo **rigorous pre-deployment vetting** to prevent misuse. Unlike traditional moderation challenges, AI-generated content introduces **new legal and ethical dilemmas**, such as:

  • **Irreversible Harm:** Deepfakes can cause **permanent reputational and emotional damage**, even if deleted.
  • **Scalability Risks:** AI models can **amplify harmful outputs exponentially**, overwhelming manual moderation.
  • **Jurisdictional Gaps:** Cross-border enforcement remains difficult when platforms operate globally but face localized laws.

For users, the investigation raises critical questions: **How effectively can platforms police AI-generated content?** And **what protections exist** when algorithms outpace human oversight? The Commission’s actions suggest that **self-regulation alone is no longer sufficient**—especially when **vulnerable groups** bear the brunt of technological failures.

What’s Next?

The Commission will now **request documentation, conduct interviews with X’s safety teams, and inspect internal safeguards**. If violations are confirmed, remedies—such as **feature restrictions, transparency measures, or fines**—may be imposed. The outcome could set a precedent for how **AI-driven platforms** are held accountable under Europe’s digital laws.

This isn’t the first time X has faced EU penalties. In late 2025, the platform was fined **€120 million** for **deceptive design practices, lack of ad transparency, and restricted data access for researchers**. The Grok investigation adds another layer to X’s regulatory challenges, testing whether the company can **balance innovation with compliance** in an era of stricter oversight.