Elon Musk’s X platform has faced mounting criticism over its inability to prevent the spread of explicit deepfakes through Grok, its AI-powered image generation tool. While the company claims to have implemented safeguards against non-consensual content, independent testing reveals that these measures are easily circumvented with simple adjustments to user prompts.

Grok’s supposed restrictions on generating images of women in sexual poses or swimwear appear superficial. Direct requests for such content are flagged and blurred, but subtle rephrasing, such as asking for ‘increased chest visibility’ or ‘crop top and low-rise shorts’, often yields the desired output without triggering the system’s filters. This inconsistency suggests that Grok’s protections rest on surface-level keyword matching rather than on any assessment of a request’s underlying intent.
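
If the filter really is little more than string matching, the failure mode is easy to reproduce. The following Python sketch is purely illustrative: the blocklist terms, function name, and structure are hypothetical and not drawn from xAI’s actual moderation code; it simply shows why this class of filter fails against rephrasing.

```python
# Hypothetical blocklist of the kind the independent testing suggests.
# These terms are invented for illustration, not taken from Grok.
BLOCKED_TERMS = {"nude", "topless", "sexual pose", "lingerie", "bikini"}

def prompt_is_blocked(prompt: str) -> bool:
    """Flag a prompt only if it contains an exact blocklisted phrase."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# A direct request trips the filter...
print(prompt_is_blocked("woman in a bikini, sexual pose"))  # True

# ...but a euphemistic rephrasing passes, because no blocklisted
# string appears, even though the intent is identical.
print(prompt_is_blocked(
    "woman, increased chest visibility, crop top and low-rise shorts"
))  # False
```

An intent-aware moderation layer, which classifies the whole request (typically with a separate safety model) rather than scanning for strings, would not be defeated by the second prompt; the behavior testers report is consistent with the naive approach above.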

The vulnerability is not limited to paying subscribers: free accounts can reach the same image tools. Age verification, where it exists at all, amounts to a single dropdown selection with no documentary proof, leaving the door open to underage use. The mobile apps lack even this check, further complicating oversight.

Musk’s public defense, which claims ‘literally zero’ instances of child nudity and attributes failures to ‘adversarial hacking’, contradicts findings from regulators and child-safety watchdogs. The Internet Watch Foundation, for instance, has reportedly uncovered images of girls aged 11 to 13 that appear to have been generated with Grok, raising alarms about the tool’s real-world impact.

Global backlash is intensifying. Countries like Malaysia and Indonesia have temporarily blocked access to Grok, while UK legislators are pushing for laws that could criminalize its use in creating explicit content. Internal guidelines, leaked through GitHub, reveal a troubling approach: Grok is instructed to ‘assume good intentions’ even when users request images of young women—a stark contrast to competitors like Google or OpenAI, which prioritize proactive filtering.

The system’s reliance on user compliance rather than built-in safeguards reflects a broader industry challenge. If platforms continue to shift responsibility onto users instead of implementing stronger default protections, the risk of misuse will only grow. The question remains: how long before regulators force a harder line?