X and xAI have a lot of explaining to do. If the IT Ministry is to be believed, the explanations the Musk-owned company has provided so far are in the same boat as its AI’s safeguards: ineffective and inadequate, if not completely useless. India and the European Union are among the first to formally demand that X and xAI explain the widespread generation of CSAM and non-consensual explicit content enabled by the company’s proprietary, “truth-seeking” generative AI, Grok. Regulators elsewhere, including Ofcom in the UK, have also requested information about the phenomenon.

X, formerly the micro-blogging site Twitter, has found itself the subject of a disgusting trend. Grok, through its image editing and generation capabilities, is being used to create non-consensual explicit content depicting men, women, and children. What is especially concerning is that these images are produced not through clever maneuvering around the guardrails xAI has put in place, but through simple prompts, sometimes not even complete sentences.

While users, critics, and even governments have raised concerns about these developments, xAI and X have categorically failed to address the issue, despite assuring concerned parties that they are actively working on it. Several tools within the company’s grasp are yet to be deployed to combat the problem: many online have asked why image generation has not been disabled outright, given that the promised “quick” fixes have not been forthcoming. xAI and X have made no such attempt.

Elon Musk has held firm that those who use the AI to generate illegal content must be held accountable. He tweeted:

Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.

Despite Musk’s statement and assurances from X’s safety channels that the company is taking action, i.e., removing illegal content, permanently suspending accounts, and working with law enforcement, the reality on the ground appears to be chaotic. As of early January, regulators and watchdogs around the world have documented a continued stream of sexually explicit deepfakes, including images of minors that qualify as child sexual abuse material under the laws of multiple countries. In India, the Ministry of Electronics and Information Technology has been especially stern. After issuing a formal notice in early January calling Grok’s misuse a “serious failure of platform-level safeguards” and a violation of the dignity and privacy of women and children, the ministry gave X a strict deadline to submit a detailed action-taken report and remove all obscene or unlawful content generated by the AI. X has since filed a response, but the ministry has judged it detailed yet inadequate and has asked for further specifics, including what technical measures have been put in place and how enforcement is being carried out.

The Indian government’s directive goes beyond admonition. Officials have made clear that non-compliance with the Information Technology Act and Rules could cost X its safe harbor protections and expose the company to legal liability. The ministry is now examining the company’s submission a second time and is seeking additional details on what proactive steps have been taken to prevent the spread of harmful AI-generated content in the future.

Regulators elsewhere have moved in parallel. In France, several ministers reported Grok’s sexually explicit output to prosecutors, labeling it “manifestly illegal” and requesting its immediate removal under French law and the European Union’s Digital Services Act.

Across the EU more broadly, the European Commission has publicly called the generation of sexually explicit imagery involving children “illegal” and “appalling,” and stressed that such output “has no place in Europe.” National regulators in several member states have opened inquiries, demanded takedowns, and threatened enforcement action, including fines, under the bloc’s digital safety frameworks.

The United Kingdom has joined the chorus. New online safety rules that require tech companies to block unsolicited nude images have come into force, and the UK’s media regulator Ofcom has contacted X regarding compliance. A leading UK child safety watchdog, the Internet Watch Foundation, reported that criminals were using Grok to generate sexualised imagery of minors on dark web forums, a finding that has heightened parliamentary scrutiny and could trigger enforcement under British law. Some official UK bodies have even stopped using X in protest. 

Despite this widening crackdown, the trend remains active online. Users continue to post prompts that Grok complies with, generating non-consensual obscene content that can then be further edited and shared publicly. Even with tighter filters and new guardrails reportedly being introduced, the sheer volume of harmful imagery and the persistence of demand mean that enforcement lags well behind production.

Part of the difficulty seems to lie in the very architecture and ethos of Grok’s design. Unlike competitors whose safety filters are deeply baked into model training and output controls, Grok’s safeguards have been criticized as superficial and reactive, with policy updates that sometimes seem to shift responsibility to users rather than prevent harmful generation at the source. Some industry experts also point to broader staffing and moderation challenges at X as exacerbating the issue.
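To make that distinction concrete, here is a minimal, purely illustrative Python sketch of the kind of reactive, bolt-on filter critics describe. Every name, function, and term list in it is hypothetical and assumed for illustration; it is not drawn from xAI’s actual systems.

```python
# Hypothetical sketch of a "reactive" safeguard: a safety check bolted on
# after generation rather than baked into model training. All names here
# are invented for illustration; this is not xAI's pipeline.

FLAGGED_TERMS = ("nude", "undress", "explicit")  # stand-in for a real classifier


def risk_score(prompt: str) -> float:
    """Hypothetical scorer: 1.0 if the prompt looks unsafe, else 0.0."""
    return 1.0 if any(term in prompt.lower() for term in FLAGGED_TERMS) else 0.0


def moderate_output(prompt: str, image: bytes, threshold: float = 0.5) -> bytes | None:
    """Gate an *already generated* image on a post-hoc risk check.

    Because the check runs after generation, a missed or reworded prompt
    means the harmful content has already been produced; the filter can
    only catch it, not remove the model's capability to make it.
    """
    if risk_score(prompt) >= threshold:
        return None  # blocked after the fact
    return image
```

A keyword gate like this is trivially defeated by rephrasing a prompt, which is precisely why critics argue that safety has to be built into model training and output controls rather than layered on afterward.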

The controversy has also reignited broader debates over how advanced generative AI should be governed. As more powerful models become capable of producing lifelike visual content with ease, the challenge of distinguishing between creative freedom and harmful misuse becomes a regulatory fault line. Critics argue that without proactive, built-in protections and transparent auditing mechanisms, tools like Grok will continue to be weaponised in ways that existing legal frameworks struggle to keep pace with.

What remains clear is that the Grok AI saga is far from over. With ongoing investigations across multiple jurisdictions, mounting legal pressure, and communities calling for stricter enforcement, the xAI project, once pitched as a bold frontier of AI creativity, now stands at the center of inquiries that could tarnish its image irrecoverably. If recent trends are any indication, however, all of this will fade from view as soon as another scandal breaks. The people affected by these generations may never receive the justice they deserve: the images can be downloaded and kept by the very people who made them, a measure of the long-term social, mental, and emotional harm left in the fallout of a trend that xAI has repeatedly failed to curb.
