MeitY, Grok, and the question of responsible AI


The tension between India’s pursuit of a safe and trusted internet and the impact of artificial intelligence (AI) came into sharper focus in early 2026, following a confrontation between the Ministry of Electronics and Information Technology (MeitY) and X Corp (formerly Twitter) over Grok, an AI chatbot integrated within the X platform.

The episode was reportedly triggered by allegations that the tool had been used to generate and circulate non-consensual sexual content and other forms of synthetic media involving vulnerable groups, prompting renewed scrutiny of the safe-harbour protections that have traditionally limited platform liability for third-party content.

On January 2, MeitY issued a notice to an India-based official of X, seeking a report on remedial action within 72 hours, citing misuse of Grok to generate obscene and non-consensual images.

The ministry said the platform's own safeguards had failed and directed it to remove the unlawful material.

Why this matters

The episode raises three questions. First, who is legally responsible when a generative AI system produces harmful material: the person who prompted the system, or the company that provides and deploys it? Second, should platforms that integrate generative models into public feeds be treated the same as ordinary intermediaries that merely host user content? Third, how far can or should governments force companies to build safety into the architecture of their models, rather than relying on after-the-fact moderation?

Content that drew scrutiny

Reports say users were able to prompt Grok to generate sexualised images of real people without their consent, and that some outputs depicted minors in minimal clothing. These reports prompted complaints from lawmakers and regulators and drew attention in other jurisdictions too.

How xAI and Elon Musk responded

xAI acknowledged lapses in its safeguards and said it was improving filters to prevent such outputs. At the same time, Elon Musk and others associated with the platform argued that tools are neutral and that users who deliberately produce illegal material should face consequences.

Replying to a post, Musk wrote, “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” 

The legal and regulatory backdrop

India’s liability regime for online intermediaries is shaped by the Information Technology Act, 2000, and the IT Rules, 2021. Those rules require significant social media intermediaries to follow proactive due-diligence duties and allow the state to issue takedown or blocking directions. Section 79 of the IT Act has traditionally provided safe-harbour protection for intermediaries that follow specified procedures. 

Relevant recent litigation

The current dispute sits on top of longstanding friction between India and X. After the launch of the Sahyog portal, through which intermediaries receive government takedown requests, X challenged the portal in the Karnataka High Court and lost; the 2025 judgement rejected its argument that the portal amounted to extra-legal censorship.

The court's decision affirmed the government's ability to use administrative channels for urgent enforcement under the IT Rules, which strengthens its hand in situations such as the Grok controversy.

Wider policy changes

This episode coincides with broader Indian policy work on AI and copyright. The Department for Promotion of Industry and Internal Trade (DPIIT) published a working paper in late 2025 that explicitly considered a mandatory blanket licence or a royalties collective for training AI systems on copyrighted content.

That proposal, if implemented, would require companies to disclose data categories used for training and potentially pay into a central pool when they commercialise generative systems. Such rules would change the economics of building models in, and for, India. 

How industry practice differs

Large model providers often speak of safety-by-design. That means building filtering and red-teaming into the development cycle, running adversarial tests to discover ways the model can be coaxed into harmful outputs, and maintaining rapid-response trust-and-safety teams. Leading providers of AI models highlight a range of technical and organisational mitigations, though no system is perfect.

The Indian government’s position is that platforms must go further when a model is integrated with a public social feed, because the risk of broadcast and viral harm is higher. 

What next?

The immediate test is the action-taken report due to MeitY within 72 hours (expiring by Monday, January 5). Longer term, observers will watch whether modules of India's proposed AI governance stack are turned into binding law, whether the DPIIT's proposals become a statutory copyright regime, and whether any precedent emerges on revoking safe harbour in practice.


Edited by Megha Reddy


