Social media should take responsibility for the content they publish: Ashwini Vaishnaw


Social media firms should take responsibility for the content they publish, and a standing committee has already recommended a tough law to fix accountability of platforms, Union Minister Ashwini Vaishnaw said on Friday.

Earlier this week, the Centre warned online platforms—mainly social media firms—of legal consequences if they fail to act on obscene, vulgar, pornographic, paedophilic, and other forms of unlawful content.

“Social media should be responsible for the content they publish. Intervention is required,” Vaishnaw said on the sidelines of a Ministry of Electronics and IT (MeitY) event. He was replying to a question on the AI app Grok generating indecent and vulgar images of women.

Rajya Sabha Member Priyanka Chaturvedi has also written to the minister seeking urgent intervention on increasing incidents of AI apps being misused to create vulgar photos of women and post them on social media.

“The standing committee has recommended that there is a need to come up with a tough law to make social media accountable for the content they publish,” Vaishnaw said.

The Parliamentary Standing Committee on the Ministry of Information and Broadcasting has recommended that the government make social media and intermediary platforms more accountable with respect to peddling fake content and news.

The committee has endorsed stakeholder suggestions such as enforcing transparency in algorithms, introducing stricter fines and penalties for repeat offenders, establishing an independent regulatory body, and using technological tools like AI to curb the spread of misinformation.

On December 29, MeitY asked social media firms to immediately review their compliance frameworks and act against obscene and unlawful content on their platforms, failing which they may face prosecution under the law of the land.

The advisory followed MeitY noticing that social media platforms have not been strictly acting on obscene, vulgar, inappropriate, and unlawful content.

Public policy firm IGAP Partner Dhruv Garg said MeitY’s advisory to intermediaries does not establish any fresh legal obligations; rather, it reiterates that safe harbour protection hinges entirely on strict adherence to the due diligence requirements laid out in the IT Rules, 2021.

“Significant social media intermediaries are subject to stricter due diligence benchmarks. They must also deploy automated content moderation tools. The advisory signals that in light of widespread obscene content circulation, reactive content takedowns are inadequate, and platforms must actively fulfil their legal responsibilities or they may face criminal prosecution,” he said.

Sanjeev Kumar, Senior Partner at Luthra and Luthra Law Offices India, said MeitY’s advisory unequivocally states that non-compliance with the IT Act and the IT Rules, 2021 may result in legal consequences, including prosecution under the IT Act, the Bharatiya Nyaya Sanhita, 2023 (BNS), and other applicable criminal laws, and that such consequences may extend to intermediaries, platforms, and their users.

“This operates alongside the potential loss of safe-harbour protection under Section 79, exposing non-compliant entities to direct liability. The cumulative impact of these provisions heightens legal, financial, and reputational risk, making adherence not only a statutory duty but a business imperative,” he said.

India has been tightening oversight of digital platforms as social media use has expanded rapidly across the country, bringing concerns around misinformation, harmful content, online abuse and deepfake imagery into sharper focus. With hundreds of millions of users now active on global platforms such as Meta, Google and X, policymakers have increasingly argued that platform scale and algorithmic amplification warrant higher standards of accountability than those applied to traditional intermediaries.

The debate also reflects a broader shift in India’s approach to internet regulation, moving from a largely self-regulatory framework toward enforceable obligations backed by penalties.

As artificial intelligence tools make the creation and spread of synthetic and harmful content easier, the government and parliamentary panels have signalled that existing safeguards may need to be strengthened to protect users—particularly women and children—while balancing concerns around free expression and innovation in the digital economy.

(With inputs from PTI)
