Govt directs platforms to label AI content, deploy checks on misuse
The government on Tuesday directed social media platforms to put in place systems to identify and regulate artificial intelligence-generated content. The order required platforms to use automated tools to prevent material that is illegal, sexually exploitative or misleading.
In a notification issued by the Ministry of Electronics and Information Technology, the government asked platforms to ensure that AI-generated material was clearly labelled and carried identifiers indicating that it was synthetically created.
The amendments were issued as the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, and are set to take effect on February 20, 2026.
Definition of synthetically generated information
The amendments introduced a formal definition of “synthetically generated information”. The notification stated that the term referred to audio, visual or audio-visual content created or altered using computer tools in a manner that makes it appear real.
“‘Synthetically generated information’ means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true,” the notification stated.
The rules clarified that routine editing, formatting, enhancement, translation or accessibility improvements would not be treated as synthetic content if they did not change the meaning or context of the original material.
Automated checks and due diligence
The amendments required intermediaries that enable the creation or sharing of such content to deploy “reasonable and appropriate technical measures, including automated tools or other suitable mechanisms” to prevent unlawful material from being generated or shared.
The notification stated that prohibited material included content that violated any law, contained child sexual abuse material, non-consensual imagery, false documents or content that deceptively misrepresented a person or event.
Platforms were required to take prompt action once they became aware of violations. This could include removing content, disabling access, suspending user accounts, or sharing user details with authorities where required under law.
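For illustration, the sketch below shows one shape such an automated pre-publication check could take: content is scored by a classifier and blocked, queued for review, or published. The classifier, the category names, the thresholds and the actions are all assumptions made for this sketch; the rules themselves do not prescribe any particular tool or architecture.

```python
# Illustrative sketch of an automated pre-publication check. The classifier,
# category names, thresholds and actions are assumptions, not anything the
# rules prescribe.
from dataclasses import dataclass

PROHIBITED = ("csam", "non_consensual_imagery", "false_document", "impersonation")
BLOCK_AT = 0.9    # assumed confidence above which an upload is blocked outright
REVIEW_AT = 0.5   # assumed confidence above which an upload is held for human review

@dataclass
class Verdict:
    action: str                  # "publish", "review" or "block"
    category: str | None = None
    confidence: float = 0.0

def score_content(media: bytes) -> dict[str, float]:
    """Stand-in for a real moderation model returning per-category scores."""
    return {c: 0.0 for c in PROHIBITED}  # a real system would run a trained model here

def pre_publication_check(media: bytes) -> Verdict:
    scores = score_content(media)
    category, confidence = max(scores.items(), key=lambda kv: kv[1])
    if category in PROHIBITED and confidence >= BLOCK_AT:
        return Verdict("block", category, confidence)   # prevent sharing
    if confidence >= REVIEW_AT:
        return Verdict("review", category, confidence)  # queue for human review
    return Verdict("publish")
```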
Labelling and metadata requirements
The rules required synthetic content that was not illegal to be clearly labelled. The government asked platforms to ensure that such content carried visible labels and embedded identifiers.
The notification stated that such material must be “prominently labelled” and embedded with permanent metadata or other identifiers, including a unique identifier, to indicate that it had been generated using computer tools.
It also specified that intermediaries must not allow the removal, alteration or suppression of these labels or metadata once applied.
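To make the metadata requirement concrete, here is a minimal sketch of embedding a synthetic-content tag and unique identifier into an image using Pillow's PNG text chunks. The field names ("ai-generated", "content-id") are invented for illustration; the rules do not name a metadata format, and a production system would more likely use a tamper-evident standard such as C2PA content credentials, since plain text chunks can be stripped or edited.

```python
# Minimal sketch: embedding a synthetic-content identifier in PNG metadata
# with Pillow. Field names are invented for illustration; the rules specify
# no format, and plain text chunks are not tamper-proof.
import uuid
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_png(src_path: str, dst_path: str) -> str:
    """Re-save an image with a synthetic-content tag and a unique identifier."""
    content_id = str(uuid.uuid4())           # the "unique identifier"
    meta = PngInfo()
    meta.add_text("ai-generated", "true")    # declares synthetic origin
    meta.add_text("content-id", content_id)
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)
    return content_id
```

Reading the tags back with `Image.open(dst_path).text` would let a platform check for the identifier before serving the file; the visible on-screen label the rules also demand would be applied separately at render time.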
User declarations and platform responsibility
Significant social media intermediaries were required to obtain a declaration from users stating whether uploaded content was synthetically generated. Platforms were also asked to verify such declarations using technical measures before allowing publication.
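The rules leave the verification mechanism open. One plausible shape, sketched below under that assumption, is to cross-check the user's declaration against embedded metadata (as above) and a synthetic-media detector, and escalate mismatches; the detector call, the threshold and the escalation policy are hypothetical.

```python
# Hypothetical sketch of declaration verification. detect_synthetic() stands
# in for any real synthetic-media detector; the threshold and escalation
# policy are illustrative assumptions.

def detect_synthetic(media: bytes) -> float:
    """Stand-in detector returning an assumed probability that media is synthetic."""
    return 0.0  # a real system would run a trained detector here

def verify_declaration(media: bytes, user_says_synthetic: bool,
                       has_synthetic_metadata: bool,
                       threshold: float = 0.8) -> str:
    """Cross-check a user's declaration before allowing publication."""
    looks_synthetic = has_synthetic_metadata or detect_synthetic(media) >= threshold
    if user_says_synthetic:
        return "publish_with_label"   # label regardless of detector output
    if looks_synthetic:
        return "escalate"             # declaration contradicts the evidence
    return "publish"
```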
If an intermediary knowingly allowed such content to be published in violation of the rules, it could be treated as failing to exercise due diligence under the law.
The amendments also required platforms to inform users at least once every three months about rules, penalties and consequences for violations.
Legal scope and enforcement
The notification further stated that references to “information” used in unlawful acts would include synthetically generated information. It also stated that removing such content in compliance with the rules would not count against the safe-harbour protections available to intermediaries.
Users responsible for creating or sharing unlawful synthetic content could face penalties under the Information Technology Act and other applicable laws.