The Indian government has introduced stricter regulations to curb the misuse of artificial intelligence and deepfake content online. Under the amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, social media platforms must now label AI-generated content clearly and remove certain categories of harmful or unlawful posts within three hours.
The revised framework makes it compulsory for platforms to ensure that any content created or modified using AI tools is clearly and prominently identified. Users will also be required to declare whether the content they upload has been generated or altered using artificial intelligence.
According to the government, the updated rules are designed to check the growing misuse of AI technologies and deepfakes while ensuring faster action against misleading, harmful or objectionable content circulating online.
The amendments were officially notified on February 10 and will come into force from February 20. The changes formally bring what the government refers to as “synthetically generated information” (SGI) under India’s digital governance regime.
SGI includes AI-generated or AI-altered audio, video and visual material that appears real and may be difficult for users to distinguish from authentic content. The move follows rising concerns about deepfakes, impersonation, misinformation, online fraud, harassment and other unlawful activities involving synthetic media.
A key provision of the updated rules is mandatory labelling. Social media platforms and digital intermediaries that enable the creation or sharing of synthetic content must ensure such material is labelled clearly, prominently and unambiguously as AI-generated.
In addition, platforms are required to embed persistent metadata or technical provenance markers such as unique identifiers. These markers are intended to help trace AI-generated content back to its originating platform or system.
Importantly, intermediaries are barred from allowing these labels or metadata markers to be removed or tampered with, strengthening traceability and accountability.
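The rules do not prescribe a specific technical scheme for these markers. As a purely illustrative sketch, under assumed field names and a simple HMAC-based signing approach (not part of the notified rules), a tamper-evident SGI marker bound to the content could look like this:

```python
import hashlib
import hmac
import json

# Illustrative assumption: the platform holds a managed signing secret.
PLATFORM_KEY = b"example-signing-key"

def attach_sgi_marker(content: bytes, origin: str) -> dict:
    """Attach an SGI label plus an HMAC that binds the label to the
    content, so stripping or editing the marker becomes detectable."""
    metadata = {
        "sgi": True,                     # mandatory AI-generated label
        "origin": origin,                # originating platform or system
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(
        PLATFORM_KEY, payload, hashlib.sha256
    ).hexdigest()
    return metadata

def verify_sgi_marker(content: bytes, metadata: dict) -> bool:
    """Return True only if the marker is intact and matches the content."""
    claimed = dict(metadata)
    signature = claimed.pop("signature", "")
    if claimed.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

marker = attach_sgi_marker(b"generated-video-bytes", origin="example-ai-tool")
print(verify_sgi_marker(b"generated-video-bytes", marker))   # True
print(verify_sgi_marker(b"tampered-video-bytes", marker))    # False
```

Real deployments would more likely rely on an established provenance standard such as C2PA content credentials, but the principle is the same: the label travels with the content and any alteration of either invalidates the marker.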
To enforce compliance, platforms must obtain user declarations at the time of content upload, asking whether the material has been synthetically generated or altered using AI tools.
They are also expected to implement reasonable and proportionate technical safeguards, including automated systems, to verify the accuracy of such declarations. The rules make it clear that failure to exercise due diligence in labelling and verification could expose intermediaries to liability under the amended framework.
The government has also tightened deadlines for content moderation. In specific cases, social media platforms are now required to act on lawful government orders or user complaints within three hours, significantly reduced from the earlier 36-hour window.
Other timelines have also been shortened. Certain response periods have been reduced from 15 days to seven days, while others have been cut from 24 hours to 12 hours, based on the nature and severity of the violation.
The amendments also clarify that AI-generated material used for unlawful purposes will be treated the same as any other illegal content under Indian law.
Platforms must prevent their services from being used to create or distribute synthetic content involving child sexual abuse material, obscene or indecent content, impersonation, false electronic records, or content linked to weapons, explosives or other illegal activities.
While introducing stricter compliance requirements, the government has also reassured intermediaries regarding safe harbour protections. The notification specifies that platforms will not lose protection under Section 79 of the IT Act for removing or restricting access to synthetic content, including through automated tools, provided they adhere to the prescribed rules.
With these amendments, India has significantly strengthened its regulatory framework around AI-generated and deepfake content, placing greater responsibility on social media platforms to ensure transparency, traceability and faster action against harmful online material.
Q. What are the labelling and traceability requirements for AI-generated content?
Answer. Platforms must clearly label AI-generated or altered content, embed persistent metadata for traceability, and prevent tampering with these markers.
Q. How quickly must platforms act on takedown orders and complaints?
Answer. Social media platforms must act within three hours of government orders or user complaints, a sharp reduction from the earlier 36-hour window.
Q. What obligations do users and platforms have at the time of upload?
Answer. Users must declare if their uploads are AI-generated, while platforms must verify these declarations and face liability if they fail to enforce compliance.