Highlights
- India’s IT Minister Ashwini Vaishnaw urged platforms such as YouTube, Meta, X, and Netflix to operate within India’s constitutional framework under the country’s new content rules.
- Social media companies must now remove unlawful content within three hours of notification, a sharp reduction from the earlier 36-hour window.
- European regulators also intensified investigations into harmful AI-generated content on major platforms.

The Indian government has issued a clear message to global technology platforms, urging them to operate within the country’s constitutional boundaries following stricter content moderation rules.
Union Minister of Electronics and Information Technology Ashwini Vaishnaw said on Tuesday that major digital platforms such as YouTube, Meta, X, and Netflix must function in alignment with India’s constitutional framework.
“It’s very important for the multinationals to understand the cultural context of the country in which they are operating,” the minister said during a briefing at the India AI Impact Summit.
His remarks came on the sidelines of the summit in Delhi, where leading executives from global artificial intelligence companies are joining world leaders for discussions this week.
Vaishnaw also highlighted the growing concerns around deepfakes, stating that stronger regulatory measures are needed to address the issue. He confirmed that discussions with industry stakeholders have already begun to explore potential solutions.
India’s Stricter Content Takedown Timeline
The minister’s comments follow the government’s move last week to significantly tighten content removal timelines. Under the updated rules, social media companies are now required to remove unlawful content within three hours of receiving notification. Previously, platforms had up to 36 hours to act.
The new mandate could present compliance challenges for companies such as Meta, YouTube, and X as they adjust their moderation systems to meet the shortened deadline.
The push for tougher oversight comes amid rising global scrutiny of social media companies and their content governance practices. Governments worldwide are pressing platforms to take quicker action against harmful and illegal material, demanding greater transparency and accountability.
Authorities in Spain recently directed prosecutors to investigate X, Meta, and TikTok over allegations that AI-generated child sexual abuse material was being circulated on their platforms. The move reflects intensified scrutiny by European regulators targeting large technology firms over the spread of harmful and unlawful content.
FAQs
Q1. What message did Ashwini Vaishnaw give to global tech platforms?
Answer. He urged platforms like YouTube, Meta, X, and Netflix to operate within India’s constitutional framework and respect the country’s cultural context.
Q2. What is the new timeline for removing unlawful content in India?
Answer. Social media companies must now take down unlawful content within three hours of notification, compared to the previous 36-hour deadline.
Q3. Why is India tightening content moderation rules?
Answer. The move addresses rising concerns over deepfakes and harmful content, aligning with global scrutiny as regulators worldwide demand faster, more transparent action.
