India’s Digital Crossroads: How Twin Amendments to IT Rules Reshape Free Speech and Online Governance
The recent twin amendments to India’s IT Rules, comprising the formalized Sahyog amendment for state-led content takedowns and the draft deepfake rules targeting synthetically generated information, collectively establish a dual architecture of censorship that threatens free speech and intermediary neutrality under the guise of accountability.
The Sahyog framework decentralizes removal powers to numerous officials and mandates takedowns within 36 hours, yet it operates in secret, without the procedural safeguards—user notification, a hearing, and independent review—that the Supreme Court required in the Shreya Singhal judgment. The result is a parallel, opaque removal track that bypasses constitutional due process.
Simultaneously, the broadly defined deepfake rules impose unworkable labelling mandates and proactive monitoring duties on intermediaries, eroding their safe harbor protections and incentivizing over-removal of legitimate expression. Together, these changes create a two-tiered system of censorship, blending bureaucratic and algorithmic control, that operates with minimal transparency, undermines fundamental rights under Article 19(1)(a), and shifts India’s digital governance toward a model of stealth regulation that prioritizes control over accountable, rights-respecting oversight.

Introduction: A Pivotal Moment for India’s Digital Ecosystem
In a significant move that could reshape India’s digital landscape, the Government of India announced two pivotal amendments to the Information Technology (IT) Rules, 2021 on October 22, 2025. These changes arrive at a crucial juncture in India’s digital evolution, as the country grapples with balancing the regulation of online harms with the protection of fundamental rights. The first amendment, dubbed the “Sahyog amendment,” formalizes and expands the government’s content takedown capabilities through the Sahyog Portal.
The second introduces comprehensive regulations for AI-generated content, particularly targeting deepfakes. While the government frames these changes as necessary for accountability and transparency, a growing chorus of legal experts, digital rights activists, and civil society organizations warns that they collectively establish an architecture of control that could fundamentally alter speech dynamics and intermediary operations in the world’s largest democracy.
The timing and rollout of these amendments have raised eyebrows among policy watchers. The draft deepfake rules were released with considerable public attention, while the Sahyog amendment was quietly notified with minimal visibility. This sequencing, critics argue, pushed headlines toward “AI labelling” and “accountability,” allowing the more consequential expansion of state censorship powers to slip under the radar. This pattern exemplifies what some observers have termed India’s growing trend of digital regulation by stealth, where significant changes to digital governance are introduced with limited meaningful public consultation.
The Sahyog Amendment: Institutionalizing State-Led Content Removal
Understanding the Technical Framework
The Sahyog Amendment brings substantial changes to Rule 3(1)(d) of the IT Rules, 2021, creating what legal experts describe as a parallel content removal track that operates alongside the existing Section 69A of the IT Act. The amendment formally institutionalizes the Sahyog Portal, which has existed as an internal government tool for some time, allowing authorized officers to direct intermediaries to remove online content deemed unlawful. Under the revised framework, Joint Secretary–rank officials and DIG-level police officers are empowered to issue takedown orders to intermediaries, who must comply within 36 hours or risk losing their safe harbor protection under Section 79 of the IT Act.
While the government frames these changes as “streamlining” enforcement by limiting issuance to higher-ranking officials, the practical effect is more complex. Rather than reducing the state’s removal capacity, the amendment decentralizes content control by multiplying the number of authorities that can command removal—from one centralized body (MeitY) to potentially hundreds across India’s states and union territories. Every police range in states like Telangana and Kerala, each headed by a DIG, now represents a potential censorship authority.
Erosion of Procedural Safeguards
The most significant concerns surrounding the Sahyog Amendment relate to its erosion of procedural safeguards that have traditionally protected against arbitrary speech restrictions. Critics argue this new framework bypasses Section 69A’s stronger safeguards—which include written orders stating reasons for takedowns, an opportunity of hearing for the originator or intermediary, and review by an independent committee—replacing them with a weaker, largely internal process.
This system operates without three fundamental protections established in the Supreme Court’s landmark Shreya Singhal v. Union of India (2015) decision, which upheld Section 69A precisely because of these procedural safeguards. The Sahyog framework offers:
- No meaningful notice to users whose content is removed
- No hearing opportunity for affected parties
- No independent oversight, relying instead on internal monthly reviews by the Secretary of the requesting department
This arrangement creates what legal scholar Shivam Jadaun describes as “a system of administrative convenience taking the place of due process,” defeating the constitutional reasoning that enabled content-blocking authority to survive judicial scrutiny in the first place.
The Draft Deepfake Rules: Normalizing Overreach Through Technological Regulation
Broad Definitions and Problematic Obligations
On the same day as the Sahyog Amendment notification, MeitY released for public consultation the draft IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, introducing a new definition of “synthetically generated information” (SGI) and imposing fresh obligations on intermediaries. While the stated goal of combating deepfakes is undoubtedly legitimate, the drafting approach has raised concerns about potential overreach.
The draft defines SGI broadly as “information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true”. This expansive definition captures not only harmful deepfakes but also benign content like edited photos, AI-generated art, AI-assisted text, and even routine digital modifications. Such breadth creates significant classification risks, potentially subjecting legitimate expression to burdensome regulation.
The proposed rules mandate that intermediaries ensure SGI is “prominently labelled or embedded with a permanent unique label, metadata or identifier”. The technical requirements are notably prescriptive:
- Visual content must display identifiers covering at least 10% of the surface area
- Audio content must include audible identifiers for the first 10% of its duration
This uniform threshold across varying content formats raises practical concerns about user experience disruption and potential desensitization through notification fatigue.
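To make the thresholds concrete, the uniform 10% rule can be sketched as a simple calculation. This is a hypothetical illustration only: the draft prescribes the percentages, not any particular computation, and the function names here are our own.

```python
# Illustrative sketch of the draft rules' uniform 10% labelling thresholds.
# Assumed interpretation: the draft states the percentages; it does not
# specify how platforms should compute or enforce them.

def min_visual_label_area(width_px: int, height_px: int) -> float:
    """Minimum label area in square pixels: at least 10% of the visual surface."""
    return 0.10 * width_px * height_px

def audio_label_duration(total_seconds: float) -> float:
    """Audible identifier must cover the first 10% of the clip's duration."""
    return 0.10 * total_seconds

# Example: a 1080x1920 frame would need a label of about 207,360 square
# pixels, and a 60-second clip an audible identifier of about 6 seconds.
```

Even this toy calculation hints at why a single percentage sits oddly across formats: 10% of a full-screen video frame and 10% of a small thumbnail are very different visual intrusions, and a six-second audio tag on a one-minute clip is far more disruptive than the same proportion on a two-hour recording.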
Shifting Liability and Compliance Burdens
For Significant Social Media Intermediaries (SSMIs—platforms with over 5 million registered users in India), the obligations are even more extensive. They must:
- Require user declarations on whether uploaded content is synthetically generated
- Implement “reasonable and appropriate technical measures” to verify these declarations
- Ensure clear labelling of content identified as synthetically generated
The draft rules state that if an SSMI “becomes aware or is found to have knowingly permitted, promoted, or failed to act upon SGI in violation of these requirements,” it will be deemed to have failed due diligence. This vague standard, coupled with the threat of losing safe harbor protection, creates strong incentives for over-removal and could effectively impose general monitoring obligations—directly contradicting the Supreme Court’s position in Shreya Singhal that intermediaries gain “actual knowledge” only through court or government orders.
Constitutional Implications: Assessing the Impact on Fundamental Rights
Freedom of Speech and Expression
Collectively, these amendments raise significant concerns under Article 19(1)(a) of the Indian Constitution, which guarantees the right to freedom of speech and expression. While the Constitution permits reasonable restrictions under Article 19(2), these must be lawful, proportionate, and accompanied by due process. The current formulations appear problematic on multiple fronts:
The Sahyog framework’s opacity directly contradicts the Supreme Court’s ruling in Anuradha Bhasin v. Union of India (2020), which held that any restriction on speech and access should be “published, justified and reviewable”. By making takedown orders secret and providing no notice to affected users, the amendment undermines this essential transparency requirement.
Similarly, the draft deepfake rules’ vague definitions and compliance pressures could lead to disproportionate restrictions on legitimate expression. Journalists using AI tools for data visualization, artists creating digital art, or citizens using translation tools might find their content subject to burdensome labelling or removal—creating what legal experts term a “chilling effect” on protected speech.
Due Process and Natural Justice
Both amendments raise serious due process concerns. The rights to a hearing and to review before state action—fundamental principles of natural justice embedded within Article 14 of the Constitution—are notably absent from the Sahyog framework. When users discover their content has been removed, they have no clear avenue to challenge the decision, learn the reasoning behind it, or confront their accusers.
This system of administrative censorship operates without the checks and balances that the Supreme Court has repeatedly emphasized as essential to constitutional governance. In PUCL v. Union of India (1996), the Court mandated procedural safeguards for telephonic surveillance, emphasizing that secret orders without scrutiny enable abuse. The same logic applies to non-transparent censorship, which functions as a form of surveillance over speech.
Global Context: India’s Approach Versus International Standards
Comparing India’s new regulatory approach with international frameworks highlights several divergences. The European Union’s AI Act (2024), specifically Article 50, places disclosure obligations on users rather than intermediaries, exempts artistic content, and preserves safe harbor by avoiding proactive monitoring duties. This contrasts sharply with India’s approach, which imposes extensive verification and labelling obligations on platforms themselves.
Germany’s Network Enforcement Act (NetzDG), often cited as a comparative model, requires social media platforms to remove hate speech within 24 hours of notification but incorporates stringent checks and balances, including rights to appeal and independent regulatory oversight. India’s framework lacks equivalent safeguards, creating what critics describe as an accountability gap in the country’s digital governance architecture.
Broader Implications: The Combined Impact on the Digital Ecosystem
The Two-Tiered Censorship Architecture
When viewed together, the Sahyog Amendment and draft deepfake rules establish what analysts term a “two-tiered system of censorship—one bureaucratic, one algorithmic—operating with minimal transparency and maximum discretion”. This structure significantly expands the perimeter of speech regulation by:
- Enabling the state to secure faster and broader takedowns through the Sahyog framework
- Compelling intermediaries to tighten their own content controls through fear of liability
The result is a digital ecosystem where speech restrictions can originate from both state and private actors, creating overlapping layers of control that are difficult to challenge or even track.
Impact on Intermediary Neutrality and Innovation
The amendments also threaten the foundational principle of intermediary neutrality—the idea that platforms should function as neutral conduits rather than active content moderators. By tethering safe harbor protection to instantaneous compliance with executive takedown requests and proactive detection of AI-generated content, the rules “effectively transform intermediaries from neutral facilitators of online expression into instruments of state control”.
This shift has particular implications for smaller intermediaries and innovation. Startups and smaller platforms may lack the resources to implement sophisticated AI detection systems or navigate complex compliance requirements, potentially cementing the dominance of established players and stifling competition in India’s digital market.
The Path Forward: Balancing Regulation and Rights
As India stands at this digital crossroads, several paths could help balance legitimate regulatory concerns with fundamental rights protection:
Legal Challenges and Judicial Review
The constitutional challenges to the IT Rules, 2021, currently being heard in the Delhi High Court, represent one potential avenue for course correction. The court has ordered issue-wise categorization of cases, with arguments proceeding separately for Part II and Part III of the Rules. The outcomes could significantly reshape the implementation of these amendments, particularly if courts emphasize the need for proportionality and procedural safeguards.
Constructive Policy Alternatives
Meaningful reform could include:
- Narrowing definitions of synthetically generated information to exclude benign content
- Introducing transparency reporting requirements for takedowns under the Sahyog framework
- Establishing independent oversight mechanisms rather than internal government reviews
- Developing risk-based approaches to labelling rather than one-size-fits-all mandates
- Ensuring meaningful notice and appeal mechanisms for affected users
Such measures would address genuine concerns about deepfakes and unlawful content while respecting constitutional safeguards and democratic values.
Conclusion: Safeguarding Digital Democracy
India’s twin amendments to the IT Rules represent a pivotal moment in the evolution of the country’s digital ecosystem. While the government’s stated goals of combating harmful content and ensuring platform accountability are legitimate, the current regulatory approach risks establishing a precedent of control without accountability and restriction without recourse.
The health of India’s democracy in the digital age depends on a careful balance—one that addresses genuine online harms without sacrificing the fundamental freedoms that form the bedrock of the Constitution. As the legal challenges proceed and these frameworks are implemented, all stakeholders—government, platforms, civil society, and citizens—have roles to play in ensuring that India’s digital future remains open, free, and consistent with the democratic values that the Constitution enshrines.
The ultimate test will be whether India can develop a digital governance model that effectively addresses emerging challenges like deepfakes and online harms while remaining faithful to its constitutional commitments to free speech, due process, and transparency. The current amendments, despite their stated aims of accountability, fall short of this balance—but the conversation around them represents a crucial opportunity to refine this approach in line with India’s democratic traditions.