Google, Meta & X’s Bold Plan to Tackle Deepfakes in India – 3 Key Policies Revealed!

As deepfake technology raises concerns, Google, Meta, and X have shared their strategies for tackling manipulated media with the Indian government. In response to growing threats, India’s Ministry of Electronics and Information Technology (MeitY) formed a nine-member committee in November 2024 to address these challenges. Google, which has had a deepfake policy since 2023, uses AI to detect harmful content and requires creators to label AI-generated media. It also allows individuals to report unauthorized use of their likeness. Meta introduced its AI labeling policy in April 2024, requiring users to disclose AI-generated content in posts and ads while it works on better protections for celebrities.

X, however, stressed that not all AI content is deceptive and enforces its policies only against content deemed highly misleading or harmful. The discussions also focused on regulations that hold malicious actors accountable without stifling creative uses of AI. Over the next three months, the MeitY committee will continue consultations, shaping India’s approach to deepfake regulation.

Deepfake technology is raising serious concerns worldwide, and India is no exception. To address this challenge, tech giants like Google, Meta (Facebook and Instagram’s parent company), and X (formerly Twitter) have outlined their strategies to the Indian government. Their goal is to curb the spread of AI-generated fake content while maintaining a balance between innovation and user safety.

India’s Push Against Deepfakes

In November 2024, India’s Ministry of Electronics and Information Technology (MeitY) set up a nine-member committee to tackle deepfake-related challenges. This move followed a directive from the Delhi High Court, urging the government to take action. On January 21, 2025, the committee met with representatives from major tech companies and legal experts to discuss regulations for AI-generated content, labeling standards, and ways to handle user complaints.

Google’s Strategy Against Deepfakes

Google has taken a proactive stance on deepfakes. Since November 2023, the company has implemented policies specifically targeting manipulated media. Here’s how Google is addressing the issue:

  • AI Detection & Removal: Google employs artificial intelligence to detect and remove harmful deepfake content.
  • Mandatory Disclosures: Content creators are required to label AI-generated videos, images, or audio to ensure transparency.
  • User Protection: If someone’s likeness is misused in a deepfake, they can report it, and Google will remove the content if it violates the company’s policies.

This approach helps prevent misinformation while allowing ethical AI-generated content, such as creative or educational material, to thrive.

Meta’s Policies on AI-Generated Content

Meta introduced its AI labeling rules in April 2024. The company requires users to disclose when they upload AI-generated content, including advertisements. Key aspects of Meta’s approach include:

  • Clear Labels: Users see warnings when they encounter digitally altered media.
  • Broad Coverage: While Meta’s policy doesn’t specifically target deepfakes, it covers all forms of manipulated content.
  • Protection for Public Figures: The company is enhancing safeguards to prevent unauthorized AI-generated impersonations of celebrities and politicians.

Meta’s primary focus is on transparency, ensuring users are aware when they are interacting with AI-created content.

X’s Stance on Synthetic Media

X (formerly Twitter) takes a slightly different approach. The company believes that not all AI-generated content is harmful. Its policy focuses on removing only the most deceptive and dangerous deepfakes. Here’s how X is handling the issue:

  • Targeted Removal: Only misleading or harmful AI-generated content is taken down.
  • Selective Labeling: X reserves labeling for cases where manipulated media poses a serious threat.
  • Balanced Approach: The platform aims to avoid unnecessary restrictions on creative or harmless AI content.

X’s strategy is more flexible, prioritizing high-risk deepfakes while allowing non-malicious AI-generated posts.

What’s Next for India’s Deepfake Regulations?

The MeitY committee will continue its discussions over the next three months, gathering input from tech companies, legal experts, and even victims of deepfake misuse. The focus areas include:

  • Stronger Accountability: Ensuring that those responsible for spreading harmful deepfakes are held accountable.
  • Clear Guidelines: Defining what constitutes deceptive AI content and establishing proper labeling standards.
  • User Protection: Creating efficient systems for victims to report and remove harmful deepfakes.

Rather than restricting AI-driven creativity, India aims to regulate its misuse while encouraging ethical applications of AI technology.

Conclusion

Deepfakes present a growing challenge, but tech companies and the Indian government are actively working to combat them. Google leverages AI detection and strict labeling, Meta enforces transparency through clear disclosures, and X focuses on removing only the most harmful content. As regulations evolve, the key challenge will be striking a balance between innovation and user safety—ensuring AI is used responsibly without stifling its positive potential.

India’s upcoming policies could serve as a model for other nations grappling with the deepfake dilemma. The next few months will be crucial in shaping how AI-generated content is managed in the digital space.
