Microsoft Faces EU Legal Battle: Tech Giant Accused of Enabling Israeli Surveillance in Gaza 

The Irish Data Protection Commission has been asked to investigate Microsoft for allegedly facilitating unlawful data processing by the Israel Defense Forces, following revelations that the military used Microsoft’s Azure cloud to store and analyze a massive trove of intercepted Palestinian phone calls as part of a mass surveillance operation.

The complaint, filed by the Irish Council for Civil Liberties, alleges that this processing enabled serious human rights violations and that Microsoft may have breached EU data protection laws by allowing evidence to be moved from EU servers before an investigation could begin. Although Microsoft launched an external review that led it to terminate some of the unit’s cloud services, the case presents a landmark test of corporate accountability, raising questions about the ethical boundaries and legal responsibilities of tech giants when their commercial infrastructure is used in conflict zones and, potentially, for human rights abuses.

The Unseen Infrastructure of Modern Surveillance 

In an unmarked data center somewhere in Europe, servers hum with activity, processing what was described internally as “a million calls an hour”—not the benign communications of willing customers, but the intercepted phone conversations of an entire population under occupation. This invisible architecture, built on Microsoft’s Azure cloud platform, forms the technological backbone of what human rights groups now allege is one of the most extensive surveillance operations in modern conflict. 

The Irish Data Protection Commission (DPC) is currently assessing a formal complaint that could redefine the boundaries of corporate responsibility in the digital age. The Irish Council for Civil Liberties (ICCL), representing Palestinian residents and EU citizens communicating with people in the Occupied Territories, has filed a groundbreaking complaint accusing Microsoft of unlawfully processing data that facilitates “war crimes, crimes against humanity, and genocide” by the Israeli military. 

At the heart of this controversy lies a fundamental question for the technology age: When cloud infrastructure becomes instrumental in warfare and surveillance, where does corporate responsibility end and complicity begin? 

The Architecture of Mass Surveillance 

The system in question originated from a 2021 meeting between Microsoft CEO Satya Nadella and Yossi Sariel, then commander of Israel’s Unit 8200—an intelligence unit comparable to the U.S. National Security Agency. Internal Microsoft documents record Sariel’s ambition to migrate up to 70% of the unit’s sensitive intelligence data to Azure’s cloud platform. 

What followed was the creation of a custom, segregated area within Azure that provided Unit 8200 with what one source called “near-limitless storage capacity.” By July 2025, approximately 11,500 terabytes of Israeli military data—equivalent to roughly 200 million hours of audio—resided on Microsoft servers in the Netherlands and Ireland, placing it under the jurisdiction of the EU’s strict data protection laws. 
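As a rough plausibility check (a back-of-envelope calculation of our own, not a figure from the reporting), the storage total and the audio total are at least internally consistent with heavily compressed telephone audio:

```python
# Back-of-envelope check: do 11,500 terabytes and 200 million hours of audio fit together?
# Assumes decimal terabytes; the implied bitrate is an estimate, not a reported figure.
total_bytes = 11_500 * 10**12      # 11,500 terabytes
total_hours = 200_000_000          # 200 million hours of audio

bytes_per_hour = total_bytes / total_hours
bitrate_kbps = bytes_per_hour * 8 / 3600 / 1000

print(f"{bytes_per_hour / 1e6:.1f} MB per hour of audio")  # ~57.5 MB per hour
print(f"~{bitrate_kbps:.0f} kbps average bitrate")         # ~128 kbps, typical of compressed call audio
```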

The system represented a technological evolution in surveillance methodology. Whereas traditional intelligence operations typically targeted specific individuals, this cloud-based platform enabled what one intelligence officer described as “tracking everyone, all the time”. Unit 8200 sources told journalists that intelligence drawn from the call repositories had been used to research and identify bombing targets in Gaza, with officers examining calls made by people in the vicinity of potential targets before airstrikes. 

The Scale and Purpose of Data Collection 

  • Interception volume: System designed to capture approximately one million phone calls per hour 
  • Geographic scope: Targeted Palestinians in both Gaza and the West Bank 
  • Data types: Included phone calls, text messages, and voicemails 
  • Analytical capabilities: Allowed intelligence officers to “play back the content of cellular calls” 

The Allegations: From Data Protection to Human Rights 

The ICCL’s complaint makes several interconnected allegations that elevate the issue from a technical data protection matter to one of profound human rights implications: 

First, the council alleges that Microsoft’s processing of personal data directly facilitated lethal operations against civilians. According to the complaint, “Microsoft’s technology has put millions of Palestinians in danger” and “enabled real-world violence”. 

Second, the complaint asserts that Microsoft helped obscure evidence of potential wrongdoing. Whistleblowers provided internal screenshots showing that after The Guardian’s initial exposé on August 6, 2025, Microsoft increased storage quotas for Israeli defense officials, enabling them to transfer vast quantities of surveillance data from European servers to Israel before investigations could commence. 

Third, the ICCL contends that Microsoft’s Azure cloud system hosts critical components of Israel’s “Al Minasseq” system, described as “central to Israel’s control of Palestinian movement” and thus facilitating what the council terms “apartheid”. 

Microsoft has countered that “customers own their data” and that the transfer of data in August was the customer’s choice, actions that “in no way impeded our investigation”. 

Microsoft’s Response and Internal Reckoning 

Following The Guardian’s initial report in August 2025, Microsoft launched an external review of its relationship with Unit 8200. By September, the company announced it had “ceased and disabled a set of services to a unit within the Israel Ministry of Defense,” specifically cutting access to certain cloud storage and AI services. 

In a communication to employees, Microsoft President Brad Smith outlined the company’s position: “We do not provide technology to facilitate mass surveillance of civilians. We have applied this principle in every country around the world”. He acknowledged that Microsoft’s review had “found evidence that supports elements of The Guardian’s reporting,” including information about Azure storage consumption in the Netherlands and use of AI services. 

Key Aspects of Microsoft’s Position 

Microsoft’s stated principles, set against the contradictions and challenges raised in the complaint:

  • Prohibition of mass surveillance technology, versus evidence of Azure storing millions of intercepted calls 
  • Respect for customer privacy and data ownership, versus alleged assistance in transferring data to obstruct investigations 
  • An external review following media reports, versus whistleblower claims of internal awareness of the surveillance purposes 
  • Continued cybersecurity work with Israel, versus the termination of specific services to Unit 8200 

Despite these actions, critics note that Microsoft has maintained other contracts with the Israeli defense establishment. A representative for the activist group “No Azure for Apartheid” acknowledged Microsoft’s September decision as “significant and unprecedented” but noted that “Microsoft has only disabled a small subset of services to only one unit in the Israeli military”. 

The controversy has sparked internal dissent at Microsoft, with employees protesting the company’s military contracts. In May 2025, an employee disrupted a Nadella keynote speech, yelling: “How about you show how Israeli war crimes are powered by Azure?” 

The EU Regulatory Landscape and Potential Consequences 

The complaint lands on the desk of the Irish Data Protection Commission at a critical juncture for EU privacy law. Because Microsoft’s European headquarters are in Ireland, the DPC serves as the lead regulator for the company under the General Data Protection Regulation (GDPR). 

The GDPR, implemented in 2018, grants regulators unprecedented authority to hold corporations accountable for data protection failures. Notably, it allows for fines of up to 4% of a company’s annual global turnover—for Microsoft, which reported $281.7 billion in revenue in its most recent fiscal year, this could theoretically mean a penalty exceeding $11 billion. 
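The headline figure follows directly from the 4% cap and the revenue number cited above; the short calculation below illustrates the statutory ceiling and is not a prediction of any actual penalty:

```python
# GDPR upper-tier ceiling: up to 4% of annual global turnover
annual_revenue_usd = 281.7e9                # Microsoft's most recent fiscal-year revenue
max_fine_usd = 0.04 * annual_revenue_usd

print(f"Theoretical maximum fine: ${max_fine_usd / 1e9:.2f} billion")  # ~$11.27 billion
```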

The ICCL has urged the DPC to consider “upper tier” fines due to “the scale and gravity of the infringements” and what it describes as “reckless disregard for fundamental rights”. 

Beyond financial penalties, a finding against Microsoft could establish important precedents regarding: 

  • The extraterritorial application of GDPR to surveillance activities affecting EU citizens 
  • Corporate liability when platforms are used for human rights violations 
  • The responsibility of cloud providers to monitor how their infrastructure is utilized 

The Broader Implications for Tech and Society 

This case represents a watershed moment in the ongoing reckoning over technology’s role in conflict and human rights abuses. Several interconnected trends converge in this controversy: 

The militarization of commercial technology: As cloud computing and AI capabilities advance, military and intelligence agencies increasingly turn to commercial providers rather than developing proprietary systems. The Unit 8200 project exemplifies this trend, with the Israeli military specifically seeking Azure’s capabilities because it “did not have sufficient storage space or computing power on the military’s servers”. 

The globalization of data storage: With data centers scattered across jurisdictions, the physical location of data becomes a strategic consideration. The fact that Palestinian surveillance data was stored in EU countries created the jurisdictional hook for the GDPR to apply, even though the surveillance primarily targeted people outside the EU. 

Whistleblowing in the tech industry: The complaint relies significantly on evidence provided by Microsoft whistleblowers, reflecting growing internal dissent within tech companies over ethical applications of their technology. 

The “plausible deniability” problem: Microsoft maintains it had “no information” about the kind of data stored by Unit 8200 and was not “aware of the surveillance of civilians”. This raises difficult questions about how much due diligence technology providers must exercise regarding their clients’ use of services. 

Amnesty International has framed Microsoft’s partial termination of services as “a moment for corporate reckoning,” calling on all companies to “confront their participation in the global political economy sustaining Israel’s genocide against Palestinians”. 

A Defining Test for Digital Ethics 

As the Irish Data Protection Commission weighs this complaint, it confronts questions that extend far beyond Microsoft or the Israel-Palestine conflict. At stake is nothing less than the framework of accountability for a digital infrastructure that has become essential to modern life—and modern warfare. 

The sheer scale of the surveillance system—intercepting a million calls per hour, storing 200 million hours of audio—reveals how cloud technology has transformed the possibilities of mass monitoring. What was once technologically impractical has become routine, hidden behind the bland interface of corporate cloud services. 

Microsoft’s position illustrates the tension facing technology giants: publicly committed to privacy principles while serving government clients whose activities may conflict with those very principles. The company’s response—terminating some services while maintaining others—reflects the ambiguous middle ground many corporations seek when faced with ethical dilemmas involving powerful clients. 

The ultimate significance of this case may lie in its potential to establish that data protection laws apply not just to commercial privacy violations but to human rights contexts. By alleging that GDPR violations “facilitated war crimes, crimes against humanity, and genocide,” the complaint bridges the often-separate domains of data protection law and international human rights law. 

In an age where infrastructure is increasingly digital and battlespaces increasingly connected, the lines between technology provider and conflict participant grow ever more blurred. The Irish regulator’s decision will send signals throughout the global tech industry about the acceptable boundaries of business when clouds of data become clouds of conflict.