Microsoft’s cloud infrastructure is now at the center of a growing ethical controversy after reports revealed ICE nearly tripled its data storage on Azure servers between mid-2025 and early 2026, reaching 1,400 terabytes. The surge coincides with ICE’s expanded use of Microsoft’s AI tools for analyzing images, video, and text—raising concerns about automated surveillance capabilities.

The findings, published by investigative outlets, have reignited protests from No Azure for Apartheid, a worker-led group that has previously organized campaigns against Microsoft’s partnerships with the Israeli military. The group’s latest statement calls the company a “digital arms dealer,” arguing that the cloud and AI technologies enabling ICE’s operations are the same ones deployed in Gaza.

Cloud Storage and AI: A Growing Footprint

ICE’s reliance on Azure Blob Storage, Microsoft’s object storage service for large volumes of unstructured data, has ballooned in recent months. While Microsoft maintains it lacks visibility into the specific data stored, the expansion aligns with ICE’s record funding and aggressive technology investments, including contracts with Amazon and Palantir. The agency, now the most heavily funded law enforcement body in the U.S., has spent tens of millions of dollars on surveillance tools, with Microsoft as a key vendor.

The company’s defense centers on its terms of service, which explicitly prohibit mass civilian surveillance. A spokesperson reiterated that Microsoft does not believe ICE is violating these restrictions, though the lack of transparency over data usage has fueled skepticism. The statement echoes Microsoft’s prior response to allegations involving the Israeli Ministry of Defense, where the company admitted to limited visibility into government cloud operations despite denying complicity.


Worker Solidarity and Broader Demands

Microsoft employees, including those behind ICEout.tech, have amplified the criticism, framing the issue as part of a larger pattern. Their petition-driven campaign demands ICE’s withdrawal from U.S. cities and the termination of all of Microsoft’s contracts with the agency. Workers draw parallels between ICE’s domestic operations and Microsoft’s role in Gaza, where an internal review found evidence supporting allegations of mass surveillance, prompting the company to block certain services for the Israeli military in 2025.

The backlash underscores a broader tension within tech companies over ethical boundaries. While Microsoft insists its policies prevent misuse, the lack of independent oversight and the scale of ICE’s data expansion have left workers and advocates questioning whether the company’s commitments are sufficient. The debate now extends beyond cloud contracts, touching on the dual-use nature of AI and surveillance technologies in an era of heightened immigration enforcement.

As Congress and courts grapple with legal frameworks for law enforcement technology, Microsoft’s stance—deferring to policy compliance while acknowledging operational blind spots—faces renewed scrutiny. The company’s ability to navigate these challenges may hinge on whether its ethical assurances can withstand mounting pressure from both internal and external stakeholders.