I never thought I’d be writing about artificial intelligence infrastructure becoming literal military targets, but here we are. Iran’s Islamic Revolutionary Guard Corps just released a video threatening to completely destroy OpenAI’s planned $30 billion data center in Abu Dhabi if the US attacks Iranian power plants. This isn’t some vague posturing. They showed satellite imagery, named executives (albeit incorrectly identifying Cisco’s Jeetu Patel as Microsoft’s Satya Nadella), and outlined specific targets.
The threat is tied to President Trump’s weekend escalation where he declared Tuesday would be “Power Plant Day, and Bridge Day” unless Iran opens the Strait of Hormuz. He went further on ABC News, stating the US plans on “blowing up the entire country” if no deal is reached. Iran’s Foreign Ministry responded predictably, but the IRGC’s specific targeting of tech infrastructure is what caught my attention.
This is the Stargate project we’re talking about. Not just OpenAI, but a $500 billion consortium including Oracle, Nvidia, Cisco, and SoftBank. The Abu Dhabi facility alone is eventually supposed to deliver 16 gigawatts of capacity, with the first 200 megawatts targeted for 2026. That’s an enormous amount of computational capacity concentrated in one geographically vulnerable location.
The New Attack Surface
As someone who’s spent years thinking about infrastructure resilience, this development is genuinely alarming. We’ve always known that data centers had physical security concerns. Fire suppression systems, cooling redundancy, power backup. Standard stuff. But explicit military targeting by a state actor? That’s a different threat model entirely.
The UAE seemed like a safe bet for massive AI infrastructure investments. Political stability, abundant energy resources for cooling and power, favorable tax treatment, geographic positioning between major markets. On paper, it made perfect sense. But “stable” and “outside conflict zones” are relative terms in the Middle East.
What really concerns me is the precedent this sets. If AI infrastructure becomes fair game in geopolitical conflicts, where do you even build these massive compute centers? The whole industry has been moving toward centralization because of the economics of scale. Training frontier models requires absurd amounts of coordinated compute. You can’t just distribute that easily across dozens of smaller facilities without significant performance penalties.
The Economics of Vulnerable Infrastructure
Let’s talk numbers for a second. OpenAI and its partners have already invested heavily in this Abu Dhabi facility. Construction is “well underway” according to their October update. That’s sunk capital that can’t easily be relocated or repurposed. If this threat materializes, or even if it just remains credible, what does that do to insurance costs? To the willingness of other investors to fund similar projects in the region?
I’m genuinely curious how the financial models for these massive AI infrastructure projects account for geopolitical risk. Because it seems like we’ve entered a new era where your data center isn’t just competing on latency and power costs; it’s also being evaluated as a potential military target. That’s a risk premium nobody was seriously calculating two years ago.
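To make that concrete, here’s a toy expected-loss calculation of the kind an insurer or investor might run. Every number here is a hypothetical assumption on my part, not real underwriting data; the point is only to show how quickly a nonzero attack probability compounds against a $30 billion asset.

```python
# Hypothetical back-of-envelope geopolitical risk premium.
# All inputs are illustrative assumptions, not real figures.

def annual_risk_premium(asset_value: float,
                        annual_attack_probability: float,
                        expected_loss_fraction: float,
                        loading_factor: float = 1.5) -> float:
    """Expected annual loss, scaled by an insurer's loading factor
    (the markup insurers charge over pure expected loss)."""
    expected_loss = (asset_value
                     * annual_attack_probability
                     * expected_loss_fraction)
    return expected_loss * loading_factor

# A $30B facility: even a 0.5% assumed annual chance of an attack
# destroying 80% of the asset implies a nine-figure premium.
premium = annual_risk_premium(
    asset_value=30e9,
    annual_attack_probability=0.005,
    expected_loss_fraction=0.8,
)
print(f"${premium / 1e6:.0f}M per year")  # prints "$180M per year"
```

Even with generous assumptions, that premium alone would rival the operating budget of a mid-sized cloud region, which is exactly why "potential military target" is a line item nobody wants on the spreadsheet.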
The irony is that this vulnerability exists precisely because of how successful AI has become. These facilities matter enough to threaten. They’re strategic assets now, not just commercial infrastructure. The IRGC didn’t threaten some random cloud provider’s facility. They specifically called out the Stargate project because it represents American technological dominance in AI.
What This Means for the Industry
If you’re a developer or company relying on these massive centralized AI services, this should make you uncomfortable. Not because an attack is imminent or even likely, but because it exposes how fragile our AI infrastructure really is when geopolitics enters the picture. We’ve built an entire ecosystem assuming that compute will always be available, that APIs will respond, that the models will be there when we need them.
The industry’s response to this will be fascinating to watch. Do we see a push for more geographic distribution of AI infrastructure, even at the cost of efficiency? Do insurance and risk management considerations start driving architectural decisions? Does this accelerate on-premise and edge AI development because companies don’t want dependency on potentially vulnerable centralized systems?
I suspect we’ll see some combination of all three, but the timeline matters. Right now, the economics overwhelmingly favor massive centralized facilities. Changing that calculus requires either the threat becoming more concrete or the technology for distributed training improving dramatically. Neither is guaranteed.
The IRGC’s video is crude propaganda with obvious errors, but the underlying message is clear: AI infrastructure has become important enough to target, and the people building it need to think seriously about resilience in ways that go far beyond redundant power supplies and network links.