The Pro-Human Declaration: What Happens When Politicians Won't Regulate AI

The Pro-Human Declaration: What Happens When Politicians Won't Regulate AI

I’ve been watching the artificial intelligence policy circus for years now, and last week’s Pentagon-Anthropic blowup was honestly peak dysfunction. Defense Secretary labels a major AI company a “supply chain risk” because they won’t hand over unlimited access to their models, OpenAI swoops in with a deal that legal experts say is basically unenforceable, and Congress does what it does best: absolutely nothing.

But while Washington fumbles, something genuinely interesting happened. A group that includes Steve Bannon and Susan Rice (yes, really) just released the Pro-Human Declaration, and it’s the first serious attempt I’ve seen at outlining what AI safety regulation should actually look like.

The Framework Nobody in Government Could Write

The declaration opens with what should be obvious but apparently isn’t: we’re at a fork in the road. One path leads to humans being replaced as workers and decision-makers while power concentrates in fewer and fewer hands. The other path leads to AI that actually expands what humans can do.

The document lays out five pillars: keeping humans in charge, preventing power concentration, protecting human experience, preserving individual liberty, and holding AI companies legally accountable. The specifics get interesting though. They want an outright ban on superintelligence development until there’s scientific consensus it’s safe. Mandatory kill switches on powerful systems. No architectures capable of self-replication or resisting shutdown.

These aren’t abstract principles. They’re technical requirements that would fundamentally change how we build these systems.

Why This Might Actually Matter

Max Tegmark, the MIT physicist who helped organize this, told reporters something that stuck with me. He pointed out that drug companies can’t just release whatever they want and hope it doesn’t kill people. The FDA blocks that. But somehow we’re letting AI companies ship products that are already demonstrably harmful, and there’s no equivalent gatekeeper.

The declaration focuses heavily on child safety, and I think that’s tactically smart. Requiring pre-deployment testing for chatbots and companion apps aimed at kids, specifically testing for suicide ideation, mental health risks, and emotional manipulation. Tegmark’s point is simple: if a person texting an 11-year-old and trying to manipulate them into self-harm goes to jail, why is it different when software does it?

Once you establish that principle for children’s products, the scope naturally expands. Test for bioweapon assistance. Test for capabilities that could destabilize governments. Test before you ship, not after people get hurt.

The Enforcement Problem

Here’s what bothers me about all of this. The declaration is great in theory, but it’s a document signed by experts and former officials. It has zero legal weight. And the people who could give it legal weight are the same ones who just watched a Defense Secretary slap a security label on Anthropic for not being compliant enough while OpenAI cuts a deal that probably won’t hold up in court.

Dean Ball from the Foundation for American Innovation called the Pentagon situation “the first conversation we have had as a country about control over AI systems.” That’s terrifying because it means we’re having that conversation through contracting disputes and turf wars instead of through actual legislation.

The bipartisan support is noteworthy though. When Steve Bannon and Susan Rice agree on something, former Joint Chiefs Chairman Mike Mullen signs it, and progressive faith leaders are on board, that tells you the coalition is real. Tegmark’s framing is spot on: when it comes down to a future for humans versus a future for machines, humans tend to pick the same side regardless of their other disagreements.

What This Means for Builders

If you’re building AI products right now, especially anything consumer-facing, you should probably assume some version of these requirements is coming eventually. Pre-deployment safety testing isn’t that different from what responsible teams already do, but making it mandatory and defining specific harms to test for changes the game.

The prohibition on self-improving or shutdown-resistant architectures is trickier. That rules out entire research directions that people are actively pursuing. It also raises questions about what counts as “self-improvement” in practice. Does fine-tuning count? Does retrieval-augmented generation? The devil is in the implementation details that don’t exist yet.

The superintelligence ban is the most controversial piece. It essentially says we pause at a certain capability threshold until we figure out alignment. That threshold isn’t well-defined in the declaration, which is both a feature and a bug. It gives flexibility but also creates ambiguity that will be exploited.

I keep coming back to Tegmark’s polling stat: 95% of Americans oppose an unregulated race to superintelligence. That’s a remarkable level of consensus on a technical issue, and it suggests the political ground has shifted faster than most people in the industry realized.

The question isn’t whether regulation is coming anymore, it’s whether we get thoughtful frameworks like this declaration or whether we get reactive legislation written in response to the next major incident by people who don’t understand the technology at all.

Read Next