I love it when someone digs through boring legal documents and finds something actually interesting. Simon Willison did exactly that: he extracted OpenAI’s mission statements from their IRS 501(c)(3) tax filings and turned them into a git repository with backdated commits. The result is a fascinating timeline of how the company’s priorities have shifted since 2016.
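If you’ve never seen the trick, it’s worth sketching. git doesn’t care when a commit actually happens: set the GIT_AUTHOR_DATE and GIT_COMMITTER_DATE environment variables and the commit lands wherever you tell it to. Here’s a minimal Python sketch of the idea. To be clear, this is my hypothetical reconstruction, not Willison’s actual script; the repository name, dates, and abbreviated statement text are all placeholders.

```python
# Hypothetical sketch: turn a series of mission statements into a git
# history, one commit per filing, backdated via GIT_AUTHOR_DATE and
# GIT_COMMITTER_DATE. The statement text here is abbreviated, not verbatim.
import os
import subprocess

STATEMENTS = {
    "2016-12-31": "...openly share our plans and capabilities along the way.",
    "2024-12-31": "OpenAI's mission is to ensure that artificial general "
                  "intelligence benefits all of humanity.",
}

subprocess.run(["git", "init", "mission-history"], check=True)
os.chdir("mission-history")

for date, text in sorted(STATEMENTS.items()):
    # Overwrite the same file each time so git log -p shows clean diffs.
    with open("mission.md", "w") as f:
        f.write(text + "\n")
    # git reads these env vars instead of the wall clock (ISO 8601 works).
    env = {**os.environ,
           "GIT_AUTHOR_DATE": f"{date}T00:00:00",
           "GIT_COMMITTER_DATE": f"{date}T00:00:00"}
    subprocess.run(["git", "add", "mission.md"], check=True)
    # -c flags set a throwaway identity so the script runs without global config.
    subprocess.run(
        ["git", "-c", "user.name=filings-bot",
         "-c", "user.email=bot@example.com",
         "commit", "-m", f"Mission statement as of {date}"],
        env=env, check=True)
```

Once the commits exist, `git log -p mission.md` replays the whole evolution as a series of diffs, which is what makes changes like the ones below so easy to spot.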
This isn’t just marketing copy. It’s what they tell the IRS under penalty of losing their tax-exempt status. The words matter.
The Open Era That Never Really Was
The original 2016 mission had this idealistic promise: “We’re trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way.”
By 2018, that entire sentence was gone. Just deleted. No replacement text, no softened version, nothing.
This was before the GPT-3 API became a closed product. Before the Microsoft partnership. Before they started keeping their research papers light on implementation details. The mission statement changed before the public narrative did, which tells you the decision was made internally well before anyone outside noticed.
When “Most Likely” Became Certain
In 2021, something subtle but telling happened. The phrasing went from artificial intelligence that “most likely” benefits humanity to AI that just “benefits humanity.” No hedging, no uncertainty.
That’s the kind of confidence that worries me. Building safe AI systems is genuinely hard, and the qualifier “most likely” acknowledged that uncertainty. Removing it suggests either that they solved problems the rest of the field is still grappling with, or that they stopped thinking those problems were worth acknowledging in their legal filings.
The same year, they stopped wanting to “help the world build safe AI technology” and pivoted to developing it themselves. The shift from facilitator to sole developer is enormous, and nobody seemed to notice because it was buried in tax forms.
The Safety Disappearing Act
Here’s the kicker. In 2024, they reduced the entire mission statement to one sentence: “OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.”
No mention of safety. No mention of being “unconstrained by a need to generate financial return.” Just a clean, corporate-friendly statement that could mean anything or nothing.
This happened after the whole Sam Altman firing and rehiring drama. After multiple safety team members left. After they dissolved the Superalignment team. The mission statement caught up with reality.
What This Means For The Rest Of Us
I’m not surprised that a company backed by billions in Microsoft funding eventually dropped the pretense of not caring about financial returns. What bugs me is how the safety language evaporated while they’re racing toward AGI.
Every other AI lab points to OpenAI as the reason they need to move fast. “We can’t let OpenAI get there first” is the justification for cutting corners everywhere. When the company that set the pace for the entire industry quietly removes safety from their legal mission statement, that’s not just their problem.
The tax filings don’t lie the way blog posts and press releases do. They’re legal documents with real consequences. Watching the evolution from “openly share our plans” to “ensure AGI benefits all of humanity” in eight years is watching a company realize that their original mission was incompatible with the amount of capital and computing power needed to train frontier models.
Maybe the 2016 mission was always naive, destined to collide with economic reality. But I wish they’d been more honest about that collision when it happened, instead of letting us discover it years later by diffing their tax returns.