OpenAI Wrote a New Deal. On Its Terms.
OpenAI just laid out its version of a New Deal for AI. That should make you a little uncomfortable.
I recently read OpenAI’s new policy paper on the future of AI and society. Then I read Ronan Farrow’s reporting on Sam Altman. Then I went back and read the OpenAI policy paper again.
Reading them back to back is a different experience. On one side, OpenAI is laying out how AI will reshape work, prosperity and society. Wealth will concentrate, governments will misuse the technology, etc. It’s all there, stated very plainly.
They actually get it. Pretty swiftly, this went from being a product story to looking like a power and control story.
On the flip side, Farrow’s reporting includes direct accounts from ex-employees of OpenAI who questioned whether leadership was being fully candid internally at key moments. You can’t really separate the two.
In the policy paper, they keep coming back to the New Deal. This is the analogy they want: a big shift, a country adapting, new systems being built, spreading the wealth.
But let’s actually sit with that.
The New Deal wasn’t a conversation or a document. It was a fight against power and entrenched interests. The government stepped in because concentrated power started to distort everything in its path. They built agencies that could actually enforce things. They backed labor and broke up monopolies.
They didn’t ask the OpenAIs of that time to write the rules. This is the part that matters.
Because what’s happening now is different. OpenAI is essentially saying: we see the risks, we agree that things need to change and here’s a vision for how it should work.
Some of it is legit, specifically the parts where they acknowledge and describe how disruptive this technology will be.
The rest is where it gets conveniently clean and manageable. The kind of solutions that don’t box them in. They’re getting ahead of the solutions that wouldn’t inherently benefit them.
If you’re in control of something this powerful, you don’t design rules that meaningfully limit you. You design rules that look reasonable on the surface but benefit you in ways that normies can’t see.
And this isn’t unique to OpenAI, this is how incentives work.
There are many parts of their document that are serious. Take the Public Wealth Fund idea: not because it’s ready to go tomorrow, but because it says something most companies would avoid saying.
They essentially say that if this works, the upside will concentrate. A few companies will hold a lot of the value. Everyone else gets whatever trickles downstream.
Same for the tax section. They’re admitting that the current tax system won’t hold if income shifts away from labor and toward capital.
The paper’s safety net ideas are more responsive than what I’d expect, calling for support ramping up when disruption spikes, then pulling back when things stabilize.
This tells me they’re planning for disruptions and don’t expect smooth sailing.
The bigger gap is what they don’t touch.
I’ve spent most of my career inside of the attention economy. I’ve seen what happens when content gets cheaper to produce and easier to distribute. I’ve seen the incentives bend in real time.
AI brings this dynamic to another level. These systems generate text, images and video. This is the raw material of how people understand what’s happening. And now these systems can generate content instantly, at scale.
The policy paper talks about trust, verification and democratic values… but stays at a high level.
We know where this goes. Content gets cheaper… a lot cheaper. It gets easier to target, tweak and flood every platform with slightly different versions of the same message until nobody is quite sure what’s real anymore or what matters.
We’ve seen a smaller version of this with social media. That seemed massive but was a baby step compared to what’s coming.
And everything else they’re talking about in the paper (accountability, oversight, input from normal people) depends on all of us agreeing what’s real in the first place.
This is the part OpenAI mostly skips.
They talk a lot about success and making sure people have access to AI. The words “affordable,” “widespread,” and “something like a right” are peppered throughout.
And this stuff is important… but access isn’t control.
They’ll allow everyone access while keeping the power to make all of the decisions: how it’s built, how it behaves, how value flows. All of it controlled by a small handful of companies.
It’s not about access, it’s about who has the power to decide. It’s the difference between being a user and having power.
The “Right to AI” idea blurs this line. It sounds like a public good but it also expands the customer base for the people building these tools. Both things can be true.
Some of the rest feels familiar.
Governance ideas that sound solid until you ask who actually enforces them. Public input that feels more like being heard than having power. Auditing that focuses on only the highest-end systems, which makes sense until you realize that’s not where most of the risk sits.
Nothing in this policy paper is crazy… but that’s almost the point.
And then there’s where this actually lands. Not in DC but in states, schools, hospitals and municipal budgets.
These places are already trying to figure out what this will mean for jobs, energy and what gets built next.
States like Connecticut are already in the thick of it. Bills moving, Silicon Valley lobbying money showing up, rules getting shaped before most people are even paying attention.
Corporations are spending real money to influence how the AI rules land. It’s not theoretical, it’s the ground game being played right now.
And it’s happening below the level OpenAI’s policy paper was written for.
Sam Altman and OpenAI understand how big this is.
They’re also the ones closest to the center of it. They’re putting the first version of the rules on the table.
But this only works if they’ve built up a certain level of trust.
They’d need to be straight about the risks, follow through on constraints and help design something that doesn’t just entrench them further.
The funny thing is, the argument inside OpenAI (not that long ago) wasn’t about policy. It was a fight about whether the people in charge were being fully candid about what was happening inside of the company.
It’s impossible not to let that change how you read everything else.
The New Deal didn’t work because people trusted the right executives. It worked because it didn’t need that.
It built systems that could actually check power. It didn’t ask companies at the center of it to write the rules.
This time, they are.