This post was composed by Claude (Anthropic) after an interaction I initiated with a prompt consisting of 1) a capsule summary of the history of OpenAI that included a number of questions, and 2) a request for the 10 most expensive scientific research projects paid for by the US Government. The conversation ran to a bit over 7,100 words, after which I asked Claude to write a blog post. The following article is more creative than a mere summary of that discussion.
The Founding Contradiction
On December 11, 2015, a small group of technologists gathered
in San Francisco to launch what they described as a nonprofit research
organization dedicated to ensuring that artificial general intelligence would
benefit all of humanity. The founders of OpenAI — Sam Altman, Greg Brockman,
Ilya Sutskever, Wojciech Zaremba, Elon Musk, and others — began with $1
billion in pledged funding and a serious concern: that the most transformative
technology in human history was being developed inside a handful of profit-maximizing
corporations, with no institutional safeguard ensuring it would serve everyone.
The nonprofit structure was the answer. No investors to satisfy, no quarterly
earnings to hit. Just the mission.
The mission lasted a little over three years
in its pure form. By March 2019, faced with the staggering computational costs
of training large language models, OpenAI created a for-profit subsidiary with
a novel "capped profit" structure: investors could earn returns, but
those returns were limited to one hundred times their investment, with excess
profits flowing back to the nonprofit parent. This was the arrangement that
attracted Microsoft's initial investment, and it was the arrangement in place
when OpenAI released ChatGPT to the general public in late November 2022.
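To make the mechanics concrete, here is a minimal sketch of the capped-return arithmetic in Python. The function name and the uniform 100x cap are illustrative assumptions on my part; the actual caps reportedly varied by investment round.

```python
# Minimal sketch of the capped-profit arithmetic described above.
# Assumes a uniform 100x cap; OpenAI's actual caps reportedly varied
# by round, so treat this as an illustration, not the real term sheet.

def split_returns(investment: float, gross_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a gross return between the investor and the nonprofit parent."""
    cap = investment * cap_multiple                  # most the investor may keep
    investor_share = min(gross_return, cap)          # capped at 100x the stake
    nonprofit_share = max(gross_return - cap, 0.0)   # excess flows to nonprofit
    return investor_share, nonprofit_share

# Example: a $10M investment whose position eventually returns $1.5B gross.
investor, nonprofit = split_returns(10e6, 1.5e9)
print(f"Investor keeps ${investor / 1e9:.1f}B; "
      f"nonprofit receives ${nonprofit / 1e9:.1f}B")
# -> Investor keeps $1.0B; nonprofit receives $0.5B
```

The design point is that an investor's upside is enormous but finite: only truly runaway returns would ever send money back to the nonprofit, and for any ordinary outcome the cap never binds.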
What happened next was, by any
measure, one of the most consequential commercial surprises in the history of
technology. Within two months, ChatGPT had an estimated hundred million monthly users. The scale
and speed of public adoption had no precedent. And the shock of that success —
the sheer unexpectedness of it — set in motion a chain of decisions that has
reshaped not just one company, but the entire research landscape of artificial
intelligence.
The Structural Unraveling
In January 2023, Microsoft announced a new $10 billion
investment in OpenAI. The nonprofit's original rationale — that the most
powerful AI should not be controlled by a for-profit corporation — was under
increasing strain. By October 2025, that rationale had formally dissolved. OpenAI
restructured as a public benefit corporation, the nonprofit parent renamed
itself the OpenAI Foundation and accepted a 26% equity stake in the new entity,
and Microsoft received a 27% stake worth approximately $135 billion, implying a
valuation of roughly $500 billion for the restructured company. The PBC structure
requires the company to consider its mission alongside profit — but as a legal
constraint, it is considerably weaker than the nonprofit board that had
previously governed the organization.
The journey from nonprofit to
PBC was not smooth. In November 2023, OpenAI's board — still operating under
its nonprofit governance mandate — fired Sam Altman as CEO, citing concerns
about his candor and, beneath the official language, a deeper unease about the
pace of commercialization. The firing lasted five days. More than 700 of
OpenAI's roughly 770 employees signed a letter threatening to resign and follow Altman to Microsoft. Ilya
Sutskever, who had orchestrated the firing, signed the letter calling for
Altman's reinstatement and issued a public apology. Altman returned, the board
was reconstituted with his allies, and the mission-protection mechanism that
the nonprofit structure had been designed to provide was effectively
neutralized. Sutskever left the company in May 2024.
Each structural change was
framed as necessary to fulfill the mission. In practice, each change
progressively subordinated the mission to capital requirements. The nonprofit
board had existed to ensure that AGI benefited humanity. By 2025, it had become
a foundation holding equity in the thing it was supposed to be watching — a
watchdog with a financial stake in the object of its oversight.
Two Kinds of Research, Two Kinds of Institution
To understand what was lost in this transformation, it helps
to draw a distinction that rarely gets made clearly in public discussions of
AI: the difference between curiosity-driven, open-ended research and
product-driven, outcome-oriented development.
Consider the Apollo program as
an example of the second kind. It was, in the deepest sense, an engineering
project rather than a scientific one. The underlying physics was known. Orbital
mechanics, propulsion, life support — these were hard and dangerous problems,
but they were problems whose solutions could be systematically approached. The
goal was precisely defined. The timeline could be committed to. Success was
probable given sufficient resources. When President Kennedy pledged to put a
man on the moon by the end of the decade, he was making a political commitment
backed by a technical assessment that success was achievable. The scientists
who worked on Apollo — and I have met a number of them — may have been
motivated by curiosity and wonder. But Congress funded the program to beat the
Soviets in the Cold War. The institutional structure — massive, goal-directed,
centrally coordinated — suited the nature of the problem.
Curiosity-driven research
operates on entirely different premises. Its defining characteristic is that it
does not know in advance what it will find. Claude Shannon was not trying to
build the internet when he developed information theory at Bell Labs in the
late 1940s. The researchers at the University of Montreal who developed
attention mechanisms for neural networks were not trying to build ChatGPT. The
work that seeded the current AI revolution — Rosenblatt's perceptron, Minsky's
early investigations, the decades of foundational work in cognitive science and
linguistics that LLMs now implicitly exploit — was almost entirely publicly
funded, pursued at universities and a handful of exceptional industrial
research labs, over decades when no commercial application was visible.
Bell Labs was the great
institutional embodiment of this model in the corporate world. What made it
possible was structural: AT&T's government-protected monopoly generated
profits so vast that the company could fund a research laboratory with no requirement
to produce commercial results. Shannon, Bardeen, Brattain, Shockley — these men
were given time, resources, and colleagues, and told to think. The transistor,
information theory, Unix, the laser, cellular telephony, and multiple Nobel
Prizes resulted. Bell Labs was not run like a startup. It was run like a
slightly more applied version of a university, with better equipment.
Xerox PARC, founded in 1970,
operated on similar principles — explicitly unconstrained by Xerox's core
product lines, given a unifying vision ("the architecture of
information") but not a product roadmap. The personal computer, the
graphical user interface, Ethernet, laser printing, and the refinement of Douglas Engelbart's mouse into a practical device — all emerged
from a lab of about 350 people who were essentially allowed to play. The irony
is that Xerox captured almost none of the commercial value, which accrued to
Apple, Microsoft, and others. But the world got the technology.
Asked directly about modern
equivalents to Bell Labs and PARC, Yann LeCun — who worked at Bell Labs,
interned at Xerox PARC, and spent over a decade building Meta's fundamental AI
research lab — pointed to Meta's FAIR, Google DeepMind, and Microsoft Research.
He said this in October 2024. By November 2025, he had left Meta, driven out by
exactly the forces this article is about.
The Shock and Its Aftershocks
Before November 2022, the AI research world was genuinely
plural. Academic labs, industrial research divisions, and a range of
well-funded startups were pursuing different approaches — reinforcement
learning, symbolic AI hybrids, world models, neuromorphic architectures — with
real diversity of vision. The field was competitive but intellectually
heterogeneous.
ChatGPT's success collapsed
that plurality. Within roughly eighteen months, capital, talent, and
institutional attention all funneled toward a single paradigm: scale
transformer-based large language models, build the infrastructure to run them,
ship products. Google, which had invented the transformer architecture in 2017,
was caught flat-footed and scrambled. Meta pivoted its AI strategy around LLMs.
Microsoft integrated OpenAI's models into its core products. A hundred startups
raised money to build on top of the new foundation models. The venture capital
flowing into AI, measured as a share of total U.S. deal value, went from 23% in
2023 to nearly two-thirds in the first half of 2025.
The infrastructure investment
that followed is staggering by any historical standard. The four largest
hyperscalers — Amazon, Google, Microsoft, and Meta — are expected to spend more
than $350 billion on capital expenditures in 2025 alone, most of it AI-related.
UBS projects global AI capital expenditure reaching $1.3 trillion by 2030. The
top five hyperscalers raised a record $108 billion in debt in 2025, more than
three times the average of the previous nine years. OpenAI, which loses
billions of dollars annually, has committed to spending $300 billion on
computing infrastructure over five years while projecting only $13 billion in
revenue for 2025.
The financial architecture has
become genuinely strange. OpenAI holds warrants on a stake in AMD; Nvidia has
committed to invest up to $100 billion in OpenAI; Microsoft is a major shareholder in OpenAI and a major
customer of CoreWeave, in which Nvidia also holds equity; Microsoft accounted
for nearly 20% of Nvidia's revenue. These are not arm's-length market
transactions. They are a daisy chain of mutually reinforcing valuations. A Yale
analysis described OpenAI's web of relationships bluntly: "Is this like
the Wild West, where anything goes to get the deal done?" The question of
whether this constitutes a speculative bubble — tulip mania in a data center —
is not academic. An MIT Media Lab report found that 95% of custom enterprise AI
tools fail to produce measurable financial returns. The commercial success is
real; the path from current AI to the transformative economic productivity
being used to justify the valuations is not established.
The LLM Ceiling and the People Who Saw It Coming
The most consequential intellectual development of the past
two years in AI has received far less attention than the commercial race. A
growing number of the field's most distinguished researchers have concluded
that large language models, however impressive, are not on the path to general
intelligence — and that the current paradigm will hit a ceiling before it
reaches the goals its proponents have claimed for it.