Corporate Structures in AI: OpenAI vs. Anthropic

As artificial intelligence advances toward potentially transformative capabilities, the corporate structures of the companies building these systems have become a matter of public concern. How do you build an organization that can compete for top talent, attract billions in investment, and develop cutting-edge technology, while also maintaining a genuine commitment to safety and public benefit?

Two of the world’s leading AI labs, OpenAI and Anthropic, have taken markedly different approaches to this challenge. Both have adopted the Public Benefit Corporation (PBC) form, but they’ve layered on very different governance mechanisms. This post examines each structure in detail and then compares them on the dimensions that matter most: accountability, safety protections, and alignment with their stated missions.

OpenAI: The Nonprofit-Controlled PBC

Historical Context

OpenAI was founded in 2015 as a nonprofit research laboratory with an ambitious mission: to ensure that artificial general intelligence (AGI) benefits all of humanity. The nonprofit structure was deliberately chosen to insulate the organization from commercial pressures. However, by 2019, leadership recognized that the capital requirements for frontier AI development far exceeded what philanthropic donations could provide. This led to the creation of a “capped-profit” subsidiary: an unusual hybrid that allowed investors to earn returns of up to 100x their investment while theoretically keeping the nonprofit board in control.

This structure proved awkward and limited fundraising. The tensions between OpenAI’s nonprofit mission and commercial imperatives came to a dramatic head in November 2023, when the nonprofit board briefly fired CEO Sam Altman, citing a loss of confidence. Altman was reinstated days later, but the episode exposed deep conflicts within the organization and raised questions about whether its governance structure was fit for purpose.

The 2025 Restructuring

On October 28, 2025, OpenAI completed a major restructuring that simplified its corporate architecture. The organization now consists of two entities:

The OpenAI Foundation: the renamed nonprofit, which retains the original founding mission. The Foundation now holds a 26% equity stake in the for-profit arm, valued at approximately $130 billion based on current valuations. This makes it one of the most well-resourced philanthropic organizations in history.

OpenAI Group PBC: a Delaware Public Benefit Corporation that houses the commercial operations. Unlike a conventional corporation, a PBC is legally required to advance its stated mission and to consider the broader interests of all stakeholders, not just shareholders. The mission stated in OpenAI Group’s charter is identical to the Foundation’s: ensuring AGI benefits all of humanity.

Crucially, the OpenAI Foundation retains governance control over the PBC through special voting rights. The Foundation’s board has the sole power to appoint all members of the OpenAI Group board and can remove directors at any time. For now, both boards share the same directors: Bret Taylor (Chair), Adam D’Angelo, Dr. Sue Desmond-Hellmann, Dr. Zico Kolter, Retired U.S. Army General Paul M. Nakasone, Adebayo Ogunlesi, Nicole Seligman, Larry Summers, and CEO Sam Altman.

The stated intention is for the Foundation board to gradually diverge from the Group board over time. Within one year of the restructuring, a second Foundation director will transition to serving exclusively on the Foundation Board (in addition to Dr. Kolter, who already holds this position).

The Safety and Security Committee

The Safety and Security Committee (SSC) was established in 2024 to provide governance over OpenAI’s safety and security practices. Following the restructuring, the SSC remains a committee of the OpenAI Foundation, maintaining its role as an oversight body for all of OpenAI, including the for-profit Group.

The SSC is chaired by Dr. Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Other members include Adam D’Angelo (Quora co-founder and CEO), retired U.S. Army General Paul M. Nakasone (former commander of U.S. Cyber Command), and Nicole Seligman (former EVP and General Counsel of Sony Corporation).

The committee’s powers are significant but not absolute. According to Dr. Kolter, the SSC has “the ability to do things like request delays of model releases until certain mitigations are met.” The SSC is briefed by company leadership on safety evaluations for major model releases and, along with the full board, exercises oversight over model launches. This includes the authority to delay a release until safety concerns are addressed.

Dr. Kolter occupies a unique position in the new structure. He serves on the Foundation board with full voting rights but sits only as a non-voting observer on the Group board. However, he has “full observation rights” to attend all for-profit board meetings and access information about AI safety decisions. His role was explicitly highlighted in the memoranda of understanding with both the California and Delaware Attorneys General as part of the regulatory approval process.

Anthropic: The Long-Term Benefit Trust Model

Founding Philosophy

Anthropic was founded in 2021 by former OpenAI executives, including siblings Dario and Daniela Amodei. The founders believed that AI might soon become immensely powerful and that the companies developing it had not yet been constrained by the laws and norms that govern other powerful technologies. Crucially, they also believed that the safety and social benefit of AI go hand in hand with commercial success: Anthropic could be a leader on safety only if it was also a leader in technical development and commercialization.

This perspective led to a fundamentally different design choice than OpenAI’s original nonprofit model. Rather than creating a nonprofit that controls a for-profit, Anthropic organized directly as a Delaware Public Benefit Corporation with a stated mission to “responsibly develop and maintain advanced AI for the long-term benefit of humanity.”

The PBC Form

As a PBC, Anthropic’s board has the legal latitude to balance the financial interests of stockholders with the company’s public benefit purpose and the interests of those materially affected by the company’s conduct. This is a meaningful protection: shareholders would find it more difficult to successfully sue Anthropic’s board for prioritizing safety over profits.

However, Anthropic’s founders recognized that the PBC form alone was not sufficient. While it makes it legally permissible for directors to balance public interests with profit maximization, it does not make directors directly accountable to the public or align their incentives with public interests. The PBC structure gives the board “a flexibility, not a mandate,” as Anthropic’s general counsel Brian Israel has put it.

The Long-Term Benefit Trust

Anthropic’s main governance innovation is the Long-Term Benefit Trust (LTBT), an independent body designed to provide the accountability and incentives that the PBC form alone cannot supply.

The Trust is organized as a “purpose trust” under Delaware common law. Unlike most trusts, which exist to benefit specific beneficiaries, the LTBT is managed to achieve a purpose: ensuring Anthropic balances stockholder interests with its public benefit mission. The Trust holds a special class of shares called Class T Common Stock, which grants the Trustees the power to elect an increasing number of Anthropic’s board members over time.

The phase-in schedule is tied to both time and fundraising milestones. Initially, the Trust could elect one of five directors. Once Anthropic had raised over $6 billion in funding, the Trust gained the authority to elect three directors, a majority of the board. This majority control was achieved by late 2024, well ahead of the May 2027 deadline originally specified.

The Trust currently has four members: 

• Neil Buddy Shah (Chair, CEO of the Clinton Health Access Initiative)

• Kanika Bahl (CEO of Evidence Action)

• Zach Robinson (Interim CEO of Effective Ventures US)

• Richard Fontaine (CEO of the Center for a New American Security, appointed in 2025 for his national security expertise).

Two original trustees have departed: Jason Matheny (former RAND Corporation CEO) stepped down in December 2023 to avoid potential conflicts of interest, and Paul Christiano (AI safety researcher) left in April 2024 to lead the U.S. AI Safety Institute.

Key Design Features

Several aspects of the LTBT’s design are worth highlighting:

Financial disinterest: Trustees have no financial stake in Anthropic. They are explicitly insulated from the company’s commercial success, allowing them to make decisions without profit-driven incentives.

Short terms: Trustees serve only one-year terms, ensuring frequent reevaluation by their peers. Future trustees are elected by the existing trustees, in a manner similar to hospital or nonprofit boards.

Advance notice rights: The Trust receives advance notice of certain actions that could significantly alter the corporation or its business, giving trustees the opportunity to intervene before consequential decisions are finalized.

Amendment safeguards: The Trust Agreement includes provisions allowing changes without trustee consent if sufficiently large supermajorities of stockholders agree. These thresholds increase over time, reflecting the accumulation of experience and the increasing stakes as AI technology grows more powerful.

Comparative Analysis

Philosophy of Control

The two structures embody fundamentally different theories of how to ensure AI development serves the public interest. Put simply: OpenAI places its safety guardian at the gate, reviewing what leaves the building. Anthropic places its guardian at the throne room door, choosing who sits inside.

OpenAI’s Safety and Security Committee functions as a catcher of bad products, an operational checkpoint that reviews specific model releases and can request delays until concerns are addressed. It’s reactive by design, intervening at the point where potentially problematic technology might ship to the world.

Anthropic’s Long-Term Benefit Trust functions as a kingmaker: it doesn’t review individual products at all. Instead, it selects the people who will make all the decisions, betting that mission-aligned trustees will choose mission-aligned directors who will, in turn, make sound choices across the board. It’s proactive in shaping who decides rather than what gets decided.

Beyond this distinction, OpenAI’s broader structure also places a nonprofit at the apex of the corporate hierarchy. The Foundation’s board has absolute authority to appoint and remove directors of the for-profit PBC. In theory, this means that if commercial pressures ever threatened to override safety concerns, the Foundation could simply replace the leadership. The nonprofit has no duty to maximize shareholder value; its only obligation is to its charitable mission.

Anthropic’s approach accepts the traditional corporate form and its accountability to shareholders, but layers on the LTBT to ensure that directors, while still owing fiduciary duties to stockholders, are elected by trustees who are explicitly insulated from financial interests and chosen for their expertise in AI safety, national security, and public benefit.

The practical implications of this difference are significant. OpenAI learned a hard lesson about its structure in November 2023: when the nonprofit board fired Sam Altman, employees threatened mass resignation and investors applied enormous pressure, and the board backed down. This revealed that while the nonprofit has formal authority, exercising that authority against the wishes of employees and investors is extremely difficult.

Anthropic’s structure tries to avoid this dynamic by making board members accountable to shareholders from the start, while using the LTBT to ensure those directors are selected by people with mission-aligned incentives. The trustees have no company to lose, no employees threatening to quit, no investors to appease, at least in theory.

Safety Mechanisms

OpenAI has created a dedicated Safety and Security Committee with explicit authority to delay model releases. This is a concrete, operational mechanism: when a new model is ready for deployment, the SSC can say “not yet” until its concerns are addressed. Dr. Kolter’s unique position, a voting member of the Foundation board and a non-voting observer on the Group board, gives him visibility into both governance and commercial decision-making.

Anthropic has no equivalent dedicated safety committee with release authority. Instead, its Responsible Scaling Policy (RSP) creates a framework of “AI Safety Levels” (ASLs) that trigger specific security and safety requirements as models become more capable. The LTBT receives reports on RSP implementation, but its primary power is structural, electing directors rather than operational.

Both approaches have merit. OpenAI’s SSC provides a clear “pause button” for individual releases, but its effectiveness depends entirely on the committee’s willingness to use it and the board’s willingness to back it up. Anthropic’s approach builds safety considerations into the selection of leadership itself, betting that directors chosen by mission-focused trustees will make better decisions across the board.

Criticisms and Limitations

On OpenAI: Critics point out that with the Foundation and Group boards currently composed of nearly identical members, “having the power to fire yourself” doesn’t constitute meaningful oversight. Law professor Manuel Gómez has argued that the PBC form gives companies “a lot of breadth on when they decide to follow profit and when they decide to follow their nonprofit mission,” making it “a bit of an empty, unenforceable promise.” Public advocacy groups like Public Citizen have argued that the Foundation will now function primarily to advance the interests of OpenAI’s for-profit arm, inverting the original relationship.

On Anthropic: The LTBT currently has only four trustees (down from an intended five), has appointed only one board member despite having the authority to appoint three, and none of its current members have deep technical expertise in AI. Some observers question whether the trustees have the knowledge necessary to evaluate whether Anthropic is making sound safety decisions, as opposed to sound ethical or policy decisions. 

Perhaps the most fundamental criticism applies to both: neither structure provides a mechanism for the general public to enforce the companies’ stated missions. There’s no way for affected communities, civil society organizations, or individual citizens to sue either company for failing to live up to their public benefit obligations. Enforcement depends entirely on state Attorneys General and the internal dynamics of each company’s governance.

Conclusion

Both OpenAI and Anthropic deserve credit for attempting to solve one of the hardest problems in technology governance: how to build organizations that can develop transformative AI while remaining accountable to something beyond shareholder returns. Each has created structures that go far beyond what typical corporations offer.

OpenAI’s nonprofit-controlled PBC provides clear hierarchical authority and a dedicated safety committee with operational powers. Anthropic’s LTBT provides independence from financial incentives and a gradual transfer of board control to mission-focused trustees.

Yet both structures ultimately rely on the same fragile foundation: the integrity and judgment of a small group of people making decisions with enormous consequences. As Daniel Colson, executive director of the AI Policy Institute, has noted, governance of AI “is not something that any corporate governance structure is adequate for.” These are experiments in corporate law, not substitutes for democratic oversight and binding regulation.

The real test for both structures will come not in their design documents but in practice, when someone tries to make these companies do something their leadership or their mission-focused oversight bodies don’t want. We don’t yet know which of these experiments will prove more robust. What we do know is that these are some of the most serious attempts yet made to align corporate incentives with the public interest in a high-stakes domain. Whether they succeed may shape the future of AI governance for decades to come.