OpenAI has publicly backed the creation of an international oversight organization for artificial intelligence, proposing a framework modeled after the International Atomic Energy Agency (IAEA). The announcement came just hours before President Donald Trump arrived in Beijing for a high-stakes summit with Chinese President Xi Jinping on May 14, 2026.
Chris Lehane, OpenAI’s Vice President of Global Affairs, stated that the United States should leverage its current leadership in AI technology to establish a global governance mechanism that produces safer, more resilient systems. The proposed body would include China as a member, according to Bloomberg.
This is not a brand-new idea from OpenAI. The company has been advocating for IAEA-style AI oversight since 2023, when CEO Sam Altman first floated the concept during congressional testimony. But the timing of this latest push is far more significant, arriving at a moment when AI governance is no longer theoretical.

Why OpenAI Is Pushing for AI Governance Now
The proposal did not appear in a vacuum. Several major developments have converged in 2026 that make this announcement strategically critical for OpenAI:
- The Trump-Xi summit in Beijing (May 14-15) is the first U.S. presidential state visit to China since 2017. AI safety and competition are confirmed agenda items, with White House officials signaling interest in establishing a formal communication channel between the two countries on AI matters.
- The Musk vs. OpenAI trial, now in its third week in Oakland, California, has put OpenAI’s governance and corporate structure under intense public scrutiny. Sam Altman testified that the company’s nonprofit should not be controlled by any single individual.
- Microsoft’s $100 billion investment in the OpenAI partnership was revealed in court testimony this week, underscoring the enormous financial stakes behind AI development.
- OpenAI’s potential IPO, which could value the company at approximately $1 trillion, is now entangled with governance questions. Reports indicate CFO Sarah Friar has privately suggested delaying the offering to 2027. The IPO debate has also intensified after reports of a major employee share sale, fueling fresh speculation around OpenAI’s public-market timeline.
In short, OpenAI’s governance push serves both a policy goal and a business strategy. By positioning itself as a responsible actor advocating for oversight, the company strengthens its narrative ahead of going public while also shaping the regulatory environment in its favor.
What the Proposed AI Governance Body Would Look Like
Lehane’s proposal centers on connecting the U.S. Commerce Department’s Center for AI Standards and Innovation (CAISI) with international AI safety institutes. CAISI, housed within the National Institute of Standards and Technology (NIST), already evaluates frontier AI models before public deployment.
Key elements of what OpenAI envisions:
- IAEA-style structure: Just as the IAEA monitors nuclear energy development and enforces non-proliferation standards, an AI governance body would track computing power usage, set deployment standards, and verify compliance across borders.
- U.S. leadership with Chinese participation: OpenAI wants the U.S. to lead the governance framework, with China included as a member rather than excluded entirely.
- CAISI as the foundation: OpenAI and Anthropic were the first companies to sign voluntary testing agreements with CAISI back in 2024. Google DeepMind, Microsoft, and xAI signed similar agreements in May 2026, according to The Hill.
- Liability protections for participating companies: OpenAI has previously called for Congress to provide companies that partner with CAISI with preemption of state-level AI regulations.
This framework would effectively give participating AI companies a seat at the governance table while potentially shielding them from the patchwork of state-level regulations that have been proliferating across the U.S.
The U.S.-China AI Race: Context Behind the Proposal
The backdrop of this governance push is the intensifying AI competition between Washington and Beijing.
The performance gap between American and Chinese AI models has narrowed dramatically. Researchers at Stanford noted in their 2026 AI Index Report that the two countries are now essentially neck-and-neck in model capabilities. Chinese models like DeepSeek’s R1 and Moonshot’s Kimi K2 have demonstrated performance on par with systems from OpenAI and Google, often developed at a fraction of the cost.
China has not been passive on governance either. At the World Artificial Intelligence Conference (WAIC) in Shanghai in July 2025, Chinese Premier Li Qiang proposed the creation of a World Artificial Intelligence Cooperation Organization (WAICO). The proposal was accompanied by a 13-point plan submitted to the United Nations, along with two new AI dialogue mechanisms.
The numbers illustrate the scale of the competition:
- U.S. private AI investment reached $109.1 billion in 2024
- China’s AI core industry was valued at $84 billion (600 billion yuan) as of early 2025
- Global hyperscaler capital expenditure is estimated at $527 billion for 2026
- The EU’s entire AI Act enforcement budget is €1 billion, a fraction of what a single frontier model training run now costs
This asymmetry explains why both sides see governance as a strategic lever rather than purely a safety mechanism.

What the Trump-Xi Summit Means for AI Policy
Trump’s Beijing visit, scheduled for May 14-15, places AI governance in the context of broader geopolitical negotiations that include trade, the Iran conflict, Taiwan, and technology export controls.
White House officials have indicated that the summit could result in the establishment of a formal U.S.-China AI communication channel. However, experts remain skeptical about China’s willingness to engage in meaningful safety commitments.
Analysts at the Council on Foreign Relations have warned that China’s interest in AI safety talks is driven more by a desire to gain technology access than by genuine safety commitments. A telling example: when the two countries held their only AI safety dialogue in 2024, the American side focused on shared technical risks while the Chinese delegation used the meeting to raise objections about chip export restrictions.
The delegation traveling with Trump includes CEOs from Apple, Tesla, Nvidia, Meta, Goldman Sachs, BlackRock, and other major corporations. Nvidia CEO Jensen Huang’s presence is particularly notable, given that the company’s chip sales to China remain one of the most contentious bilateral issues.
OpenAI’s Broader Strategic Moves in 2026
The governance proposal fits within a pattern of aggressive strategic positioning by OpenAI throughout 2026:
- GPT-5.5 and GPT-5.5-Cyber launch: The company released its latest model and a cybersecurity-specific variant for vetted security teams, directly competing with Anthropic’s Mythos model.
- ChatGPT advertising: OpenAI began testing ads within ChatGPT and launched an Ads Manager platform with CPC bidding, opening a new revenue stream.
- OpenAI Deployment Company: A new consulting arm backed by a $4 billion acquisition of Tomoro, signaling a move into enterprise AI services.
- EU engagement: OpenAI offered European institutions access to GPT-5.5-Cyber while Anthropic has been slower to grant EU access to Mythos, giving OpenAI a diplomatic edge in regulatory discussions.
- Revenue-sharing cap with Microsoft: OpenAI and Microsoft agreed to cap revenue-sharing payments at $38 billion, potentially saving OpenAI $97 billion through 2030.
Each of these moves reflects a company preparing for the scrutiny that comes with being public while simultaneously trying to shape the regulatory environment.
What Happens Next
The immediate question is whether the Trump-Xi summit produces any concrete framework for U.S.-China AI cooperation. If a formal communication channel is established, it would represent the first structured bilateral dialogue on AI since the Biden administration.
For OpenAI specifically, the governance proposal is likely to face skepticism from multiple directions:
- Decentralized AI advocates will view centralized governance as a threat to open-source development and permissionless innovation.
- Smaller AI companies may see IAEA-style regulation as a moat that benefits incumbents with the resources to comply.
- Chinese policymakers have historically used governance discussions to push for technology access rather than accept external oversight.
- U.S. lawmakers remain divided between those who want federal AI regulation and those who view any regulation as an obstacle to maintaining technological supremacy.
The outcome of the Trump-Xi summit, combined with the Musk trial verdict expected next week, will likely determine whether this governance proposal gains real traction or remains aspirational. In the meantime, OpenAI continues to position itself not just as an AI company, but as the company that wants to write the rules.
FAQs
What is the global AI governance body OpenAI supports?
OpenAI is advocating for an international oversight organization modeled after the International Atomic Energy Agency (IAEA). This body would set safety standards for AI development, track computing resources dedicated to AI research, and verify that participating nations and companies comply with agreed-upon guidelines.
Why is OpenAI proposing this now?
The timing coincides with the Trump-Xi summit in Beijing, the ongoing Musk trial questioning OpenAI’s governance structure, and the company’s preparation for a potential IPO. These converging events make governance positioning both a policy priority and a business necessity.
What is CAISI and how does it relate to this proposal?
The Center for AI Standards and Innovation (CAISI) is a division of the U.S. Commerce Department that evaluates frontier AI models before deployment. OpenAI envisions CAISI as the foundation for a broader international governance network connecting AI safety institutes across multiple countries.
How does this affect the AI industry?
If implemented, an IAEA-style governance body could establish mandatory testing and compliance standards for the most powerful AI systems. This would likely benefit large incumbents like OpenAI that already participate in voluntary testing programs while creating new barriers for smaller competitors and open-source projects.
The post OpenAI Calls for Global AI Governance Body Led by the U.S. and China appeared first on Memeburn.