In a paper submitted directly to the Trump administration, OpenAI outlines a Cold Warrior exhortation to divide the world into camps.

June 3 2025 (TheIntercept.com)

OpenAI CEO Sam Altman speaks at the White House on Jan. 21, 2025, after donating $1 million to Donald Trump’s inauguration. Photo: Jim Watson/AFP via Getty Images
OPENAI HAS ALWAYS said it’s a different kind of Big Tech titan, founded not just to rack up a stratospheric valuation of $400 billion (and counting), but also to “ensure that artificial general intelligence benefits all of humanity.”
The meteoric machine-learning firm announced itself to the world in a December 2015 press release that laid out a vision of technology built to benefit all people as people, not as citizens of any particular country. There were neither good guys nor adversaries. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole,” the announcement stated with confidence. “Since our research is free from financial obligations, we can better focus on a positive human impact.”
Early rhetoric from the company and its CEO, Sam Altman, described advanced artificial intelligence as a harbinger of a globalist utopia, a technology that wouldn’t be walled off by national or corporate boundaries but enjoyed together by the species that birthed it. In an early interview with Altman and fellow OpenAI co-founder Elon Musk, Altman described a vision of artificial intelligence “freely owned by the world” in common. When Vanity Fair asked in a 2015 interview why the company hadn’t set out as a for-profit venture, Altman replied: “I think that the misaligned incentives there would be suboptimal to the world as a whole.”
Times have changed. And OpenAI wants the White House to think it has too.
In a March 13 white paper submitted directly to the Trump administration, OpenAI’s global affairs chief Chris Lehane pitched a near future of AI built for the explicit purpose of maintaining American hegemony and thwarting the interests of its geopolitical competitors — specifically China. The policy paper’s mentions of freedom abound, but the proposal’s true byword is national security.
OpenAI never attempts to reconcile its full-throated support of American security with its claims to work for the whole planet, not a single country. After opening with a quotation from Trump’s own executive order on AI, the action plan proposes that the government create a direct line for the AI industry to reach the entire national security community, work with OpenAI “to develop custom models for national security,” and increase intelligence sharing between industry and spy agencies “to mitigate national security risks,” namely from China.
In the place of techno-globalism, OpenAI outlines a Cold Warrior exhortation to divide the world into camps. OpenAI will ally with those “countries who prefer to build AI on democratic rails,” and get them to commit to “deploy AI in line with democratic principles set out by the US government.”
The rhetoric seems pulled directly from the keyboard of an “America First” foreign policy hawk like Marco Rubio or Rep. Mike Gallagher, not a company whose website still endorses the goal of lifting up the whole world. The word “humanity,” in fact, never appears in the action plan.
Rather, the plan asks Trump, to whom Altman donated $1 million for his inauguration ceremony, to “ensure that American-led AI prevails over CCP-led AI” — the Chinese Communist Party — “securing both American leadership on AI and a brighter future for all Americans.”
It’s an inherently nationalist pitch: The concepts of “democratic values” and “democratic infrastructure” are both left largely undefined beyond their American-ness. What is democratic AI? American AI. What is American AI? The AI of freedom. And regulation of any kind, of course, “may hinder our economic competitiveness and undermine our national security,” Lehane writes, suggesting a total merging of corporate and national interests.
In an emailed statement, OpenAI spokesperson Liz Bourgeois declined to explain the company’s nationalist pivot but defended its national security work.
“We believe working closely with the U.S. government is critical to advancing our mission of ensuring AGI benefits all of humanity,” Bourgeois wrote. “The U.S. is uniquely positioned to help shape global norms around safe, secure, and broadly beneficial AI development—rooted in democratic values and international collaboration.”
The Intercept is currently suing OpenAI in federal court over the company’s use of copyrighted articles to train its chatbot ChatGPT.
OPENAI’S NEWFOUND PATRIOTISM is loud. But is it real?
In his 2015 interview with Musk, Altman spoke of artificial intelligence as a technology so special and so powerful that it ought to transcend national considerations. Pressed on OpenAI’s goal to share artificial intelligence technology globally rather than keeping it under domestic control, Altman provided an answer far more ambivalent than the company’s current-day mega-patriotism: “If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who?”
He also said, in the early days of OpenAI, that there may be limits to what his company might do for his country.
“I unabashedly love this country, which is the greatest country in the world,” Altman told the New Yorker in 2016. “But some things we will never do with the Department of Defense.” In the profile, he expressed ambivalence about overtures to OpenAI from then-Secretary of Defense Ashton Carter, who envisioned using the company’s tools for targeting purposes. At the time, this would have run afoul of the company’s own ethical guidelines, which for years stated explicitly that customers could not use its services for “military and warfare” purposes, writing off any Pentagon contracting entirely.
In January 2024, The Intercept reported that OpenAI had deleted this military contracting ban from its policies without explanation or announcement. Asked about how the policy reversal might affect business with other countries in an interview with Bloomberg, OpenAI executive Anna Makanju said the company is “focused on United States national security agencies.” But insiders who spoke with The Intercept on conditions of anonymity suggested that the company’s turn to jingoism may come more from opportunism than patriotism. Though Altman has long been on the record as endorsing corporate support of the United States, under an administration where the personal favor of the president means far more than the will of lawmakers, parroting muscular foreign policy rhetoric is good for business.
One OpenAI source who spoke with The Intercept recalled concerned discussions about the possibility that the U.S. government would nationalize the company. They said that at times, this was discussed with the company’s head of national security partnerships, Katrina Mulligan. Mulligan joined the company in February 2024 after a career in the U.S. intelligence and military establishment, including leading the media and public policy response to Edward Snowden’s leaks while on the Obama National Security Council staff, working for the director of national intelligence, serving as a senior civilian overseeing Special Operations forces in the Pentagon, and working as chief of staff to the secretary of the Army.
This source speculated that fostering closeness with the government was one method of fending off the potential risk of nationalization.
As an independent research organization with ostensibly noble, global goals, OpenAI may have been less equipped to beat back regulatory intervention, a second former OpenAI employee suggested. What we see now, they said, is the company “transitioning from presenting themselves as a nonprofit with very altruistic, pro-humanity aims, to presenting themselves as an economic and military powerhouse that the government needs to support, shelter, and cut red tape on behalf of.”
The second source said they believed the national security rhetoric was indicative of OpenAI “sucking up to the administration,” not a genuinely held commitment by executives.
“In terms of how decisions were actually made, what seemed to be the deciding factor was basically how can OpenAI win the race rather than anything to do with either humanity or national security,” they added. “In today’s political environment, it’s a winning move with the administration to talk about America winning and national security and stuff like that. But you should not confuse that for the actual thing that’s driving decision-making internally.”
The person said that talk of preventing Chinese dominance over artificial intelligence likely reflects business, not political, anxieties. “I think that’s not their goal,” they said. “I think their goal is to maintain their own control over the most powerful stuff.”
But even if its motivations are cynical, company sources told The Intercept that national security considerations still pervaded OpenAI. The first source recalled a member of OpenAI’s corporate security team regularly engaging with the U.S. intelligence community to safeguard the company’s ultra-valuable machine-learning models. The second recalled concern about the extent of the government’s relationship with — and potential control over — OpenAI’s technology. A common fear among AI safety researchers is a future scenario in which artificial intelligence models begin autonomously designing newer versions, ad infinitum, leading human engineers to lose control.
“One reason why the military AI angle could be bad for safety is that you end up getting the same sort of thing with AIs designing successors designing successors, except that it’s happening in a military black project instead of in a somewhat more transparent corporation,” the second source said.
“Occasionally there’d be talk of, like, eventually the government will wake up, and there’ll be a nuclear power plant next to a data center next to a bunker, and we’ll all be moved into the bunker so that we can, like, beat China by managing an intelligence explosion,” they added. At a company that recruits top engineering talent internationally, the prospect of American dominance of a technology they believe could be cataclysmic was at times disquieting. “I remember I also talked to some people who work at OpenAI who weren’t from the U.S. who were feeling kind of sad about that and being like, ‘What’s going to happen to my country after the U.S. gets all the super intelligences?’”
SINCERITY ASIDE, OpenAI has spent the past year training its corporate algorithm on flag-waving, defense lobbying, and a strident anticommunism that smacks more of the John Birch Society than the Whole Earth Catalog.
In his white paper, Lehane, a former press secretary for Vice President Al Gore and special counsel to President Bill Clinton, advocates not for a globalist techno-utopia in which artificial intelligence jointly benefits the world, but for a benevolent jingoism in which freedom and prosperity are underwritten by the guarantee of American dominance. While the document notes fleetingly, in its very last line, the idea of “work toward AI that benefits everyone,” the pitch is not one of true global benefit, but of American prosperity that trickles down to its allies.
The company proposes strict rules walling off parts of the world, namely China, from AI’s benefits, on the grounds that they are simply too dangerous to be trusted. OpenAI explicitly advocates for conceiving of the AI market not as an international one, but as “the entire world less the PRC” — the People’s Republic of China — “and its few allies,” a line that quietly excludes more than 1 billion people from the humanity the company says it wishes to benefit, even as it counts millions living under U.S.-allied authoritarian rule among the market’s beneficiaries.
In pursuit of “democratic values,” OpenAI proposes dividing the entire planet into three tiers. At the top: “Countries that commit to democratic AI principles by deploying AI systems in ways that promote more freedoms for their citizens could be considered Tier I countries.” Given the earlier mention of building “AI in line with democratic principles set out by the US government,” this group’s membership is clear: the United States, and its friends.
Beneath them are Tier 2 countries, a geopolitical purgatory defined only as those that have failed to sufficiently enforce American export control policies and protect American intellectual property from Tier 3: Communist China. “CCP-led China, along with a small cohort of countries aligned with the CCP, would represent its own category that is prohibited from accessing democratic AI systems,” the paper explains. To keep these barriers intact — while allowing for the chance that Tier 2 countries might someday graduate to the top — OpenAI suggests coordinating “global bans on CCP-aligned AI” and “prohibiting relationships” between other countries and China’s military or intelligence services.
One of the former OpenAI employees said concern about China at times circulated throughout the company. “Definitely concerns about espionage came up,” this source said, “including ‘Are particular people who work at the company spies or agents?’” At one point, they said, a colleague worried about a specific co-worker they’d learned was the child of a Chinese government official. The source recalled “some people being very upset about the implication” that the company had been infiltrated by foreigners, while others wanted an actual answer: “‘Is anyone who works at the company a spy or foreign agent?’”
THE COMPANY’S PUBLIC adoration of Western democracy is not without wrinkles. In early May, OpenAI announced an initiative to build data centers and customized ChatGPT bots with foreign governments, as part of its $500 billion “Project Stargate” AI infrastructure construction blitz.
“This is a moment when we need to act to support countries around the world that would prefer to build on democratic AI rails, and provide a clear alternative to authoritarian versions of AI that would deploy it to consolidate power,” the announcement read.
Unmentioned in that celebration of AI democracy is the fact that Project Stargate’s financial backers include the government of Abu Dhabi, an absolute monarchy. On May 23, Altman tweeted that it was “great to work with the UAE” on Stargate, describing co-investor and Emirati national security adviser Tahnoun bin Zayed Al Nahyan as a “great supporter of openai, a true believer in AGI, and a dear personal friend.” In 2019, Reuters revealed how a team of mercenary hackers working for Emirati intelligence under Tahnoun had illegally broken into the devices of targets around the world, including American citizens.
Asked how a close partnership with the Emirati autocracy fit into its broader mission of spreading democratic values, OpenAI pointed to a recent op-ed in The Hill in which Lehane discusses the partnership.
“We’re working closely with American officials to ensure our international partnerships meet the highest standards of security and compliance,” Lehane writes, adding, “Authoritarian regimes would be excluded.”
CONTACT THE AUTHOR:
Sam Biddle — sam.biddle@theintercept.com; sambiddle.99 on Signal; @sambiddle.com on Bluesky; @samfbiddle on X

