OpenAI Calls Fair Use A Matter of National Security In New Filing
Haunted by specter of DeepSeek and a Red under the AGI bed
Having spent many years in Washington interviewing corporate lobbyists and trade association executives, enduring hundreds of congressional committee hearings, and plowing through thousands of pages of briefs and comments to agency rulemakings, I have developed something of a connoisseur’s palate for the puffery and tortured logic that characterize much of public policy advocacy. But I am not sure I have ever encountered a concoction as overdone as what OpenAI served up last week in comments to the White House Office of Science and Technology Policy on the Trump Administration’s AI Action Plan.
Even by the hyperbolic standards of the genre, the medley of techno-utopianism, special pleading, Red-baiting, and flop sweat the ChatGPT-maker packed into 15 pages stands out.
It’s a truly strange document. And, perhaps unintentionally, a revealing one as well. For all its grandiose pronouncements, like “as we approach [artificial general intelligence], innovation is poised to scale human ingenuity itself,” the filing exposes OpenAI as haunted by the specter of DeepSeek and what the release of the Chinese-developed app could mean for the future development of AI systems.
The missive dedicates two of its 15 pages to outlining the alleged advantages of an authoritarian system like that of the People’s Republic of China over nominally democratic systems such as the U.S. for rapidly advancing strategic technologies. And it calls on the U.S. government to emulate the Chinese to ensure U.S. supremacy in AI.
Today, hundreds of billions of dollars in global funds are waiting to be invested in AI infrastructure. If the US doesn't move fast to channel these resources into projects that support democratic AI ecosystems around the world, the funds will flow to projects backed and shaped by the [Chinese Communist Party].
Yowza.
But the real threat DeepSeek poses, at least to OpenAI, is not the diversion of capital, or even Chinese communism, but the demonstrated ability to create a highly capable AI system using far less capital and far fewer resources than OpenAI uses.
OpenAI and its peers have collectively sunk hundreds of billions of dollars of their investors’ money into accumulating ever-more computing capacity to process ever-more data to train ever-larger models. In contrast, DeepSeek’s developers claim to have trained their model for a mere $5 million, and in relatively short order, while achieving near-GPT-4-level capability.
That $5 million figure is almost certainly an understatement of the full cost, perhaps by a few orders of magnitude. It likely represents only the final stage of development, after greater investment had gone into R&D, data preparation, and other steps. But there is little doubt that DeepSeek achieved what it did using capital and resources far more efficiently than OpenAI or any of the other U.S.-based “hyperscalers.”
Yet OpenAI’s only response to the challenge posed by what DeepSeek represents is to propose sinking hundreds of billions more into building still more data centers to process still more data.
We support the solutions already proposed by this Administration to ensure that sufficient capital flows to building AI infrastructure in the US:
Investment vehicles like a Sovereign Wealth Fund.
Government offtake and guarantees that both provide the government with the compute it needs and signal to markets that the demand will be there for American-developed AI.
Tax credits, loans, and other vehicles the US government can direct to provide credit enhancement.
There are increasing signs, moreover, that scaling has reached its practical limit and that the incremental improvements it can achieve do not and cannot justify the cost, as Gary Marcus and many others who know the technology better than I do have been arguing for some time.
OpenAI spent more than 18 months developing what was supposed to be GPT-5, at 10X the cost of GPT-4. Yet the results fell so far short of what the company promised that it was forced to rebrand the release as GPT-4.5 to save face.
Worse, neither OpenAI nor any of its peers has figured out a viable business model for generative AI that would yield a return on all that investment, and investors are getting nervous. No wonder OpenAI is calling for the federal government to provide the funds.
Some of OpenAI’s other proposals are just bizarre, like asking the administration simply to declare the use of copyrighted works to train AI models presumptively “fair use.”
American copyright law, including the longstanding fair use doctrine, protects the transformative uses of existing works, ensuring that innovators have a balanced and predictable framework for experimentation and entrepreneurship. This approach has underpinned American success through earlier phases of technological progress and is even more critical to continued American leadership on AI in the wake of recent events in the PRC… America has so many AI startups, attracts so much investment, and has made so many research breakthroughs largely because the fair use doctrine promotes AI development. In other markets, rigid copyright rules are repressing innovation and investment… Applying the fair use doctrine to AI is not only a matter of American competitiveness — it’s a matter of national security.
Well, then. Seems like an easy call.
Except that neither the president nor any executive-branch agency has any role in making copyright policy. The Constitution is unambiguous, as the Supreme Court has repeatedly emphasized, that copyright policy is the sole province of Congress, not the executive branch. “Fair use,” moreover, is a post hoc, fact-based analysis conducted by courts on a case-by-case basis. There is no real precedent for preemptively declaring an entire use case for copyrighted works to be fair use, apart from the handful of illustrative purposes enumerated in § 107 of the Copyright Act, and certainly not for the White House to do so.
OpenAI would also like the federal government to preempt all state-based AI regulation, to manipulate the international copyright system to benefit U.S. AI companies, and to pass a “National Transmission Highway Act, as ambitious as the 1956 National Interstate and Defense Highways Act,” to ensure OpenAI has the necessary energy infrastructure to support its hyperscaling.
To be fair, OpenAI is not alone in asking for the moon. Google, in its own, more sober filing, makes many of the same asks, including for “Balanced copyright rules, such as fair use and text-and-data mining exceptions.”
But Google, unlike OpenAI, has other lines of business and sources of revenue to offset its investment in AI. OpenAI has but one basket, and all its eggs are in it. And the cracks are starting to show.