Fresh off scaring the bejeebers out of many in Hollywood with demos of its text-to-video generator Sora, OpenAI now wants in. According to Bloomberg, top executives at the generative AI developer will hold a round of meetings this week with a number of film studios and Hollywood honchos to discuss what Sora can do for them.
We’ve discussed here before why, in Hollywood, everything that can be made with AI will be. So it is no great surprise that studio folks would take the meetings.
But appearing to get cozy with Sora right now carries significant risk for the studios. Generative AI was recently at the center of extensive labor unrest in Hollywood that cost the studios the better part of a year's worth of production. As a result of that unrest, they are also now bound by collective bargaining agreements with writers and actors that circumscribe what they can do unilaterally with tools like Sora. They also face the possibility of further AI-related disruption when the current contract with the Hollywood craft unions expires at the end of July.
Then there’s the tricky question of where Sora came from. OpenAI faces a passel of copyright infringement lawsuits over its use of copyrighted works in training its models, including a multi-billion dollar claim from the New York Times. Insofar as Sora may have been trained on copyrighted videos, it could be a bad look for the studios, and the media conglomerates that own them, to be jumping into bed too quickly with an accused infringer.
In a recent interview with the Wall Street Journal, OpenAI CTO Mira Murati claimed, rather implausibly, not to know — or, less charitably, declined to reveal — whether Sora had been trained on videos scraped from YouTube and other social media platforms.
That likely had something to do with fear of attracting additional litigation. But we may find out anyway. Under the European Union’s AI Act, now poised to go into force, OpenAI could be required to provide a “sufficiently detailed summary” of the datasets used in training its models, which could prove embarrassing to any studio making extensive use of Sora.
In addition, the U.S. Copyright Office is preparing to release a series of reports this year addressing various issues around AI and copyright, along with recommendations to Congress regarding possible changes to copyright law. Among the topics to be addressed is whether works created using AI tools are fully eligible for copyright protection. If the Office limits eligibility in a way that renders AI-generated elements of a movie unprotectable, it could make over-reliance on AI tools in production too risky.
All those risks are for the future, however. For now, the studios are desperate to cut costs. And that immediate need is likely to override any long-term risks.
Watch List
Digital Markets Act Starts to Bite
Speaking of the EU, the Digital Markets Act only went into effect on March 7th, but EU regulators already have their game faces on. On Monday, the competition authority in Brussels put Meta, Alphabet (Google), and Apple on notice that they are being investigated for possible non-compliance with the law’s stringent requirements. Google and Apple are being investigated for favoring their own apps in their app stores over those of rivals. Meta is being questioned about its ad-free subscription plan and its use of users’ data for advertising sales. Google is also being probed over how it presents search results. “Certain compliance measures fail to achieve their objectives and fall short of expectations,” European Commission EVP Margrethe Vestager said at a news conference. The EU investigations come as Google and Apple also face charges in the U.S. over alleged anticompetitive behavior.
Stability Loses Its Balance
First it was OpenAI’s turn to be rocked by internal turmoil over the purpose and direction of the company. Now the wheels seem to be coming off Stability AI, at least temporarily. The company has experienced an exodus of senior executives, most recently including three of its founding scientists. Then on Friday, CEO Emad Mostaque announced his own resignation. In an echo of OpenAI’s spasm of existential angst, Mostaque’s departure appears to stem from a disagreement over Stability’s growing commercial focus at the expense of its founding ethos of developing decentralized AI models and open-sourcing its technology. In a series of posts on X/Twitter following his resignation, Mostaque declared, “You can’t beat centralized AI with more centralized AI.”