No news has shaken the world of technology more this week. OpenAI has confirmed that Sora will stop being supported in the coming months, and ChatGPT will no longer be able to create video content. Although the company has assured users that all content created with the tool can be saved before the shutdown, the decision appears final: OpenAI is not satisfied with the results.
The reasons seem evident. Disney has walked away from what looked like a secure one-billion-dollar contract to use OpenAI's AI video generation technology, making the only source of income for that technology vanish overnight and leading OpenAI to kill the feature. But does this mark the beginning of the end of the AI bubble? Or is it simply the problem many people have been pointing out for years: that none of these companies have a business plan for this technology?
The elephant in the room: a technology in search of a problem
Sora was launched on December 9, 2024, on a simple promise: the ability to create your own videos, in any imaginable style, from the comfort of your home. Fast, cheap, and efficient, it required no training and no crew; you just needed the right prompts, and to refine them until you got exactly what you wanted, whenever you wanted. Within certain obvious limitations, that is: the videos could be at most one minute long and, in general, they hallucinated too much to stay coherent or consistent over time.
On top of this came another obvious problem: most users wanted Sora to create content protected either by copyright or by image rights. The latter had no easy fix and was bound to be legislated sooner or later, since using other people's likenesses without their permission is illegal. For the former, OpenAI believed it had found the solution: a one-billion-dollar deal with Disney. Which, as already noted, ultimately never materialized.
The problem with Sora is that it was a technology offering solutions to a problem that does not exist. Its only uses were either immoral and potentially illegal (using someone's likeness without permission is wrong, and using it to impersonate them, put words in their mouth, create pornography, or spread hate speech is already illegal in much of the world), or legally gray (you cannot use characters you do not hold the rights to, and the party that gets sued for allowing it is the platform), or simply pointless (if you do hold the image rights or the copyright, what stops you from filming those videos yourself?).
Sora was an interesting curiosity, but it did not solve a real problem. It could make video production cheaper, yet even then the issues already mentioned remained: it had strict time limits, hallucinated heavily, and could not maintain consistency between scenes. That made it essentially impossible to create anything longer than an advertisement or a short clip, and even those fell far below any professional quality standard.
Where is the AI business?
The problem with Sora is that it has no business plan and no reason to exist. There is no compelling reason for anyone to pay for the service, as demonstrated by the fact that only one company ever showed interest, and even that company walked away from negotiations.
Why? Because it doesn't solve anything. The technology that takes root in society and endures is the technology that solves some kind of problem, whether or not we were aware of that problem beforehand. Sometimes the problem is as simple as "things could be done cheaper and faster," but that too is a solution to a problem: production costs. Sora, however, provided no solution at all, because while it offered cheap and fast video creation, it always operated in a legal gray area that is hardly monetizable.
This makes us wonder: is this the future of the rest of AI technologies? While it is tempting to say yes, caution is warranted. Although it is increasingly evident that AI is a bubble, it would also be imprudent to claim the technology has no real use. What is true is that its uses are far more limited, specific, and concrete than what is being sold. AI models have proven tremendously useful in chemistry, weather prediction, and cancer detection, showing where their strengths lie: cross-referencing large volumes of information and uncovering hidden patterns. Similarly, LLMs' notable aptitude for programming has made them the main current tool for vibe coding among professionals and amateurs alike.
Within general consumer AI, the only company that seems to have a workable ongoing business is Anthropic. Claude, although marketed as a general-purpose AI, is increasingly focused on specific workplace-efficiency uses: drafting automatic replies, summarizing meetings, texts, and videos, and other tasks derived from bureaucratic office work. Many companies are adopting it despite a premium subscription that is relatively expensive compared to its rivals, and they do so because it feels useful and streamlines certain processes, even if it is questionable whether it truly does, given the hallucinations, biases, and inherent limitations of these systems.
If the death of Sora teaches us anything, it is that AI is a bubble about to burst, and that while the burst won't take down all of AI or all LLMs, it will ensure that only the technologies offering something of real interest beyond the passing trend survive. A technology that creates videos out of nothing may be interesting, but it cannot be monetized and, above all, it contributes nothing to society as a whole. And as long as OpenAI remains obsessed with creating something spectacular for the public, monetizable or not, it will be closer to extinction than to a business that endures.