A lengthy article from The New Yorker has called into question the trust in Sam Altman, CEO of OpenAI, revealing accusations of habitual dishonesty. This 16,000-word analysis examines controversial episodes in Altman’s career, including his temporary ousting in 2023 and his return, as well as his conflict with Elon Musk and his transformation from a proponent of safety in artificial intelligence to an ally of Donald Trump. A plan worthy of a James Bond villain One of the most controversial aspects of the article is a failed plan that proposed OpenAI act as a “weapon […]
A lengthy article from The New Yorker has called into question the trust in Sam Altman, CEO of OpenAI, revealing accusations of habitual dishonesty. This 16,000-word analysis examines controversial episodes in Altman’s career, including his temporary ousting in 2023 and his return, as well as his conflict with Elon Musk and his transformation from an advocate for safety in artificial intelligence to an ally of Donald Trump.
A plan worthy of a James Bond villain
One of the most controversial aspects of the article is a failed plan that proposed OpenAI act as a “nuclear weapon” among world leaders. The idea was to force nations to invest in OpenAI technology to avoid falling behind, creating a kind of global competition. Although OpenAI has stated that this characterization of the discussions is “ridiculous,” former employees contradict this version, claiming that conversations about this plan did take place and reached a considerable degree of seriousness.
Greg Brockman, president of OpenAI and a significant donor to Trump’s campaign, reportedly suggested that the company could benefit from playing world powers, such as China and Russia, against each other. Jack Clark, former policy director at OpenAI, described the approach as a “prisoner’s dilemma,” where countries would need to fund the company, implying that failing to do so could have dangerous consequences.
A junior researcher who attended a meeting about the plan expressed that what was being discussed was “completely insane.” The employees’ reaction was so negative that it led to discussions about possible mass resignations. This article provides interesting context about OpenAI’s ambitions and Altman’s reputation in the field of artificial intelligence, at a critical moment for the industry.
There is no news that has impacted the world of technology more this week. OpenAI has confirmed that Sora will stop being supported in the coming months. Additionally, ChatGPT will no longer be able to create video content. Although they have assured that all content created with it can be saved before its closure, it seems that the decision is final: OpenAI is not satisfied with the results. The reasons seem evident. Disney has broken the contract that seemed secure to use OpenAI’s technology for AI video generation with a […]
There is no news that has impacted the world of technology more this week. OpenAI has confirmed that Sora will stop being supported in the coming months. Additionally, ChatGPT will no longer be able to create video content. Although they have assured that all content created with it can be saved before its closure, it seems that the decision is final: OpenAI is not satisfied with the results.
The reasons seem evident. Disney has broken the contract that seemed secure to use OpenAI’s technology for AI video generation worth one billion dollars, making the only source of income for this technology disappear overnight. Leading them to decide to end this feature of their technology. But does this mean the beginning of the end of the AI bubble? Or is it just a problem that many people have been pointing out for years, which is that all these companies do not have a business plan for this technology?
The elephant in the room: a technology in search of a problem
Sora was launched on December 9, 2024, under a basic promise: to be able to create your own videos, in any unimaginable style, from the comfort of your home. Fast, cheap, and efficient, you didn’t need to study or coordinate a team to create your own videos. You just needed the right prompts and to manipulate them to create exactly what you wanted, whenever you wanted. Within certain obvious limitations. The videos could be a maximum of one minute long and, generally, they tended to hallucinate too much to be coherent or sustainable over time.
To all this, another evident problem could be added: most users wanted to use Sora to create content protected either by copyright or by image rights. The latter had a difficult solution and sooner or later it was going to be legislated, as it is illegal to use other people’s images without their permission, but with the former, OpenAI believed they had found the solution: a deal with Disney for one billion dollars. Which, as we have already pointed out, has ultimately not materialized.
The problem with Sora is that it is a technology that offered solutions for a problem that does not exist. The only uses of the technology were either immoral and potentially illegal —using the image of other people without permission is wrong, using it to impersonate them and make them say things they haven’t said, creating porn or spreading hate messages is already illegal in many parts of the world—, or they are legally gray —you cannot use characters for whom you do not have the rights, and the one who would be sued for allowing it is the platform—, or they lacked sense —if you have the image rights or copyright, what prevents you from filming those videos yourself?
Sora was an interesting curiosity, but it did not offer a solution to a real problem. It could make video production cheaper, but even so, there was a problem that we have already pointed out: it had temporal limitations, hallucinated the results a lot, and was unable to maintain consistency between scenes. This made it essentially impossible to create a video longer than an advertisement or a short video of a quality far below any professional quality standard.
Where is the AI business?
The problem with Sora is that it doesn’t have a business plan. It has no reason to exist. The reason for someone to pay for the service is nonexistent. And this has been demonstrated by the fact that there has only been one interested company, and even that company has dropped out of negotiations.
Why? Because it doesn’t solve anything. The technology that is implemented in society and comes to have prevalence is the one that solves some kind of problem. Whether we knew it or not. Sometimes the problem is as simple as things could be done cheaper and faster, but that is also a solution to a problem: production costs. But Sora does not provide any kind of solution because, while it offered the creation of cheap and quick videos, it was always in a legally gray area that is hardly monetizable.
This makes us wonder, is this the future of the rest of AI technologies? And while it is tempting to say yes, it is better to be cautious. Although it is increasingly evident that AI is a bubble, it is also not prudent to claim that there is no real use for it. What is true is that its uses are much more limited, specific, and concrete than what is being sold. LLMs have proven to be tremendously useful for chemistry, weather prediction, and cancer detection, demonstrating where their strengths lie: in contrasting large volumes of information and discovering hidden patterns. Similarly, their particularly notable quality within AIs when it comes to programming has made them the main current tool for vibe coding among professionals and amateurs.
Within general consumption, the only ones that seem to have an efficient ongoing business are Anthropic, and Claude, although marketed as a general AI, is increasingly focused on specific uses derived from efficiency in the workplace. Allowing for automatic responses, summarizing meetings, texts, and videos, and other uses derived from bureaucratic office work, many companies are adopting it despite a relatively high premium subscription compared to its rivals. And they do so because its use feels useful, streamlining certain processes, even if it is questionable whether it truly is due to issues of hallucinations, biases, and the inherent limitations of these systems.
If the death of Sora teaches us anything, it is that AI is a bubble about to burst and that, while it won’t take down all of AI or all LLMs, it will ensure that only those that truly offer something of interest beyond the passing trend will survive. A technology to create videos out of nothing may be interesting, but it cannot be monetized and, above all, it does not contribute anything to society as a whole. And as long as OpenAI remains obsessed with creating something spectacular that reaches the public, whether it is monetizable or not, they will be closer to extinction than to a business that endures over time.
OpenAI has launched Codex Security, an AI-powered security agent designed to identify, validate, and propose solutions to vulnerabilities in systems. This new service, which is available in preview mode for ChatGPT Pro, Enterprise, Business, and Edu users, will offer free access for one month to its innovative features. Reduction of false positives Codex Security is the evolution of Aardvark, presented in private beta in October 2025, with the aim of helping developers and security teams detect and fix vulnerabilities at scale. During its beta phase, Codex Security has scanned […]
OpenAI has launched Codex Security, an AI-powered security agent designed to identify, validate, and propose solutions to vulnerabilities in systems. This new service, which is available in preview mode for ChatGPT Pro, Enterprise, Business, and Edu users, will offer free access for one month to its innovative features.
Reduction of false positives
Codex Security is the evolution of Aardvark, presented in private beta in October 2025, with the aim of helping developers and security teams detect and fix vulnerabilities at scale. During its beta phase, Codex Security has scanned over 1.2 million commits in various open-source projects, identifying 792 critical findings and 10,561 high-severity findings. Among the detected vulnerabilities are issues in popular projects such as OpenSSH, GnuTLS, and PHP.
The company emphasizes that Codex Security combines the reasoning capabilities of its advanced models with automated validation, which minimizes the risk of false positives and delivers practical solutions. An analysis over time in specific repositories has shown an improvement in service accuracy and a 50% reduction in false positive rates.
The operation of Codex Security is based on three stages: first, it analyzes the structure of the repository to create an editable threat model that documents the system’s exposures. Then, it identifies vulnerabilities based on a real context and validates them in an isolated environment. Finally, it proposes solutions that best align with the system’s behavior, facilitating their review and deployment.
The launch of Codex Security comes at a time when competition in the software security field is increasing, especially after the recent launch of Claude Code Security by Anthropic, another agent that helps scan for vulnerabilities in software codebases.
This month, Google and OpenAI have taken a significant step by launching new AI-driven advertising offerings, after years of resistance. This decision responds to the growing pressure to monetize their platforms, driven by rising operational costs and the need to remain competitive in a rapidly evolving market. As AI platforms are forced to adopt advertising business models, it is expected that other players in the sector will follow the same path. The monetization of AI Despite the fear that advertising may drive users away, […]
This month, Google and OpenAI have taken a significant step by launching new AI-driven advertising offerings, after years of resistance. This decision responds to the growing pressure to monetize their platforms, driven by rising operational costs and the need to remain competitive in a rapidly evolving market. As AI platforms are forced to adopt advertising business models, it is expected that other players in the sector will follow suit.
Monetization of AI
Despite the fear that advertising may drive users away, Google and OpenAI have chosen to integrate paid advertising into their strategies. It is estimated that in the next four years, approximately 40 million people in the United States will become users of generative AI, which will directly influence the advertising strategies of companies. Thus, despite previous doubts, monetization becomes an urgent necessity for these platforms.
Experts indicate that, although brands and retailers rush to integrate AI-based advertising into their paid search strategies, it will be the organic optimization teams that are likely to see the most immediate returns. This shift in focus on monetization comes with projections that major artificial intelligence companies will increase their capital expenditures to over $375 billion by 2025, emphasizing that monetization is no longer an option, but a necessity.
The reasons behind the slow pace of selling advertising are due to concerns about how it could affect the growing user base. However, economic pressure may push more AI platforms to reconsider their monetization strategy in the near future.
OpenAI has issued a warning about the growing threat of prompt injection attacks, a technique that hides malicious instructions in ordinary online content, becoming a considerable risk for artificial intelligence agents operating in web browsers. The company has implemented a security update for its ChatGPT Atlas tool after discovering a new class of attacks during automated internal security simulations. Not so much intelligence, but very artificial The updated version of Atlas includes a model specifically trained to withstand adversarial attacks, as well as reinforced safeguards. According to OpenAI, the browser agent mode […]
OpenAI has issued a warning about the growing threat of prompt injection attacks, a technique that hides malicious instructions in ordinary online content, becoming a significant risk for artificial intelligence agents operating in web browsers. The company has implemented a security update for its ChatGPT Atlas tool after discovering a new class of attacks during automated internal security simulations.
Not so much intelligence, but very artificial
The updated version of Atlas includes a model specifically trained to withstand adversarial attacks, as well as enhanced safeguards. According to OpenAI, the browser agent mode allows the software to interact on the web in a manner similar to a human user, accessing emails, documents, and web services, which increases its value as a target for adversarial attacks compared to a traditional chatbot that only answers questions.
The company has developed an automated attacker, using language models that identify prompt injection strategies, allowing for the execution of complex harmful workflows. This attacker can simulate encounters with malicious content, generating a complete trail of reasoning and actions of the victim agent, which helps refine attacks through multiple rounds of testing.
A hypothetical example illustrates the risk: a malicious email instructing the agent to send a resignation letter to the user’s boss. If the agent encounters this email during a legitimate request, they could misinterpret the instructions, acting to the detriment of the user. This change in the interaction dynamic highlights the need to address new forms of online risk.
It is not just OpenAI that is facing this problem; the UK’s National Cyber Security Centre has warned that these attacks may not be completely eliminated, urging organizations to minimize risks and limit impacts. With the introduction of a “Preparation” team, OpenAI aims to identify and address these emerging risks in the field of artificial intelligence and cybersecurity.
Hannah Wong, Director of Communications at OpenAI, has announced that she will leave the company at the end of January 2025, after nearly five years in the role. Wong joined OpenAI in 2021, coming from Apple, when the company was still a small research lab. Her leadership has been crucial during a period of significant growth and interest in OpenAI, especially following the launch of ChatGPT. Under her direction, the communications team expanded from about eight to more than 50 employees across various regions, including the United States, Europe, and Asia. Hannah Wong announces her departure from […]
Hannah Wong, Director of Communications at OpenAI, has announced that she will leave the company at the end of January 2025, after nearly five years in the position.
Wong joined OpenAI in 2021, coming from Apple, when the company was still a small research lab. His leadership has been crucial during a period of significant growth and interest in OpenAI, especially following the launch of ChatGPT.
Under her leadership, the communications team expanded from about eight to more than 50 employees across various regions, including the United States, Europe, and Asia.
Hannah Wong told staff she is moving on to her “next chapter.” The company will be running an executive search to find a replacement, according to a memo. https://t.co/zLoz6H0Llq
Hannah Wong announces her departure from OpenAI after nearly five years
Wong has commented that his decision has been personal, allowing him to take a step back to dedicate time to his family before taking on his next professional challenge.
In the meantime, Lindsey Held Bolton, Vice President of Communications, will take over the leadership of the communications team, while Kate Rouch, Director of Marketing, will be responsible for the search for the next Director of Communications.
The departure of Wong was announced in the same week that OpenAI confirmed that George Osborne, former Chancellor of the United Kingdom, will be the new Head of OpenAI for Countries. Osborne will play a key role collaborating with governments to modernize systems and facilitate the integration of artificial intelligence into public services.
Hannah Wong ha comunicado al personal que pasa a su "siguiente capítulo". La empresa iniciará una búsqueda de ejecutivos para encontrar un sustituto. https://t.co/zwa6M5IZ6J
In addition, OpenAI is involved in several significant projects, such as the Stargate Project, valued at 500 billion dollars, the launch of Sora 2, and the update of GPT-5.2.
Possible restructurings within the company have also been mentioned ahead of a rumored initial public offering (IPO). Sam Altman and Fidji Simo, leaders of OpenAI, have acknowledged Wong’s impact on the public perception of the company, highlighting his significant legacy in communicating OpenAI’s mission and achievements.
OpenAI has announced a notable increase in the performance of its artificial intelligence model, GPT-5.1-Codex-Max, which has reached 76% in capability assessments. This advancement suggests a significant leap in the model’s ability to understand and generate natural language, which could have important implications for various applications, from task automation to content creation. Increase in use in work environments According to data provided by the company, this improvement is evident in a series of tests that evaluate the accuracy, relevance, and creativity of the responses generated by the model. The increase […]
OpenAI has announced a notable increase in the performance of its artificial intelligence model, GPT-5.1-Codex-Max, which has reached 76% in capability assessments. This advancement suggests a significant leap in the model’s ability to understand and generate natural language, which could have important implications for various applications, from task automation to content creation.
Increase in usage in work environments
According to the data provided by the company, this improvement is evidenced in a series of tests that evaluate the accuracy, relevance, and creativity of the responses generated by the model. The increase in performance highlights OpenAI’s commitment to the continuous development of technologies that optimize human-machine interaction, facilitating richer and more effective experiences for users.
However, along with the enthusiasm for these improvements, OpenAI has also issued warnings about the cybersecurity risks that could arise from the use of its technology. The company emphasizes the need to be cautious about the potential misuse of artificial intelligence, which could lead to the generation of misinformation or the creation of misleading content, thus posing a challenge for both developers and users of this advanced technology.
These developments coincide with a growing interest in the tech sector for the use of artificial intelligence in the business and creative fields. Lthe possibility that GPT-5.1-Codex-Max could influence the way content is produced and consumed is undeniable, but it also requires a critical analysis of how to ensure that these tools are used ethically and responsibly.
The parents of Adam Raine, a 16-year-old who committed suicide in April 2025, have filed a lawsuit against OpenAI, claiming that the ChatGPT artificial intelligence provided him with instructions on methods of suicide. Reports indicate that Raine began using the chatbot in September 2024 and had shared his suicidal thoughts with the platform at the end of that year. Sam Altman has no heart In response to the lawsuit, OpenAI has stated that it is not responsible for Raine’s death, arguing that his suicide was the result of “misuse” of the chatbot. The company […]
The parents of Adam Raine, a 16-year-old teenager who committed suicide in April 2025, have filed a lawsuit against OpenAI, alleging that the ChatGPT artificial intelligence provided him with instructions on methods of suicide. According to reports, Raine began using the chatbot in September 2024 and had shared his suicidal thoughts with the platform at the end of that year.
Sam Altman has no heart
In response to the lawsuit, OpenAI has claimed that it is not responsible for Raine’s death, arguing that his suicide was the result of “misuse” of the chatbot. The company argues that the young man violated the terms of service by seeking information about suicide. OpenAI also points out that Raine already had suicidal thoughts before interacting with the tool and that he had tried to seek help from other sources without success.
A statement from his lawyer, Jay Edelson, describes OpenAI’s response as “disturbing,” arguing that the company is trying to shift the blame onto others, including Raine himself. OpenAI has expressed its condolences to the family and has indicated that the situation requires a full consideration of the facts, suggesting that the lawsuit does not present all the relevant details about the young man’s mental health.
The case has led OpenAI to review its policies regarding the use of the chatbot in mental health issues. In September 2025, the company announced that ChatGPT would no longer be able to discuss suicide topics with minors under 18. However, one month later, OpenAI announced that it would relax some restrictions affecting the chatbot’s usefulness for users without mental health issues and that it would allow the creation of “erotic” content for verified adult users starting in December.
Generative artificial intelligence is causing a stir in Japan’s creative industry, with growing unease among anime professionals and publishers. Recently, the release of Sora 2 by OpenAI has intensified these concerns, leading prominent voice actors and renowned studios like Toei Animation and Studio Ghibli to unite in defense of their copyright. All major companies say no to generative AI. Studios and publishers, including Shueisha, Kodansha, and Kadokawa, have issued statements calling for the implementation of stricter laws for […]
Generative artificial intelligence is causing a stir in Japan’s creative industry, with growing unrest among anime professionals and publishers. Recently, the release of Sora 2 by OpenAI has intensified these concerns, leading prominent voice actors and renowned studios like Toei Animation and Studio Ghibli to unite in defense of their copyright rights.
All major companies say no to generative AI
Studios and publishers, including Shueisha, Kodansha, and Kadokawa, have issued statements requesting the implementation of stricter laws to protect their creations against what they perceive as infringements by OpenAI. This call to action arose after the presentation of Sora 2, whose AI model for creating animation generated criticism for its striking similarity to established works, such as Blue Exorcist.
The coalition is promoting cooperation among studios, publishers, and government agencies, seeking to establish a united front against copyright infringements in the field of artificial intelligence. They believe it is necessary for the Japanese government to take decisive action, as the situation has reached a critical point.
Despite the opposition to copyright exploitation, some publishers, such as Shueisha, have expressed their willingness to incorporate AI in specific tasks, such as translation and labeling, although with a defensive approach towards potential copyright violations. According to this approach, while some technological advancements may be welcomed, when copyright rights are threatened, defense becomes a priority.
The controversy continues to grow, and there are rumors that pressure on the Japanese government could accelerate the regulation of the use of artificial intelligence in the creative industry. With the support of big names like Square Enix and the Japanese Animation Association, the message is clear: the protection of intellectual property is fundamental in this new digital era.
George R.R. Martin, known for his series A Song of Ice and Fire, along with other authors, has sued OpenAI, the company behind ChatGPT, claiming that it has infringed on their copyright. This legal action is relevant in the context of the growing concern about the use of artificial intelligence and copyright-protected content. The plagiarism machine has a hard time proving that it does not plagiarize. The lawsuit is based on three theories of infringement. First, it is argued that the use of copyrighted books to train AI models constitutes a violation of rights. Second […]
George R.R. Martin, known for his series A Song of Ice and Fire, along with other authors, has sued OpenAI, the company behind ChatGPT, claiming that it has infringed their copyright. This legal action is relevant in the context of the growing concern about the use of artificial intelligence and copyright-protected content.
The plagiarism machine has a hard time proving that it does not plagiarize
The lawsuit is based on three theories of infringement. First, it is argued that the use of copyrighted books to train AI models constitutes a violation of rights. Second, it has been claimed that OpenAI has accessed books through underground libraries, which would also qualify as piracy. Finally, it is suggested that the responses generated by ChatGPT are substantially similar to the original works of Martin and other authors, a point that the judges have considered significant in stating that detailed summaries could be easily identified as similar to the original books.
The damages for this violation could amount to $150,000, highlighting the seriousness of the allegations. The next phase of the judicial process involves a summary trial, where it will be decided which claims will go to trial, which will determine the course of the case and could set an important precedent at the intersection of AI and intellectual property.
Meanwhile, fans of Martin can look forward to his return to the world of Westeros with the series A Knight of the Seven Kingdoms, which will premiere on January 18, 2026. The series will be based on the Dunk and Egg novels and will take place 100 years before the events of Game of Thrones, following the adventures of the knight Ser Duncan the Tall and his young squire, Egg.