OpenAI launches Codex Security: An AI agent to combat vulnerabilities

OpenAI has launched Codex Security, an AI-powered security agent designed to identify, validate, and propose solutions to vulnerabilities in systems. This new service, which is available in preview mode for ChatGPT Pro, Enterprise, Business, and Edu users, will offer free access for one month to its innovative features. Reduction of false positives Codex Security is the evolution of Aardvark, presented in private beta in October 2025, with the aim of helping developers and security teams detect and fix vulnerabilities at scale. During its beta phase, Codex Security has scanned […]

OpenAI has launched Codex Security, an AI-powered security agent designed to identify, validate, and propose solutions to vulnerabilities in systems. This new service, which is available in preview mode for ChatGPT Pro, Enterprise, Business, and Edu users, will offer free access for one month to its innovative features.

Reduction of false positives

Codex Security is the evolution of Aardvark, presented in private beta in October 2025, with the aim of helping developers and security teams detect and fix vulnerabilities at scale. During its beta phase, Codex Security has scanned over 1.2 million commits in various open-source projects, identifying 792 critical findings and 10,561 high-severity findings. Among the detected vulnerabilities are issues in popular projects such as OpenSSH, GnuTLS, and PHP.

The company emphasizes that Codex Security combines the reasoning capabilities of its advanced models with automated validation, which minimizes the risk of false positives and delivers practical solutions. An analysis over time in specific repositories has shown an improvement in service accuracy and a 50% reduction in false positive rates.

The operation of Codex Security is based on three stages: first, it analyzes the structure of the repository to create an editable threat model that documents the system’s exposures. Then, it identifies vulnerabilities based on a real context and validates them in an isolated environment. Finally, it proposes solutions that best align with the system’s behavior, facilitating their review and deployment.

The launch of Codex Security comes at a time when competition in the software security field is increasing, especially after the recent launch of Claude Code Security by Anthropic, another agent that helps scan for vulnerabilities in software codebases.

Google and OpenAI adopt AI-based advertising

This month, Google and OpenAI have taken a significant step by launching new AI-driven advertising offerings, after years of resistance. This decision responds to the growing pressure to monetize their platforms, driven by rising operational costs and the need to remain competitive in a rapidly evolving market. As AI platforms are forced to adopt advertising business models, it is expected that other players in the sector will follow the same path. The monetization of AI Despite the fear that advertising may drive users away, […]

This month, Google and OpenAI have taken a significant step by launching new AI-driven advertising offerings, after years of resistance. This decision responds to the growing pressure to monetize their platforms, driven by rising operational costs and the need to remain competitive in a rapidly evolving market. As AI platforms are forced to adopt advertising business models, it is expected that other players in the sector will follow suit.

Monetization of AI

Despite the fear that advertising may drive users away, Google and OpenAI have chosen to integrate paid advertising into their strategies. It is estimated that in the next four years, approximately 40 million people in the United States will become users of generative AI, which will directly influence the advertising strategies of companies. Thus, despite previous doubts, monetization becomes an urgent necessity for these platforms.

Experts indicate that, although brands and retailers rush to integrate AI-based advertising into their paid search strategies, it will be the organic optimization teams that are likely to see the most immediate returns. This shift in focus on monetization comes with projections that major artificial intelligence companies will increase their capital expenditures to over $375 billion by 2025, emphasizing that monetization is no longer an option, but a necessity.

The reasons behind the slow pace of selling advertising are due to concerns about how it could affect the growing user base. However, economic pressure may push more AI platforms to reconsider their monetization strategy in the near future.

OpenAI has just installed a security update for ChatGPT that was more than necessary

OpenAI has issued a warning about the growing threat of prompt injection attacks, a technique that hides malicious instructions in ordinary online content, becoming a considerable risk for artificial intelligence agents operating in web browsers. The company has implemented a security update for its ChatGPT Atlas tool after discovering a new class of attacks during automated internal security simulations. Not so much intelligence, but very artificial The updated version of Atlas includes a model specifically trained to withstand adversarial attacks, as well as reinforced safeguards. According to OpenAI, the browser agent mode […]

OpenAI has issued a warning about the growing threat of prompt injection attacks, a technique that hides malicious instructions in ordinary online content, becoming a significant risk for artificial intelligence agents operating in web browsers. The company has implemented a security update for its ChatGPT Atlas tool after discovering a new class of attacks during automated internal security simulations.

Not so much intelligence, but very artificial

The updated version of Atlas includes a model specifically trained to withstand adversarial attacks, as well as enhanced safeguards. According to OpenAI, the browser agent mode allows the software to interact on the web in a manner similar to a human user, accessing emails, documents, and web services, which increases its value as a target for adversarial attacks compared to a traditional chatbot that only answers questions.

The company has developed an automated attacker, using language models that identify prompt injection strategies, allowing for the execution of complex harmful workflows. This attacker can simulate encounters with malicious content, generating a complete trail of reasoning and actions of the victim agent, which helps refine attacks through multiple rounds of testing.

A hypothetical example illustrates the risk: a malicious email instructing the agent to send a resignation letter to the user’s boss. If the agent encounters this email during a legitimate request, they could misinterpret the instructions, acting to the detriment of the user. This change in the interaction dynamic highlights the need to address new forms of online risk.

It is not just OpenAI that is facing this problem; the UK’s National Cyber Security Centre has warned that these attacks may not be completely eliminated, urging organizations to minimize risks and limit impacts. With the introduction of a “Preparation” team, OpenAI aims to identify and address these emerging risks in the field of artificial intelligence and cybersecurity.

ChatGPT achieves a 76% increase in its performance

OpenAI has announced a notable increase in the performance of its artificial intelligence model, GPT-5.1-Codex-Max, which has reached 76% in capability assessments. This advancement suggests a significant leap in the model’s ability to understand and generate natural language, which could have important implications for various applications, from task automation to content creation. Increase in use in work environments According to data provided by the company, this improvement is evident in a series of tests that evaluate the accuracy, relevance, and creativity of the responses generated by the model. The increase […]

OpenAI has announced a notable increase in the performance of its artificial intelligence model, GPT-5.1-Codex-Max, which has reached 76% in capability assessments. This advancement suggests a significant leap in the model’s ability to understand and generate natural language, which could have important implications for various applications, from task automation to content creation.

Increase in usage in work environments

According to the data provided by the company, this improvement is evidenced in a series of tests that evaluate the accuracy, relevance, and creativity of the responses generated by the model. The increase in performance highlights OpenAI’s commitment to the continuous development of technologies that optimize human-machine interaction, facilitating richer and more effective experiences for users.

However, along with the enthusiasm for these improvements, OpenAI has also issued warnings about the cybersecurity risks that could arise from the use of its technology. The company emphasizes the need to be cautious about the potential misuse of artificial intelligence, which could lead to the generation of misinformation or the creation of misleading content, thus posing a challenge for both developers and users of this advanced technology.

These developments coincide with a growing interest in the tech sector for the use of artificial intelligence in the business and creative fields. Lthe possibility that GPT-5.1-Codex-Max could influence the way content is produced and consumed is undeniable, but it also requires a critical analysis of how to ensure that these tools are used ethically and responsibly.