{"id":356653,"date":"2025-12-31T01:35:00","date_gmt":"2025-12-31T09:35:00","guid":{"rendered":"https:\/\/cms-articles.softonic.io\/es\/?p=405747"},"modified":"2025-12-31T04:01:44","modified_gmt":"2025-12-31T12:01:44","slug":"openai-has-just-installed-a-security-update-for-chatgpt-that-was-more-than-necessary","status":"publish","type":"post","link":"https:\/\/cms-articles.softonic.io\/en\/openai-has-just-installed-a-security-update-for-chatgpt-that-was-more-than-necessary\/","title":{"rendered":"OpenAI has just installed a security update for ChatGPT that was more than necessary"},"content":{"rendered":"\n<p>OpenAI has issued a warning about the growing threat of prompt injection attacks, a technique that hides malicious instructions in ordinary online content and has become a significant risk for artificial intelligence agents operating in web browsers. <strong>The company has implemented a security update for its ChatGPT Atlas tool<\/strong> after discovering a new class of attacks during automated internal security simulations.<\/p>\n\n\n<h2 class=\"wp-block-heading\">Not so much intelligence, but very artificial<\/h2>\n\n\n<p>The updated version of Atlas includes a model specifically trained to withstand adversarial attacks, as well as enhanced safeguards. <strong>According to OpenAI, the browser agent mode allows the software to interact with the web<\/strong> much as a human user would, accessing emails, documents, and web services, which makes it a far more valuable target for adversarial attacks than a traditional chatbot that only answers questions.<\/p>\n\n\n<p>The company has built an automated attacker that uses language models to identify prompt injection strategies and execute complex harmful workflows. 
<strong>This attacker can simulate encounters with malicious content<\/strong>, generating a complete trace of the victim agent&#8217;s reasoning and actions, which helps refine attacks over multiple rounds of testing.<\/p>\n\n\n<p>A hypothetical example illustrates the risk: a malicious email instructs the agent to send a resignation letter to the user&#8217;s boss. <strong>If the agent encounters this email while handling a legitimate request, it could misinterpret the instructions<\/strong> and act to the user&#8217;s detriment. This shift in the interaction dynamic highlights the need to address new forms of online risk.<\/p>\n\n\n<p>OpenAI is not alone in facing this problem; the UK&#8217;s National Cyber Security Centre has warned that these attacks may never be completely eliminated, urging organizations to minimize risks and limit impacts. With the introduction of its &#8220;Preparedness&#8221; team, OpenAI aims to identify and address these emerging risks in the fields of artificial intelligence and cybersecurity.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI has issued a warning about the growing threat of prompt injection attacks, a technique that hides malicious instructions in ordinary online content, becoming a considerable risk for artificial intelligence agents operating in web browsers. The company has implemented a security update for its ChatGPT Atlas tool after discovering a new class of attacks during automated internal security simulations. Not so much intelligence, but very artificial The updated version of Atlas includes a model specifically trained to withstand adversarial attacks, as well as reinforced safeguards. 
According to OpenAI, the browser agent mode [&#8230;]<\/p>\n","protected":false},"author":9317,"featured_media":356654,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","wpcf-pageviews":0},"categories":[1015],"tags":[5605,5668],"usertag":[],"vertical":[],"content-category":[],"class_list":["post-356653","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","tag-ciberseguridad","tag-openai"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts\/356653","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/users\/9317"}],"replies":[{"embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/comments?post=356653"}],"version-history":[{"count":2,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts\/356653\/revisions"}],"predecessor-version":[{"id":356661,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts\/356653\/revisions\/356661"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/media\/356654"}],"wp:attachment":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/media?parent=356653"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/categories?post=356653"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/tags?post=356653"},{"taxonomy":"usertag","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/usertag?post=356653"},{"taxonomy":"vertical","embeddable":true,"href":
"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/vertical?post=356653"},{"taxonomy":"content-category","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/content-category?post=356653"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}