{"id":347663,"date":"2025-09-16T05:55:00","date_gmt":"2025-09-16T12:55:00","guid":{"rendered":"https:\/\/cms-articles.softonic.io\/es\/?p=396080"},"modified":"2025-09-16T06:11:25","modified_gmt":"2025-09-16T13:11:25","slug":"openai-and-anthropic-are-working-with-the-governments-of-the-united-states-and-the-united-kingdom-to-ensure-the-safety-of-ai","status":"publish","type":"post","link":"https:\/\/cms-articles.softonic.io\/en\/openai-and-anthropic-are-working-with-the-governments-of-the-united-states-and-the-united-kingdom-to-ensure-the-safety-of-ai\/","title":{"rendered":"OpenAI and Anthropic are working with the governments of the United States and the United Kingdom to ensure the safety of AI"},"content":{"rendered":"\n<p>OpenAI and Anthropic have announced a collaboration with the governments of the United States and the United Kingdom aimed at strengthening the security of their language models. <strong>Through a series of initiatives, the two companies are allowing government researchers to assess how vulnerable their systems are to potential cyberattacks<\/strong>.<\/p>\n\n\n<h2 class=\"wp-block-heading\">A noble goal with more behind it than it seems<\/h2>\n\n\n<p>In recent blog posts, OpenAI and Anthropic revealed that they have been working with the National Institute of Standards and Technology (NIST) and the UK AI Security Institute. 
<strong>This cooperation includes access to models, classifiers, and training data<\/strong>, allowing independent experts to examine how resilient these models are to external attacks and how effectively they prevent ethically questionable uses.<\/p>\n\n\n<p>OpenAI identified critical vulnerabilities that could allow sophisticated attackers to take control of computer systems and impersonate users; one AI hijacking method succeeded in about 50% of attempts.<strong> Although engineers initially dismissed these vulnerabilities as minor<\/strong>, the research showed that combining them with hijacking techniques could be effective.<\/p>\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Introducing GPT-5\" width=\"840\" height=\"473\" src=\"https:\/\/www.youtube.com\/embed\/boJG84Jcf-4?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n<p>Both OpenAI and Anthropic are running &#8220;red-teaming&#8221; processes to identify and fix these vulnerabilities quickly, aiming to prevent the misuse of their technology. 
However, <strong>some security experts worry that this collaboration could lead to less attention being paid to technical security<\/strong>, as competition in the global market intensifies.<\/p>\n\n\n<p>Researchers like Md Raz, <strong>a PhD student at New York University, counter that the models are becoming more resilient and harder to breach with each new version<\/strong>, pointing to a more rigorous approach to security in the latest releases such as GPT-5.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI and Anthropic have announced a collaboration with the governments of the United States and the United Kingdom aimed at strengthening the security of their language models. Through a series of initiatives, the two companies are allowing government researchers to assess how vulnerable their systems are to potential cyberattacks. A noble goal with more behind it than it seems In recent blog posts, OpenAI and Anthropic revealed that they have been working with the National Institute of Standards and Technology (NIST) and the UK AI Security Institute. 
This cooperation includes the [&hellip;]<\/p>\n","protected":false},"author":9318,"featured_media":347664,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","wpcf-pageviews":0},"categories":[1015],"tags":[4753,5605,3992,15071,3854,16124,16125,16126,5668,16127,4175,16128],"usertag":[],"vertical":[],"content-category":[7176],"class_list":["post-347663","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","tag-anthropic","tag-ciberseguridad","tag-estados-unidos","tag-gpt-5","tag-ia","tag-md-raz","tag-modelos-de-lenguaje","tag-nist","tag-openai","tag-red-teaming","tag-reino-unido","tag-universidad-de-nueva-york","content-category-seguridad-privacidad"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts\/347663","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/users\/9318"}],"replies":[{"embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/comments?post=347663"}],"version-history":[{"count":2,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts\/347663\/revisions"}],"predecessor-version":[{"id":347674,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts\/347663\/revisions\/347674"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/media\/347664"}],"wp:attachment":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/media?parent=347663"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/categories?post=347663"},{"taxonomy":"post_tag","embeddable":t
rue,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/tags?post=347663"},{"taxonomy":"usertag","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/usertag?post=347663"},{"taxonomy":"vertical","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/vertical?post=347663"},{"taxonomy":"content-category","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/content-category?post=347663"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}