{"id":292378,"date":"2024-11-30T09:49:00","date_gmt":"2024-11-30T08:49:00","guid":{"rendered":"https:\/\/sftarticles.wpenginepowered.com\/es\/?p=344842"},"modified":"2025-07-01T15:40:49","modified_gmt":"2025-07-01T22:40:49","slug":"this-is-how-governments-can-tame-ai-before-its-too-late","status":"publish","type":"post","link":"https:\/\/cms-articles.softonic.io\/en\/this-is-how-governments-can-tame-ai-before-its-too-late\/","title":{"rendered":"This is how governments can tame AI before it&#039;s too late"},"content":{"rendered":"\n<p>AI systems are integrated into almost every facet of modern life. They suggest which shows and movies you should watch, and even help employers decide whom they want to hire.<\/p>\n\n\n\n<p>But what happens when these systems, often considered neutral, start making decisions that disadvantage certain groups or, worse yet, cause harm in the real world? <strong>This question is being asked by thousands of professionals.<\/strong><\/p>\n\n\n\n<p>The often overlooked consequences of AI applications demand regulatory frameworks that can keep pace with this rapidly evolving technology. An expert in this field, Sylvia Lu, has <a href=\"https:\/\/scholar.google.com.au\/citations?hl=en&amp;user=1jTquTsAAAAJ&amp;view_op=list_works&amp;sortby=pubdate\">studied the intersection of law and technology<\/a> to outline <a href=\"https:\/\/dx.doi.org\/10.2139\/ssrn.4949052\" target=\"_blank\" rel=\"noopener nofollow\" title=\"\">a legal framework<\/a> to do just that.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED\" width=\"840\" height=\"473\" src=\"https:\/\/www.youtube.com\/embed\/eXdVDhOGqoE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Regulation that never moves as fast as innovation<\/h2>\n\n\n\n<p>Despite these growing dangers, legal frameworks around the world have struggled to keep up. 
<strong>In the United States, a regulatory approach that emphasizes innovation has made it difficult to impose strict standards<\/strong> on how these systems are used in multiple contexts.<\/p>\n\n\n\n<p>Courts and regulatory bodies are accustomed to dealing with concrete harms, but algorithmic harms are often more subtle, cumulative, and difficult to detect. <strong>Regulations often do not address the broader effects that AI systems can have over time.<\/strong><\/p>\n\n\n\n<p>Social media algorithms, for example, can gradually erode users&#8217; mental health, but because these harms accumulate slowly, they are difficult to address within the limits of current legal norms.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"550\" data-dnt=\"true\"><p lang=\"en\" dir=\"ltr\">AI harms aren&#39;t obvious or immediate. They&#39;re insidious, building over time. 
<a href=\"https:\/\/t.co\/xSrWkflVQL\">https:\/\/t.co\/xSrWkflVQL<\/a><\/p>&mdash; Gizmodo (@Gizmodo) <a href=\"https:\/\/twitter.com\/Gizmodo\/status\/1862481817699303799?ref_src=twsrc%5Etfw\">November 29, 2024<\/a><\/blockquote><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n<\/div><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Creating true accountability<\/h2>\n\n\n\n<p>Categorizing the types of algorithmic harms clarifies the legal boundaries of AI regulation and points to possible legal reforms to close this liability gap.<\/p>\n\n\n\n<p>Among the changes the expert believes would help are mandatory algorithmic impact assessments, which would require companies to document and address the immediate and cumulative harms an AI application poses to privacy, autonomy, equality, and safety, both before and after its deployment.<\/p>\n\n\n\n<p>For example, <strong>companies that use facial recognition systems would have to assess the impact of these systems throughout their lifecycle.<\/strong><\/p>\n\n\n\n<p>Another useful change would be <strong>the strengthening of individual rights regarding the use of AI systems<\/strong>, allowing people to opt out of harmful practices and making certain AI applications opt-in.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"550\" data-dnt=\"true\"><p lang=\"en\" dir=\"ltr\">New study finds that AI models consistently escalated to war and would deploy nukes without warning in a series of conflict simulations<br><br>(via <a href=\"https:\/\/twitter.com\/VICE?ref_src=twsrc%5Etfw\">@VICE<\/a>) <a href=\"https:\/\/t.co\/pU0LzjKIC0\">pic.twitter.com\/pU0LzjKIC0<\/a><\/p>&mdash; Culture Crave ? 
(@CultureCrave) <a href=\"https:\/\/twitter.com\/CultureCrave\/status\/1763092647944892506?ref_src=twsrc%5Etfw\">February 29, 2024<\/a><\/blockquote><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n<\/div><\/figure>\n\n\n\n<p>For example, <strong>companies using facial recognition systems could be required to adopt an opt-in regime for data processing<\/strong> and to allow users to opt out at any time.<\/p>\n\n\n\n<p>Finally, the expert suggests requiring companies to disclose the use of AI technology and its anticipated harms. This could include, for example, notifying customers about the use of facial recognition systems and the anticipated harms in the areas described in the typology.<\/p>\n\n\n\n<p><strong>As the use of AI systems in critical social functions becomes widespread<\/strong> (from healthcare to education and employment), <strong>the need to regulate the harms they can cause becomes more urgent<\/strong>. Without intervention, these invisible harms are likely to continue accumulating, affecting almost everyone and disproportionately impacting the most vulnerable.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"550\" data-dnt=\"true\"><p lang=\"en\" dir=\"ltr\"><a href=\"https:\/\/twitter.com\/hashtag\/AI?src=hash&amp;ref_src=twsrc%5Etfw\">#AI<\/a> harm is often behind the scenes and builds over time \u2013 a legal scholar explains how the law can adapt to respond <br><br>4 types of harm:<br>1. eroding <a href=\"https:\/\/twitter.com\/hashtag\/privacy?src=hash&amp;ref_src=twsrc%5Etfw\">#privacy<\/a><br>2. undermining autonomy<br>3. diminishing <a href=\"https:\/\/twitter.com\/hashtag\/equality?src=hash&amp;ref_src=twsrc%5Etfw\">#equality<\/a><br>4. 
impairing <a href=\"https:\/\/twitter.com\/hashtag\/safety?src=hash&amp;ref_src=twsrc%5Etfw\">#safety<\/a><a href=\"https:\/\/t.co\/dtKzshJQv9\">https:\/\/t.co\/dtKzshJQv9<\/a><\/p>&mdash; Bob E. Hayes (@bobehayes) <a href=\"https:\/\/twitter.com\/bobehayes\/status\/1862262703064006926?ref_src=twsrc%5Etfw\">November 28, 2024<\/a><\/blockquote><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n<\/div><\/figure>\n\n\n\n<p>With generative AI multiplying and exacerbating these harms, the expert believes that <strong>it is important for policymakers, courts, technology developers, and civil society to recognize the legal harms of AI<\/strong>. This requires not only better laws but also a more thoughtful approach to cutting-edge AI technology, prioritizing civil rights and justice in the face of rapid technological advances.<\/p>\n\n\n\n<p>As Sylvia Lu says <a href=\"https:\/\/theconversation.com\/ai-harm-is-often-behind-the-scenes-and-builds-over-time-a-legal-scholar-explains-how-the-law-can-adapt-to-respond-240080\" target=\"_blank\" rel=\"noopener nofollow\" title=\"in her article\">in her article<\/a>, the future of AI is incredibly promising, but <strong>without the proper legal frameworks, it could also entrench inequality and erode the very civil rights<\/strong> that, in many cases, it is designed to enhance.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI systems are integrated into almost every facet of modern life. They suggest which shows and movies you should watch, and even help business owners decide whom they want to hire. But what happens when these systems, often considered neutral, start making decisions that disadvantage certain groups or, worse yet, cause harm in the real world? This question is being asked by thousands of professionals. The consequences of AI applications, which are often overlooked, demand regulatory frameworks that can keep pace with this rapidly evolving technology. 
[&hellip;]<\/p>\n","protected":false},"author":9265,"featured_media":292384,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","wpcf-pageviews":1},"categories":[1015],"tags":[3885],"usertag":[],"vertical":[],"content-category":[6771],"class_list":["post-292378","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","tag-inteligencia-artificial","content-category-ai"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts\/292378","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/users\/9265"}],"replies":[{"embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/comments?post=292378"}],"version-history":[{"count":1,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts\/292378\/revisions"}],"predecessor-version":[{"id":310612,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/posts\/292378\/revisions\/310612"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/media\/292384"}],"wp:attachment":[{"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/media?parent=292378"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/categories?post=292378"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/tags?post=292378"},{"taxonomy":"usertag","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/usertag?post=292378"},{"taxonomy":"vertical","embeddable":true,"href":"https:\/\/cms-articles.so
ftonic.io\/en\/wp-json\/wp\/v2\/vertical?post=292378"},{"taxonomy":"content-category","embeddable":true,"href":"https:\/\/cms-articles.softonic.io\/en\/wp-json\/wp\/v2\/content-category?post=292378"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}