AI systems are integrated into almost every facet of modern life. They suggest which shows and movies to watch and even help employers decide whom to hire.
But what happens when these systems, often assumed to be neutral, start making decisions that disadvantage certain groups or, worse, cause harm in the real world? It is a question thousands of professionals are now asking.
The often overlooked consequences of AI applications demand regulatory frameworks that can keep pace with a rapidly evolving technology. Sylvia Lu, an expert in this field, has studied the intersection of law and technology and outlined a legal framework to do just that.
Regulation that never moves as fast as innovation
Despite these growing dangers, legal frameworks around the world have struggled to keep up. In the United States, a regulatory approach that emphasizes innovation has made it difficult to impose strict standards on how these systems are used across different contexts.
Courts and regulatory bodies are accustomed to dealing with concrete harms, but algorithmic harms are often more subtle, cumulative, and difficult to detect. Regulations rarely address the broader effects that AI systems can have over time.
Social media algorithms, for example, can gradually erode users’ mental health, but because these harms accumulate slowly, they are difficult to address within the limits of current legal norms.
Creating true accountability
By categorizing the types of algorithmic harm, Lu outlines the legal boundaries of AI regulation and presents possible legal reforms to close this liability gap.
Among the changes the expert believes would help are mandatory algorithmic impact assessments, which would require companies to document and address the immediate and cumulative harms an AI application poses to privacy, autonomy, equality, and security, both before and after deployment.
For example, companies that use facial recognition systems would have to assess the impact of these systems throughout their lifecycle.
Another useful change would be to strengthen individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt-in.
For example, companies using facial recognition systems could be required to obtain opt-in consent before processing data and to let users opt out at any time.
Finally, the expert suggests requiring companies to disclose their use of AI technology and its anticipated harms. This could include, for example, notifying customers that facial recognition systems are in use and flagging the anticipated harms in the areas the typology describes.
As the use of AI systems in critical social functions becomes widespread (from healthcare to education and employment), the need to regulate the harms they can cause becomes more urgent. Without intervention, it is likely that these invisible harms will continue to accumulate, affecting almost everyone and disproportionately impacting the most vulnerable.
With generative AI multiplying and exacerbating these harms, the expert believes it is important for policymakers, courts, technology developers, and civil society to recognize AI's legal harms. This requires not only better laws but also a more thoughtful approach to cutting-edge AI technology, one that prioritizes civil rights and justice in the face of rapid technological advances.
As Sylvia Lu says in her article, the future of AI is incredibly promising, but without the proper legal frameworks, it could also entrench inequality and erode the very civil rights that, in many cases, it is designed to enhance.