A study reveals that powerful language models can identify anonymous accounts
A recent study by ETH Zurich has revealed that Large Language Models (LLMs) have the ability to identify anonymous accounts on digital platforms alarmingly effectively.

According to the research, LLMs can conduct investigations that would normally take hours in just minutes, correctly identifying 9 out of 125 anonymous profiles when provided with summaries of their biographies. This suggests that LLMs are radically changing the landscape of online anonymity.

Privacy concerns

The results of the study indicate that these models can carry out large-scale deanonymization attacks, raising serious concerns about user privacy on the internet. Researchers warned that the ability of LLMs to correlate dispersed information across different platforms puts individuals who rely on anonymity at risk, including dissidents, human rights activists, and journalists in repressive countries.

One of the authors of the study stated that AI tools have drastically simplified the identification of pseudo-anonymous individuals online, which represents a significant change in operational security. He emphasized that this advancement can be particularly useful for security forces and intelligence agencies, which can now conduct investigations at a lower cost and more quickly.

Although the study did not focus on users with strong privacy practices, the findings highlight the fragility of pseudo-anonymity in the era of generative AI. Experts such as Jacob Hoffman-Andrews of the Electronic Frontier Foundation argue that even publishing seemingly innocuous personal information can help LLMs correlate accounts, making online privacy increasingly difficult to preserve.

Ultimately, the study suggests that the advancement of deanonymization technology could radically transform the way privacy is managed on the internet, affecting a wide range of users who value their ability to maintain anonymity.