AI Can Mass-Unmask Pseudonymous Accounts, Research Paper Finds
"Large language models can be used to perform at-scale deanonymization. In a series of experiments, the researchers showed that their agent could re-identify users on the popular forums Hacker News and Reddit based on their pseudonymous online profiles and conversations alone, something that would take hours for a dedicated human investigator to do. The results were alarming: the AI agent unmasked an astonishing two-thirds of users."
"Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered. Across Hacker News, Reddit, LinkedIn, and anonymized interview transcripts, our method identifies users with high precision and scales to tens of thousands of candidates."
"The average online user has long operated under an implicit threat model where they have assumed pseudonymity provides adequate protection because targeted deanonymization would require extensive effort. LLMs invalidate this assumption."
Researchers from ETH Zurich and Anthropic demonstrated that large language models can effectively deanonymize pseudonymous users across platforms like Hacker News, Reddit, and LinkedIn. Their AI agent successfully re-identified approximately two-thirds of users based solely on their online profiles and conversation patterns, accomplishing in seconds what would take human investigators hours. The researchers warn that the practical obscurity previously protecting pseudonymous users no longer exists, fundamentally challenging traditional online privacy threat models. The implications are significant, as most internet users have operated under the assumption that pseudonymity provides adequate protection due to the extensive effort required for manual deanonymization. Large language models have invalidated this foundational privacy assumption.
Read at Futurism