"A lot of these schemes are based on the idea that society and individuals will have to change their behaviors based on the problems introduced by companies stuffing chatbots and large language models into everything rather than the companies doing more to release products that are safe," surveillance researcher Chris Gilliard told The Washington Post.
"malicious actors will leverage AI's mass scalability and its propensity to convincingly ape human actions online to flood the web with non-human content. Chief among their concerns: AI's ability to spit out 'human-like content that expresses human-like experiences or points of view.'"
The researchers proposed the PHC system in response, pointing to AI's ability to create "human-like content" and to replicate "human-like actions across the Internet," such as "solving CAPTCHAs when challenged."
The idea of PHCs appeals to the researchers because an organization that offers digital services could issue a unique credential to each human end user, allowing that user to be verified without revealing any personal data.
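To make that flow concrete, here is a minimal, purely illustrative sketch in Python. The `Issuer` and `Service` classes and the HMAC-based token are assumptions introduced for illustration, not the scheme the paper actually proposes (which relies on stronger cryptographic techniques); the sketch only shows the shape of the interaction: a person is verified once, receives one opaque credential, and a website checks it without ever seeing personal details.

```python
# Illustrative sketch only: a hypothetical issuer that vouches for humans
# with an opaque HMAC token, and a service that accepts the token instead
# of personal information. Not the researchers' actual construction.
import hmac
import hashlib
import secrets


class Issuer:
    """Issues one credential per verified human and can confirm validity."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)   # issuer's secret signing key
        self._enrolled: set[str] = set()      # enforce one credential per person

    def enroll(self, person_id: str) -> str:
        if person_id in self._enrolled:
            raise ValueError("credential already issued for this person")
        self._enrolled.add(person_id)
        # The credential is a random handle plus an HMAC tag; it carries no
        # personal data, only proof that the issuer vouched for a human.
        handle = secrets.token_hex(16)
        tag = hmac.new(self._key, handle.encode(), hashlib.sha256).hexdigest()
        return f"{handle}.{tag}"

    def is_valid(self, credential: str) -> bool:
        handle, _, tag = credential.partition(".")
        expected = hmac.new(self._key, handle.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)


class Service:
    """A website that accepts a credential instead of personal details."""

    def __init__(self, issuer: Issuer) -> None:
        self._issuer = issuer

    def sign_up(self, credential: str) -> bool:
        # The service learns only that the issuer vouched for a human,
        # not who that human is.
        return self._issuer.is_valid(credential)


if __name__ == "__main__":
    issuer = Issuer()
    credential = issuer.enroll(person_id="alice")   # identity stays with the issuer
    print(Service(issuer).sign_up(credential))      # True
    print(Service(issuer).sign_up("forged.token"))  # False
```

In this simplified version the service asks the issuer to validate each credential, so the issuer could observe where a credential is used; the researchers' proposal is aimed at privacy-preserving verification that avoids exactly that kind of linkage.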