OpenAI Researcher Quits, Warns Its Unprecedented 'Archive of Human Candor' Is Dangerous

"In a week of pretty public exits from artificial intelligence companies, Zoe Hitzig's case is, arguably, the most attention-grabbing. The former researcher at OpenAI divorced the company in an op-ed in the New York Times in which she warned not of some vague, unnamed crisis like Anthropic's recently departed safeguard lead, but of something real and imminent: OpenAI's introduction of advertisements to ChatGPT and what information it will use to target those sponsored messages."
"There's an important distinction that Hitzig makes early in her op-ed: it's not advertising itself that is the issue, but rather the potential use of a vast amount of sensitive data that users have shared with ChatGPT without giving a second thought as to how it could be used to target them or who could potentially get their hands on it."
"'For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda,' she wrote. 'People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent.'"
A high-profile departure from OpenAI has drawn attention to the company's plans to introduce advertising into ChatGPT and the privacy implications of doing so. Users have shared medical fears, relationship problems, religious beliefs, and other sensitive details, producing an unprecedented archive of candid conversational data. OpenAI has announced advertising experiments and promised a firewall between conversations and ads, asserting that conversations will remain private and will not be sold to advertisers. Those promises currently lack binding guarantees or clear technical enforcement, raising doubts about long-term adherence. Advertising built on private conversations could enable manipulation that existing tools cannot reliably detect or prevent.
Read at gizmodo.com