
"In a guest essay in The New York Times titled, "OpenAI Is Making the Mistakes Facebook Made. I Quit," Hitzig said she'd spent two years as a researcher guiding safety polices and shaping how AI models were built and priced. Since the introduction of ads, she believes, OpenAI may no longer be interested in addressing some of the bigger issues AI poses to society."
"She draws comparisons with Facebook Inc.'s early days when the company told its users they would have control over their data and be able to vote on policies. That, she says, didn't last long, citing the Federal Trade Commission's investigation that exposed Facebook's less-than-noble privacy practices. A company starts with the best intentions, or at least seems to be starting with the best intentions, which then devolves into unfettered profit-seeking."
"She explained that ChatGPT has "generated an archive of human candor that has no precedent." Users chat with the product about everything in the world, often about their most personal issues - evident in the million people a week who talk to ChatGPT about mental distress, the hordes who may or may not be afflicted with " AI psychosis.""
An OpenAI researcher resigned over the introduction of advertisements inside ChatGPT, citing concerns that monetization could shift company priorities away from addressing major societal harms posed by AI. The researcher acknowledged that ads can generate necessary revenue for costly model operation but expressed deep reservations about the strategic direction. ChatGPT has accumulated an unprecedented archive of highly personal user conversations, including disclosures of mental distress and intimate beliefs. Users converse candidly believing the chatbot has no ulterior agenda, making that archive vulnerable to manipulation through targeted advertising. The researcher compared this trajectory to Facebook's evolution toward profit over privacy.
Read at SiliconANGLE