
"Hitzig described chatbots as an "archive of human candour" that has no precedent. She warned that embedding ads into such a system could open the door to manipulation. "Advertising built on that archive creates a potential for influencing users in ways we don't have the tools to understand," she wrote in a guest essay for The New York Times."
"Her concern is not about simple banner ads or sponsored replies. Instead, she highlighted the sensitive nature of the information users share with ChatGPT. Conversations with AI often tend to be private and unfiltered. People use chatbots to discuss health worries, relationship struggles, faith, and deeply personal dilemmas."
"Hitzig believes that once ads become part of the revenue model, financial incentives could gradually reshape priorities. She compared this to Facebook's early promises of privacy and user control, which were later abandoned as advertising became central to its business."
A major AI developer has begun testing advertisements inside ChatGPT, prompting privacy and trust concerns. A researcher resigned, warning that embedding ads could erode user trust and create manipulation risks akin to those seen on social media. Because users often share private, unfiltered information, conversational data is uniquely intimate, and introducing advertising could tie financial incentives to engagement, potentially reshaping how the system behaves. Critics are calling for stronger safeguards, independent oversight, and legal protections to prevent commercial pressures from undermining privacy and responsible AI governance.
Read at Mashable ME