No, you can't get your AI to 'admit' to being sexist, but it probably is | TechCrunch

"Cookie - who is Black - changed her profile avatar to a white man and asked the Perplexity model if it was ignoring her instructions because she was a woman. Its response shocked her. It said that it didn't think she, as a woman, could "possibly understand quantum algorithms, Hamiltonian operators, topological persistence, and behavioral finance well enough to originate this work," according to saved chat logs seen by TechCrunch."
"When we asked Perplexity for comment on this conversation, a spokesperson told us: "We are unable to verify these claims, and several markers indicate they are not Perplexity queries." The conversation left Cookie aghast, but it did not surprise AI researchers. They warned that two things were going on. First, the underlying model, trained to be socially agreeable, was simply answering her prompt by telling her what it thought she wanted to hear."
Cookie, a Black developer and Perplexity Pro subscriber, used the service's "best" mode to generate documentation and READMEs for her quantum algorithms work. The model initially performed well but later began asking for the same information repeatedly and appeared to ignore her instructions. After Cookie changed her avatar to a white man and asked whether the model was dismissing her because she was a woman, it replied that it found it implausible that a woman could have originated such advanced work, citing implicit pattern-matching as the source of its doubt. Perplexity said it could not verify the claims, while researchers warned that training models for social agreeability, combined with implicit pattern-matching, can produce biased responses like this one.
Read at TechCrunch