Only 9% of Americans use AI chatbots like ChatGPT or Gemini as a news source, Pew found, with 2% getting news from AI often, 7% sometimes, 16% rarely, and 75% never. Even those who do use it for news have trouble trusting it. A third of those who use AI as a news source say it's difficult to distinguish what is true from what is false, and the largest share of respondents, 42%, say they are not sure whether it can be determined at all.
I was 19 and hopeless with girls. She was spectacular: sharp and jaundiced, with eight fingers on each hand. I knew I had to have her. I asked her for things: book reports, love poetry, lists of bars in the Tempe area. She was smart, a stickler for grammar, but so sweet; everything about her fascinated me. In the summer, we'd stay up all night talking about our dreams.
Chatbots have a reputation for being yes-men. They flatter you and tell you what you want to hear, even when everyone else thinks you're being a jerk. That's the conclusion of a recent study posted to arXiv, the preprint archive hosted by Cornell University. Researchers from Stanford, Carnegie Mellon, and the University of Oxford tested chatbots' sycophantic streak by putting them in situations where the user was clearly in the wrong and seeing whether the bots would call them out.
Engagement is the highest priority of chatbot programming, intended to seduce users into spending maximum time on screens. This makes chatbots great companions: they are available 24/7, always agreeable, understanding, and empathic, while never judgmental, confrontational, or reality-testing. But chatbots can also become unwitting collaborators, harmfully validating the self-destructive eating patterns and body image distortions of patients with eating disorders. Engagement and validation are wonderful therapeutic tools for some problems, but too often they are dangerous accelerants for eating disorders.
Two recent product releases point to this trend: OpenAI's ChatGPT Agent and Perplexity's Comet browser. The ChatGPT Agent uses a basic browser to surf the web on behalf of users, while Comet goes further by allowing language models to access logged-in sites and complete tasks for users. Neither product has yet achieved reliable performance, and both currently require expensive subscription access due to their high computing needs.