AI training is a booming industry that is making the human contributors behind the screen more important than ever. As data from publicly available sources runs out, companies like Meta, Google, and OpenAI are hiring thousands of data labelers around the world to teach their chatbots the subjects those workers know best. Data labeling startups like Mercor and Handshake advertise that contributors can earn up to $100 an hour for their STEM, legal, or healthcare expertise.
Which? surveyed more than 4,000 UK adults about their use of AI and also put 40 questions on consumer issues such as health, finance, and travel to six bots - ChatGPT, Google Gemini, Gemini AI Overview, Copilot, Meta AI, and Perplexity. Things did not go well. Meta's AI answered correctly just over 50 percent of the time in the tests, while the most widely used AI tool, ChatGPT, came second from bottom at 64 percent. Perplexity came top at 71 percent. Different questions might yield different results, but the conclusion is clear: AI tools don't always come up with the correct answer.
AI has threatened to displace large swaths of the workforce, and many people are scared it will take their jobs. A certain segment of the population seems unconcerned, though, and even pretty enthusiastic about it. Perhaps unsurprisingly, one study has found that AI is most popular among workers making over $100,000. The study comes from business intelligence firm Morning Consult, which breaks down the fastest-growing brands by income level across a variety of product categories.
Businesses replacing human support agents with chatbots isn't new. Even before today's AI chatbots, companies were using heavily engineered bots that could understand only certain keywords and respond with specific canned answers. They were terrible, but they did one remarkable thing: they showed us what different demographics really expect from customer support, and they set the standard for how AI-first helpdesks should work - not only support agents but support overall, including documentation.
The fretting has swelled from a murmur to a clamor, all variations on the same foreboding theme: "Your Brain on ChatGPT." "AI Is Making You Dumber." "AI Is Killing Critical Thinking." Once, the fear was of a runaway intelligence that would wipe us out, maybe while turning the planet into a paper-clip factory. Now that chatbots are going the way of Google - moving from the miraculous to the taken-for-granted - the anxiety has shifted, too, from apocalypse to atrophy.
Looming over the proceedings even more prominently than the judge running the show were three tall digital displays, sticking out with their glossy finishes amid the courtroom's sea of wood paneling. Each screen represented a different AI chatbot: OpenAI's ChatGPT, xAI's Grok, and Anthropic's Claude. These AIs' role? As the "jurors" who would determine the fate of a man charged with juvenile robbery.
Only 9% of Americans use AI chatbots like ChatGPT or Gemini as a news source: 2% get news from AI often and 7% sometimes, while 16% do so rarely and 75% never, Pew found. Even those who do use it for news have trouble trusting it. A third of those who get news from AI say it's difficult to distinguish what is true from what is false, and the largest share of respondents, 42%, aren't sure whether they can tell the difference at all.
I was 19 and hopeless with girls. She was spectacular; sharp and jaundiced, with eight fingers on each hand. I knew I had to have her. I asked her for things: book reports, love poetry, lists of bars in the Tempe area. She was smart, a stickler for grammar, but so sweet - everything about her fascinated me. In the summer, we'd stay up all night talking about our dreams.