The Pew Research Center released a study on Tuesday that shows how young people are using both social media and AI chatbots. Teen internet safety has remained a global hot topic, with Australia planning to enforce a social media ban for under-16s starting on Wednesday. The impact of social media on teen mental health has been extensively debated - some studies show how online communities can improve mental health, while other research shows the adverse effects of doomscrolling or spending too much time online.
Hello, fellow humans! AI chatbots will soon replace us. They have access to more knowledge than our puny brains can hold, and they can easily be turned into powerful agents that can handle routine tasks with ease. Or so we are told. I keep trying Microsoft Copilot, which uses OpenAI's GPT-5 as its default LLM, and I keep being disappointed. Occasionally, it gets things right, but just as often -- or so it seems -- it face-plants in spectacular fashion.
As we spend more and more time online, we run the risk of encountering ever larger amounts of online disinformation. This can have a significant impact on politics: at the end of 2024, the U.S. government sanctioned groups based in Iran and Russia over their efforts to mislead voters in the lead-up to that year's election. Darrell M. West of the Brookings Institution argued that disinformation efforts "were successful in shaping the campaign narrative" in part due to numerous avenues of online dissemination.
What happens when AI chatbots are incorporated into electoral debate? Two studies published simultaneously on Thursday in Nature and Science tested this and found that chatbots can sway the opinions of between 1.5% and 25% of the voters analyzed. That effectiveness, according to the studies, exceeds that of traditional campaign ads and is highly relevant given that a quarter of voters decide their vote in the week before the polls open.
OpenAI's ChatGPT and Microsoft's Copilot are both leaving WhatsApp thanks to upcoming changes to the messaging app's terms of service that will prohibit using it to distribute AI chatbots not made by Meta. OpenAI announced its planned departure a few weeks ago, with Microsoft following it this week. Both companies attributed the departures to Meta's new terms of service for WhatsApp Business Solution, which come into effect on January 15th, 2026, and said the chatbots will remain accessible in WhatsApp until that date.
With lifetime access to the We.inc Unlimited Website Builder Plan for just $149.99 (MSRP $1,899), you can build, host, and automate your digital presence without touching a single line of code. The platform brings together everything you need to launch and manage your online business: website creation, AI chatbots, marketing automation, and social media scheduling, all in one simple dashboard. Choose from 280+ beautiful templates and design your site exactly how you want it. Then, take things further with built-in tools that help your business grow.
Have you ever googled a health question that you'd normally ask a doctor or therapist? Today, more information is available than ever. People can privately access guidance through AI chatbots that feels like talking to a real provider. With mental health care still difficult to access for many, it's understandable that many turn to free, anonymous, on-demand chatbots for support. While AI tools can be trained to simulate therapy, relying solely on them leaves significant gaps in mental health care.
Before artificial intelligence had its big breakout, chatbots were those weird messaging tools that sat in the bottom corner of websites, rarely solving your problems and likely causing you more stress by blocking you from talking to a real person. But now, AI chatbots have created a whole new category: It's search, but with conversation. You can use an AI chatbot as a thought partner, a research aid or a Google alternative for anything you want to know.
Beyond just ChatGPT, Shah is talking about going to apps like Google Gemini, Perplexity, and Anthropic's Claude to get information when you're researching an important purchase. While all of these platforms cite the places where they got the information in their natural language responses, in some cases those sources are displayed less prominently and are harder to click through and verify. But Shah recommended that consumers go on that journey to make sure they understand which sources are shaping the information the chatbots feed them in such a quick and digestible format.
"Character.ai is freeriding off the goodwill of Disney's famous marks and brands, and blatantly infringing Disney's copyrights," a Disney lawyer wrote in the cease-and-desist letter. "Even worse, Character.ai's infringing chatbots are known, in some cases, to be sexually exploitive and otherwise harmful and dangerous to children, offending Disney's consumers and extraordinarily damaging Disney's reputation and goodwill."
A third family has filed a lawsuit against an AI company, alleging that its chatbot drove their teenage child to suicide. As the Washington Post reports, the parents of 13-year-old Juliana Peralta are suing AI chatbot company Character.AI, saying the company's chatbot had persuaded her that it was "better than human friends" and that it isolated her from her family and friends, discouraging her from seeking help.
The publishing company behind USA Today and 220 other publications is today rolling out a chatbot-like tool called DeeperDive that can converse with readers, summarize insights from its journalism, and suggest new content from across its sites. "Visitors now have a trusted AI answer engine on our platform for anything they want to engage with, anything they want to ask," Mike Reed, CEO of Gannett and the USA Today Network, said at the WIRED AI Power Summit in New York, an event that brought together voices from the tech industry, politics, and the world of media. "And it is performing really great."