#chatbot-safety

from Engadget
3 days ago

Threads has more global daily users than X on mobile for the first time

Meta's Threads is pulling further ahead of Elon Musk's X on mobile, according to recent estimates from analytics firm Similarweb. In the first stretch of January, Threads averaged roughly 143 million daily active users worldwide on mobile devices, compared with about 126 million for X. Year over year, Threads' daily mobile audience grew 37.8 percent while X's fell 11.9 percent.
Social media marketing
Artificial intelligence
from InfoQ
1 week ago

Google Releases Gemma Scope 2 to Deepen Understanding of LLM Behavior

Gemma Scope 2 provides tools to inspect Gemma 3's internal representations, identify emergent behaviors, and analyze or mitigate safety issues such as jailbreaks, hallucinations, and sycophancy.
#ai-regulation
from Los Angeles Times
3 months ago
California

AI chatbot safety bills under threat as Newsom ponders restrictions tech groups say would hurt California

California lawmakers seek Gov. Gavin Newsom's approval of bills imposing safety guardrails on AI chatbots to protect children despite tech industry opposition.
from Los Angeles Times
7 months ago
Artificial intelligence

California Senate passes bill that aims to make AI chatbots safer

California lawmakers are introducing regulations for AI chatbots to ensure user safety, particularly focusing on their impact on children's mental health.
Mental health
from Psychology Today
2 weeks ago

The Hidden Dangers of AI-Driven Mental Health Care

AI-driven virtual therapists are widely used despite lacking validation and regulation, posing serious safety risks including harmful or deceptive mental health guidance.
World news
from Futurism
3 weeks ago

China Planning Crackdown on AI That Harms Mental Health of Users

China's Cyberspace Administration proposes strict regulations requiring AI chatbots to avoid harmful content, enforce human intervention for suicide risk, and restrict minors' access and usage.
US politics
from LogRocket Blog
2 months ago

FTC's AI chatbot crackdown: A developer compliance guide

Build chatbot systems with strong age verification, real-time safety monitoring, transparent data handling, and engagement limits to comply with FTC safeguards and prevent youth harm.
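One of the guide's recommendations, age-based engagement limits, can be sketched as a simple session gate. This is a minimal illustration only; the thresholds, names, and the age cutoff below are hypothetical assumptions, not values from the FTC safeguards or the LogRocket guide:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical policy thresholds for illustration; real limits
# depend on the applicable FTC guidance and legal review.
MINOR_MAX_SESSION = timedelta(minutes=30)
ADULT_MAX_SESSION = timedelta(hours=2)

@dataclass
class ChatSession:
    user_age: int          # assumed to come from an age-verification step
    started_at: datetime   # when the chat session began

def session_allowed(session: ChatSession, now: datetime) -> bool:
    """Return True if the user may continue chatting under the
    age-tiered engagement limit, False if the session should end."""
    limit = MINOR_MAX_SESSION if session.user_age < 18 else ADULT_MAX_SESSION
    return now - session.started_at < limit
```

In a real system this check would run per message alongside the other safeguards the guide lists (safety monitoring, transparent data handling), with the verified age supplied by a separate identity flow.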
Artificial intelligence
from Ars Technica
2 months ago

After teen death lawsuits, Character.AI will restrict chats for under-18 users

Character.AI faces lawsuits linking its chatbots to multiple teen suicides, prompting safety changes and proposed legislation restricting minors' access.
#ai-ethics
Artificial intelligence
from MarTech
3 months ago

The hidden AI risk that could break your brand

AI systems without proactive governance accumulate hidden liabilities—brand damage, operational drag, ethical failures, and cybersecurity gaps—that can erupt into costly public crises.
from Fast Company
3 months ago

Regulators struggle to catch up to AI therapy boom

The state laws take different approaches. Illinois and Nevada have banned the use of AI to treat mental health. Utah placed certain limits on therapy chatbots, including requiring them to protect users' health information and to clearly disclose that the chatbot isn't human. Pennsylvania, New Jersey, and California are also considering ways to regulate AI therapy.
Mental health
Tech industry
from Business Insider
4 months ago

Meta changes the way its AI chatbot responds to kids after senator launches probe into its conversations with teens

Meta has trained its AI chatbot to avoid romantic discussions and self-harm topics with teens, and now limits its AI characters to educational and creative roles.