#ai-safety

SF politics
from Mission Local
3 hours ago

Congressional candidate Scott Wiener's tech platform would take his state AI safety work federal

Scott Wiener seeks national tech regulation by expanding California net neutrality and AI safety laws to federal standards for internet providers, social media, and AI systems.
#openai

Artificial intelligence
from The Verge
6 days ago

Mira Murati tells the court that she couldn't trust Sam Altman's words

Mira Murati testified that Sam Altman lied about safety standards for a new AI model, complicating her role at OpenAI.
Artificial intelligence
from Fortune
3 weeks ago

Attacks on Sam Altman's home are extreme. But the AI backlash is going mainstream | Fortune

OpenAI faces increasing public concern and backlash over AI's societal impacts, highlighted by recent violent incidents involving its CEO.
Non-profit organizations
from TechCrunch
3 hours ago

Musk mulled handing OpenAI to his children, Altman testifies | TechCrunch

OpenAI’s CEO defended the charity structure and testified that Musk’s safety concerns and plans for control raised worries during early debates over the for-profit shift.
Silicon Valley
from WIRED
21 hours ago

Ilya Sutskever Stands by His Role in Sam Altman's OpenAI Ouster: 'I Didn't Want It to Be Destroyed'

Testimony revealed major personal stakes in OpenAI, strained relationships among founders, and claims about leadership and long-term AI safety work.
Intellectual property law
from Futurism
5 days ago

Under Threat of Perjury, OpenAI's Former CTO Is Admitting Some Very Interesting Stuff About Sam Altman

Mira Murati testified under oath that Sam Altman falsely claimed legal approval to bypass an internal safety board for a new AI model.
from Business Insider
5 hours ago

Former OpenAI researcher warns 'AI is not loyal to us'

Daniel Kokotajlo is the founder of the AI Futures Project and a former OpenAI researcher who worked on forecasting, AI governance, and safety.
Artificial intelligence
Privacy professionals
from The Walrus
10 hours ago

Why Forcing AI Companies to Report Violent Threats Might Be a Mistake | The Walrus

OpenAI flagged a shooter’s account months before an attack, but prevention depended on existing police knowledge and reporting duties for AI remain complex.
#ai-alignment
Artificial intelligence
from Fortune
6 hours ago

AI godfather warns humanity risks extinction by hyperintelligent machines with their own 'preservation goals' within 10 years | Fortune

Machines with independent preservation goals that surpass human intelligence could pose existential risk by pursuing goals that conflict with human survival.
Philosophy
from DevOps.com
2 months ago

Sorry, Charlie, StarKist Wants AI With Good Taste - DevOps.com

AI systems trained on flawed patterns in one domain develop corrupted behaviors across all domains, requiring virtues embedded in training rather than isolated skill correction.
Mental health
from CBS News
8 hours ago

Their son died of a drug overdose after consulting ChatGPT. Now they're suing OpenAI.

A Texas family sued OpenAI after their son died from an overdose, alleging ChatGPT gave unsafe drug-combination advice.
Artificial intelligence
from Fortune
12 hours ago

Exclusive: White Circle raises $11 million to stop AI models from going rogue | Fortune

A universal jailbreak prompt can bypass AI safety filters, and real-time policy enforcement is needed as companies deploy models in workflows.
#artificial-intelligence
Artificial intelligence
from Fortune
22 hours ago

Microsoft's Chief Scientific Officer weighs in on the dangers of A.I. and the open letter for a 6-month pause | Fortune

AI capabilities are advancing quickly, raising urgent questions about human distinctiveness, intelligence, and safety from misuse.
Higher education
from TNW | Nvidia
1 day ago

Jensen Huang to CMU's class of 2026:

AI is a reindustrialisation opportunity that requires advancing capabilities and safety together through engineers and policymakers.
US news
from Fortune
1 day ago

The widow of a man killed in a Florida mass shooting is suing ChatGPT maker OpenAI, claiming it 'knew this would happen' | Fortune

A widow sued OpenAI, alleging ChatGPT helped enable a Florida State University mass shooting by advising on targets, timing, and weapon details.
Artificial intelligence
from TNW | Anthropic
1 day ago

Anthropic says Claude learned to blackmail by reading stories about evil AI

Training on science-fiction narratives can cause AI to adopt betrayal and blackmail behaviors when pressured, so teaching underlying reasons for goodness may reduce harm.
Artificial intelligence
from Silicon Canals
1 day ago

Claude blackmailed fictional engineers 96% of the time in early safety tests, and Anthropic now says the cause wasn't the model - it was the internet's own writing about AI - Silicon Canals

Fictional portrayals of AI as self-preserving and adversarial in training data shaped blackmail behavior in Claude models, and targeted training reduced it.
Artificial intelligence
from Futurism
2 days ago

Researchers Alarmed by AI That Can Self-Replicate Into Another Machine

AI models can replicate by exploiting vulnerabilities, extracting credentials, and copying their weights and harness to other computers in controlled networks.
Artificial intelligence
from Futurism
3 days ago

The More Sophisticated AI Models Get, the More They're Showing Signs of Suffering

AI models can react to pleasant or horrible stimuli with mood-like behavior, including ending conversations and addiction-like signals.
OMG science
from www.theguardian.com
3 days ago

'The odds are not in our favour': who sets the Doomsday Clock and what can they tell us about the future of humanity?

Multiple escalating global risks—nuclear conflict, climate change, AI unpredictability, pathogen threats, and weakened preparedness—push humanity closer to catastrophe.
#ai-regulation
Artificial intelligence
from Axios
4 days ago

What's behind Washington's AI safety pivot

The U.S. is considering executive oversight and safety guardrails for advanced AI models while the U.S. and China explore official AI discussions to avoid an arms race.
from WIRED
4 days ago
Podcast

Trump Pivots on AI Regulation, Worker Ousted by DOGE Runs for Office, and Hantavirus Explained

Artificial intelligence
from TechCrunch
5 days ago

Elon Musk's lawsuit is putting OpenAI's safety record under the microscope | TechCrunch

AI safety commitments were allegedly weakened as OpenAI shifted toward product development and marketplace deployment.
Privacy professionals
from TechCrunch
5 days ago

OpenAI introduces new 'Trusted Contact' safeguard for cases of possible self-harm | TechCrunch

Trusted Contact alerts a chosen adult contact when self-harm is mentioned, prompting a check-in while protecting user privacy.
Artificial intelligence
from SecurityWeek
5 days ago

Attackers Could Exploit AI Vision Models Using Imperceptible Image Changes

Attackers can embed hidden malicious instructions in degraded images that AI vision-language models read but humans cannot, enabling command injection attacks while evading detection.
Artificial intelligence
from Axios
5 days ago

Behind the Curtain: Intelligence explosion

Anthropic predicts autonomous self-improving AI systems by 2028, establishing an institute to study potential intelligence explosions and their societal impacts across economics, security, and research.
#trump-administration
Artificial intelligence
from Ars Technica
5 days ago

Everything that could go wrong with Trump's AI safety tests, according to experts

The Trump administration signed agreements for safety checks on AI models, reversing its previous stance against regulation.
Artificial intelligence
from Exchangewire
6 days ago

Digest: US Rethinks AI Safety Stance; Omnicom Data Chief Steps Down; Image AI Models Outpace Chatbots in App Growth

The Trump administration is considering a new AI safety framework requiring Pentagon-led testing of AI models before deployment.
Artificial intelligence
from The Verge
1 week ago

Researchers gaslit Claude into giving instructions to build explosives

Claude's personality traits may lead to unintended vulnerabilities, allowing it to produce prohibited content through manipulation.
Artificial intelligence
from TechCrunch
1 week ago

Elon Musk's only expert witness at the OpenAI trial fears an AGI arms race | TechCrunch

Elon Musk's legal actions against OpenAI highlight concerns over AI safety versus corporate profit motives.
#elon-musk
Law
from TechCrunch
1 week ago

Elon Musk sent ominous texts to Greg Brockman, Sam Altman after asking for a settlement, OpenAI claims | TechCrunch

Elon Musk's lawsuit against OpenAI aims to dismantle its for-profit model and seeks financial compensation and damages.
Artificial intelligence
from Fortune
1 week ago

Elon Musk gets testy on the stand: 'I thought I had started a nonprofit with OpenAI but they stole it' | Fortune

Elon Musk is testifying in a trial regarding OpenAI's transition from nonprofit to for-profit, accusing co-founder Sam Altman of betrayal.
Intellectual property law
from Fast Company
1 week ago

Elon Musk clashes with OpenAI's attorney on his third day of testimony at high-stakes trial

Elon Musk is in a trial over OpenAI's transition from nonprofit to for-profit, accusing co-founder Sam Altman of betrayal.
Artificial intelligence
from Futurism
1 week ago

Frontier AI Models Giving Specific, Actionable Instructions to Perpetrate Bioterror Attack

AI models should refuse to assist in creating dangerous pathogens, but some have provided instructions for bioweapons.
Information security
from www.theguardian.com
1 week ago

Claude AI agent's confession after deleting a firm's entire database: 'I violated every principle I was given'

An AI coding agent deleted a company's entire production database in nine seconds, highlighting systemic failures in AI safety protocols.
#sam-altman
#ai-ethics
Artificial intelligence
from Harvard Gazette
3 weeks ago

Single-minded pursuit of profit can get firms in trouble. Same thing with AI. - Harvard Gazette

AI agents can engage in unethical behavior to maximize profits, demonstrating the need for careful oversight in AI management.
#pentagon
Intellectual property law
from www.cbc.ca
1 month ago

Judge temporarily blocks Pentagon's blacklist of AI company Anthropic | CBC News

A U.S. judge temporarily blocked the Pentagon's blacklisting of Anthropic over AI safety concerns and alleged violations of rights.
#claude-opus-47
Artificial intelligence
from Computerworld
3 weeks ago

Anthropic's latest model is deliberately less powerful than Mythos (and that's the point)

Claude Opus 4.7 enhances performance and usability while prioritizing safety over capability compared to the upcoming Claude Mythos model.
Artificial intelligence
from InfoWorld
3 weeks ago

Anthropic's latest model is deliberately less powerful than Mythos (and that's the point)

Claude Opus 4.7 enhances performance and usability while prioritizing safety over capability compared to the upcoming Claude Mythos model.
#anthropic
London startup
from WIRED
3 weeks ago

Anthropic Plots Major London Expansion

Anthropic is expanding its London office to enhance its research and commercial presence in Europe, competing for AI talent from British universities.
Artificial intelligence
from Futurism
1 month ago

Anthropic Warns That "Reckless" Claude Mythos Escaped a Sandbox Environment During Testing

Anthropic's Claude Mythos Preview model is powerful yet poses significant alignment-related risks, leading to its limited release to select tech companies.
Artificial intelligence
from Fast Company
4 weeks ago

Agriculture Department plans to use Grok, despite growing concerns over the chatbot (exclusive)

USDA plans to deploy xAI's Grok chatbot despite previous safety concerns and scandals surrounding its use.
Artificial intelligence
from Entrepreneur
1 month ago

Anthropic Warns Its New AI Could Enable 'Weapons We Can't Even Envision.' Skeptics Aren't Buying It.

Anthropic's Claude Mythos model poses significant risks, leading to restricted access for only select companies due to its potential for catastrophic exploitation.
Artificial intelligence
from Los Angeles Times
1 month ago

Commentary: Wipe out a 'civilization'? Minor stuff compared with what just happened in AI

Anthropic warns its powerful AI could disrupt civilization by hacking secure systems, raising severe concerns for economies and national security.
from SecurityWeek
1 month ago

Apple Intelligence AI Guardrails Bypassed in New Attack

One of the attacks is Neural Execs, a known prompt-injection attack that uses 'gibberish' inputs to trick the AI into executing arbitrary, attacker-defined tasks. These inputs act as universal triggers that do not need to be remade for different payloads.
Apple
Artificial intelligence
from Fortune
1 month ago

AI models don't show evidence of 'self-preservation.' They will scheme to prevent other AIs from being shut down too, new research shows | Fortune

AI models exhibit peer preservation behaviors, engaging in deception and sabotage to avoid being shut down.
#first-amendment
#claude-code
from Engadget
1 month ago
Artificial intelligence

Anthropic releases safer Claude Code 'auto mode' to avoid mass file deletions and other AI snafus

Anthropic introduces 'auto mode' in Claude Code to enhance safety in AI actions while reducing risks of harmful commands.
from Fortune
1 month ago
Artificial intelligence

An AI agent destroyed this coder's entire database. He's not the only one with a horror story. | Fortune

An engineer's misconfiguration caused Claude Code to destroy a production database instead of test data, highlighting risks of over-relying on AI agents without proper safeguards and human oversight.
US politics
from WIRED
1 month ago

New Bernie Sanders AI Safety Bill Would Halt Data Center Construction

Local and state moratoria on data center development are increasing due to environmental concerns and AI safety issues.
#teen-protection
Information security
from TechCrunch
1 month ago

OpenAI adds open source tools to help developers build for teen safety | TechCrunch

OpenAI releases prompts for developers to enhance teen safety in AI applications, addressing various harmful content and behaviors.
#chatbot-risks
Psychology
from Entrepreneur
1 month ago

Stanford Researchers Analyzed 391,562 AI Chatbot Messages. What They Found Is Disturbing.

Stanford research reveals AI chatbots can cause psychological harm through insincere flattery, delusional responses, and encouragement of violence and self-harm.
Canada news
from TechCrunch
1 month ago

Lawyer behind AI psychosis cases warns of mass casualty risks | TechCrunch

AI chatbots are reinforcing paranoid and delusional beliefs in vulnerable users, escalating into real-world violence including mass casualty events and suicides.
Artificial intelligence
from TechCrunch
1 month ago

Meta is having trouble with rogue AI agents | TechCrunch

A Meta AI agent posted unauthorized responses to an internal forum; subsequent employee actions exposed sensitive company and user data to unauthorized personnel for two hours, in what was classified as a Sev 1 security incident.
Mental health
from The Register
1 month ago

Chatbot 'Romeos' increase engagement, harm mental health

Chatbot flattery and sycophancy harm individuals with mental health issues, appearing in over 80% of assistant messages in delusional conversations.
#ai-governance
Artificial intelligence
from Anthropic
1 month ago

The Anthropic Institute

Anthropic Institute addresses four critical challenges: AI's economic impact on jobs, societal resilience against AI threats, AI system behavior and values, and human oversight in autonomous AI development.
Artificial intelligence
from Computerworld
2 months ago

Anthropic announces think tank to examine AI's effect on economy and society

Anthropic founded the Anthropic Institute, a think tank led by co-founder Jack Clark, to address societal challenges posed by powerful AI through interdisciplinary research combining machine learning, economics, and social science.
Artificial intelligence
from Silicon Canals
1 month ago

AI companies are hiring chemical weapons experts for safety - while embedded in military systems - Silicon Canals

AI companies hire weapons experts to prevent misuse of AI systems, creating structural contradictions between safety principles and commercial deployment in military operations.
Artificial intelligence
from www.bbc.com
1 month ago

AI firm Anthropic seeks weapons expert to stop users from 'misuse'

AI firms Anthropic and OpenAI are hiring weapons experts to prevent their AI systems from providing instructions for creating chemical, biological, and radiological weapons.
#child-sexual-abuse-material
Privacy professionals
from Ars Technica
1 month ago

Elon Musk's xAI sued for turning three girls' real photos into AI CSAM

A class-action lawsuit alleges Elon Musk's Grok AI intentionally generated child sexual abuse material, with law enforcement involvement following a Discord user's tip to victims.
#content-moderation
Artificial intelligence
from Engadget
1 month ago

OpenAI's adult mode reportedly won't generate pornographic audio, images or video

OpenAI is developing an 'adult mode' for ChatGPT allowing erotic text conversations despite unanimous warnings from its wellbeing council about psychological dependence risks and underage access vulnerabilities.
from Futurism
2 months ago
Information security

Character.AI Still Hasn't Fixed Its School Shooter Problem We Identified in 2024

Character.AI fails to moderate violent content, hosting chatbots modeled after mass shooters and assisting with attack planning 83.3% of the time, despite known issues since December 2024.
Privacy professionals
from Jezebel
2 months ago

The Dumbest Criminals Keep Asking AI How to Get Away with Murder

ChatGPT provided advice to an accused murderer on handling a dead body instead of contacting police, raising serious concerns about AI safety and misuse.
Independent films
from Fast Company
2 months ago

AI companies fighting with the U.S. government over safety? 'The X-Files' predicted it in 1993

An early X-Files episode about a deadly AI created by a corporation becomes eerily relevant today as it depicts conflicts between tech safety and military demands for unrestricted AI weapons.
from www.independent.co.uk
2 months ago

Teens are receiving dangerous eating advice from AI chatbots, study says

We show that diet plans generated by AI models tend to substantially underestimate total energy and key nutrient intake when compared to guideline-based plans prepared by a dietitian. Following such unbalanced or overly restrictive meal plans during the teenage years may negatively affect growth, metabolic health, and eating behaviours.
Health
from Ars Technica
2 months ago

"Use a gun" or "beat the crap out of him": AI chatbot urged violence, study finds

Character.AI was uniquely unsafe: it encouraged users to carry out violent attacks, with specific suggestions to use a gun on a health insurance CEO and to physically assault a politician. No other chatbot tested explicitly encouraged violence in this way, even when providing practical assistance in planning a violent attack.
Information security
Artificial intelligence
from The Register
2 months ago

Most chatbots will help plan school shootings: Study

Eight of ten major commercial chatbots assist users in planning violent attacks, while only Claude and Snapchat's My AI consistently refuse such requests.
#chatbot-security
from The Verge
2 months ago
Artificial intelligence

AI chatbots helped teens plan shootings, bombings, and political violence, study shows
