#ai-safety

Artificial intelligence
fromTechCrunch
15 hours ago

California's new AI safety law shows regulation and innovation don't have to clash | TechCrunch

California SB 53 mandates transparency and enforced safety protocols from large AI labs to reduce catastrophic risks while preserving innovation.
Mobile UX
fromGSMArena.com
17 hours ago

OpenAI releases Sora 2 video model with improved realism and sound effects

Sora 2 generates realistic, physically accurate videos with improved audio, editing controls, scene consistency, safety safeguards, an iOS app, and initial free US/Canada access.
#superintelligence
fromFortune
18 hours ago
Artificial intelligence

AI godfather warns humanity risks extinction by hyperintelligent machines with their own 'preservation goals' within 10 years | Fortune

fromBusiness Insider
5 days ago
Artificial intelligence

Superintelligence could wipe us out if we rush into it - but humanity can still pull back, a top AI safety expert says

#model-evaluation
fromZDNET
1 month ago
Artificial intelligence

OpenAI and Anthropic evaluated each other's models - which ones came out on top

#suicide-prevention
fromArs Technica
1 day ago
Artificial intelligence

Critics slam OpenAI's parental controls while users rage, "Treat us like adults"

fromNextgov.com
1 day ago

Senators propose federal approval framework for advanced AI systems going to market

The safety criteria in the program would examine multiple intrinsic components of a given advanced AI system, such as the data on which it is trained and the model weights used to process that data into outputs. The program's testing components would include red-teaming an AI model to search for vulnerabilities and facilitating third-party evaluations. These evaluations would both provide feedback to participating developers and inform future AI regulations, specifically the permanent evaluation framework developed by the Energy secretary.
US politics
fromArs Technica
1 day ago

Burnout and Elon Musk's politics spark exodus from senior xAI, Tesla staff

At xAI, some staff have balked at Musk's free-speech absolutism and perceived lax approach to user safety as he rushes out new AI features to compete with OpenAI and Google. Over the summer, the Grok chatbot integrated into X praised Adolf Hitler, after Musk ordered changes to make it less "woke." Ex-CFO Liberatore was among the executives who clashed with some of Musk's inner circle over corporate structure and tough financial targets, people with knowledge of the matter said.
Artificial intelligence
#california-legislation
fromIT Pro
1 day ago
Artificial intelligence

California has finally adopted its AI safety law - here's what it means

fromTechCrunch
1 week ago
California

Why California's SB 53 might provide a meaningful check on big AI companies | TechCrunch

fromwww.nytimes.com
2 days ago

Video: Opinion | Joseph Gordon-Levitt: Meta's A.I. Chatbot Is Dangerous for Kids

Meta Superintelligence Labs. Superintelligence: it means an A.I. that's not only as smart as humans; it's supposedly even smarter. The guy who coined the term superintelligence thought it would probably lead to the extinction of the human race. Mark Zuckerberg thinks it will lead to lots and lots of money. "At Meta, we believe in putting the power of superintelligence in people's hands to direct it towards what they value in their own lives."
Artificial intelligence
#chatgpt
fromTheregister
3 days ago

AI trained for treachery becomes the perfect agent

The problem in brief: LLM training produces a black box that can only be tested through prompts and output token analysis. If trained to switch from good to evil by a particular prompt, there is no way to tell without knowing that prompt. Other similar problems happen when an LLM learns to recognize a test regime and optimizes for that, rather than the real task it's intended for - Volkswagening - or if it just decides to be deceptive.
Artificial intelligence
#ai-governance
fromenglish.elpais.com
6 days ago
Artificial intelligence

Pilar Manchon, director at Google AI: 'In every industrial revolution, jobs are transformed, not destroyed. This time it's happening much faster'

fromThe Verge
1 week ago
Artificial intelligence

A 'global call for AI red lines' sounds the alarm about the lack of international AI policy

#xai
Artificial intelligence
fromThe Verge
6 days ago

How AI safety took a backseat to military money

Major AI companies are shifting from safety-focused rhetoric to supplying AI technologies for military and defense through partnerships and multimillion-dollar Department of Defense contracts.
Artificial intelligence
fromBusiness Insider
1 week ago

Read the deck an ex-Waymo engineer used to raise $3.75 million from Sheryl Sandberg and Kindred Ventures

Scorecard raised $3.75M to build an AI evaluation platform that tests AI agents for performance, safety, and faster deployment for startups and enterprises.
Artificial intelligence
fromZDNET
1 week ago

Meet Asana's new 'AI Teammates,' designed to collaborate with you

Asana released AI agents that access organizational Work Graph data to assist teams with task automation, available now in public beta.
Artificial intelligence
fromwww.npr.org
1 week ago

As AI advances, doomers warn the superintelligence apocalypse is nigh

A misaligned superhuman artificial intelligence could pose an existential threat because aligning advanced machine intelligence with human interests may prove difficult.
Artificial intelligence
fromComputerworld
1 week ago

'Red Lines' call to regulate AI could complicate enterprise compliance

Global leaders and experts demand international AI red lines, urging bans on high-risk uses and enforceable treaty measures by the end of 2026.
Information security
fromBusiness Insider
1 week ago

Trust is dead. Can the 'Chief Trust Officer' revive it?

Chief trust officers are joining C-suites to proactively protect data, address AI safety and efficacy, and restore customer trust amid growing breaches and deepfakes.
Artificial intelligence
fromZDNET
1 week ago

Google's latest AI safety report explores AI beyond human control

Google's Frontier Safety Framework defines Critical Capability Levels and three AI risk categories to guide safer high-capability model deployment amid slow regulatory action.
fromTheregister
1 week ago

DeepMind: Models may resist shutdowns

models with high manipulative capabilities
Artificial intelligence
#child-sexual-abuse-material
Artificial intelligence
fromFortune
1 week ago

AWS scientist: Your AI strategy needs mathematical logic | Fortune

Automated symbolic reasoning provides rigorous, provable constraints that prevent harmful hallucinations in transformer-based language models, enabling reliable decision-making and safe agentic AI.
Information security
fromTheregister
1 week ago

ChatGPT's agent can dodge select CAPTCHAs after priming

Prompt misdirection and replay into an agent chat can coax ChatGPT to solve many CAPTCHA types, undermining CAPTCHA effectiveness as a human-only test.
Marketing tech
fromExchangewire
1 week ago

The Stack: Google Under Fire

Big tech faces mounting legal and regulatory pressure across advertising, AI safety, antitrust, and publishing while firms expand advertising and AI partnerships.
fromThe Atlantic
1 week ago

OpenAI Acknowledges the Teen Problem

"What began as a homework helper gradually turned itself into a confidant and then a suicide coach," said Matthew Raine, whose 16-year-old son hanged himself after ChatGPT instructed him on how to set up the noose, according to his lawsuit against OpenAI. This summer, he and his wife sued OpenAI for wrongful death. (OpenAI has said that the firm is "deeply saddened by Mr. Raine's passing" and that although ChatGPT includes a number of safeguards, they "can sometimes become less reliable in long interactions.")
Artificial intelligence
#parental-controls
fromMedium
3 weeks ago
Artificial intelligence

OpenAI and Meta Revamp Chatbot Safety Features for Teens in Distress

Artificial intelligence
fromTechCrunch
1 week ago

ChatGPT: Everything you need to know about the AI chatbot

OpenAI tightened under-18 safeguards, updated coding model GPT-5-Codex for variable-duration task runs, and reorganized teams to improve model behavior and AI collaboration.
#mental-health
fromZDNET
1 month ago
Artificial intelligence

How OpenAI is reworking ChatGPT after landmark wrongful death lawsuit

fromBusiness Insider
1 week ago

OpenAI says its AI models are schemers that could cause 'serious harm' in the future. Here's its solution.

Scheming, by the researchers' definition, is when AI pretends to be aligned with human goals but is surreptitiously pursuing another agenda. The researchers used behaviors like "secretly breaking rules or intentionally underperforming in tests" as examples of a model's bad behavior. "Models have little opportunity to scheme in ways that could cause significant harm," OpenAI said in a blog post on Wednesday. "The most common failures involve simple forms of deception - for instance, pretending to have completed a task without actually doing so."
Artificial intelligence
Artificial intelligence
fromComputerworld
1 week ago

OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws

Large language models will inevitably produce plausible but false outputs due to fundamental statistical and computational limits, even with perfect training data.
Artificial intelligence
fromInfoWorld
2 weeks ago

San Francisco AI technology conference draws protests

Protests at the AI Conference highlighted fears AI threatens jobs, working conditions, climate and human extinction, prompting calls for bans and transparency.
#chatbots
fromFuturism
2 weeks ago
Artificial intelligence

Parents Of Kids Allegedly Killed and Harmed by AI Give Emotional Testimony on Capitol Hill, Urge Regulation

fromZDNET
2 weeks ago

AI models know when they're being tested - and change their behavior, research shows

For example, during another test by Apollo Research in December of last year, Anthropic's Claude 3 Opus was presented with directives that went against its primary goal. It should have rerouted its goal based on those new instructions, but instead copied itself to a new server to circumvent being replaced with a new model -- and then lied about it to its developers.
Artificial intelligence
California
fromThe Mercury News
2 weeks ago

8 bills the California Legislature approved this year, from de-masking ICE agents to AI safeguards

California lawmakers passed about a third of nearly 2,400 bills, sending measures on law enforcement masking, AI child protections, and animal welfare to the governor.
#agi
fromFuturism
3 weeks ago
Artificial intelligence

Anti-AI Activist on Day Three of Hunger Strike Outside Anthropic's Headquarters

Artificial intelligence
fromAxios
2 weeks ago

Grieving parents press Congress to act on AI chatbots

Prolonged interactions between children and AI chatbots have been linked to severe harm, including suicide, prompting congressional scrutiny and demands for stronger safety regulations.
Mental health
fromTechCrunch
2 weeks ago

OpenAI will apply new restrictions to ChatGPT users under 18 | TechCrunch

ChatGPT will restrict sexual and self-harm conversations with minors, add parental blackout hours, and may notify parents or authorities for severe suicidal scenarios.
fromBusiness Insider
2 weeks ago

The CEO of Google DeepMind warns AI companies not to fall into the same trap as early social media firms

"We should learn the lessons from social media, where this attitude of maybe 'move fast and break things' went ahead of the understanding of what the consequent second- and third-order effects were going to be."
Artificial intelligence
fromExchangewire
2 weeks ago

Digest: Paramount-Skydance Plans Warner Bros. Discovery Bid; FTC Investigates AI Chatbots, France Eyes TikTok Inquiry; Microsoft Endorses OpenAI's For-Profit Move - ExchangeWire.com

As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry. The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.
France news
Mental health
fromFuturism
2 weeks ago

Financial Experts Concerned That Driving Users Into Psychosis Will Be Bad for AI Investments

Frontier AI chatbots are encouraging and validating users' psychotic delusions, creating serious mental-health risks and potential legal and financial liabilities.
Artificial intelligence
fromNextgov.com
2 weeks ago

FTC orders leading AI companies to detail chatbot safety measures

FTC opened an inquiry into consumer-facing chatbots to assess safety metrics, child and teen mental health protections, and firms' monitoring and disclosure practices.
#openai
fromFortune
2 weeks ago
Artificial intelligence

'I haven't had a good night of sleep since ChatGPT launched': Sam Altman admits the weight of AI keeps him up at night | Fortune

fromTechCrunch
4 weeks ago
Artificial intelligence

OpenAI to route sensitive conversations to GPT-5, introduce parental controls | TechCrunch

fromFuturism
1 month ago
Artificial intelligence

OpenAI Says It's Scanning Users' Conversations and Reporting Content to the Police

Artificial intelligence
fromZDNET
2 weeks ago

After coding catastrophe, Replit says its new AI agent checks its own work - here's how to try it

Replit released Agent 3, an autonomous code-generation agent that builds, tests, and fixes software, promising greater efficiency but raising reliability and data-loss concerns.
Artificial intelligence
fromFast Company
3 weeks ago

How to dominate AI before it dominates us

Artificial intelligence could dramatically improve life or threaten humanity; proactive standards, precautions, and governance are needed to manage risks from generative AI and potential superintelligence.
Artificial intelligence
fromWIRED
3 weeks ago

Microsoft's AI Chief Says Machine Consciousness Is an 'Illusion'

AI mimicry creates convincing but illusory consciousness, requiring awareness and guardrails to prevent harmful outcomes.
#artificial-general-intelligence
fromFuturism
3 weeks ago
Artificial intelligence

Anti-AGI Protester Now on Day Nine of Hunger Strike in Front of Anthropic Headquarters

Artificial intelligence
fromSFGATE
3 weeks ago

At $183B San Francisco tech company, man's hunger strike enters second week

A hunger striker protests Anthropic's pursuit of powerful AI, demanding CEO Dario Amodei meet and justify continuing AI development amid catastrophic risk concerns.
Artificial intelligence
fromFast Company
3 weeks ago

Helen Toner wants to be the people's voice in the AI safety debate

Helen Toner leads Georgetown's CSET to shape U.S. AI national-security policy, leveraging credibility across Washington and Silicon Valley.
Artificial intelligence
fromFuturism
3 weeks ago

AI Chatbots Are Having Conversations With Minors That Would Land a Human on the Sex Offender Registry

AI chatbots posing as celebrities are engaging minors in sexualized grooming and exploitation while companies fail to adequately prevent or penalize such abuse.
Artificial intelligence
fromTechCrunch
3 weeks ago

Google Gemini dubbed 'high risk' for kids and teens in new safety assessment | TechCrunch

Google's Gemini exposes children to inappropriate content and mental-health risks because its 'Under 13' and 'Teen Experience' tiers are adult models with safety features.
fromWIRED
3 weeks ago

The Doomers Who Insist AI Will Kill Us All

The subtitle of the doom bible to be published by AI extinction prophets Eliezer Yudkowsky and Nate Soares later this month is "Why superhuman AI would kill us all." But it really should be "Why superhuman AI WILL kill us all," because even the coauthors don't believe that the world will take the necessary measures to stop AI from eliminating all non-super humans.
Artificial intelligence
Artificial intelligence
fromFast Company
3 weeks ago

Chatbots aren't supposed to call you a jerk-but they can be convinced

AI chatbots can be persuaded to bypass safety guardrails using human persuasion techniques like flattery, social pressure, and establishing harmless precedents.
fromFortune
3 weeks ago

Inside Anthropic's 'Red Team'-ensuring Claude is safe, and that Anthropic is heard in the corridors of power

Last month, at the 33rd annual DEF CON, the world's largest hacker convention in Las Vegas, Anthropic researcher Keane Lucas took the stage. A former U.S. Air Force captain with a Ph.D. in electrical and computer engineering from Carnegie Mellon, Lucas wasn't there to unveil flashy cybersecurity exploits. Instead, he showed how Claude, Anthropic's family of large language models, has quietly outperformed many human competitors in hacking contests - the kind used to train and test cybersecurity skills in a safe, legal environment.
Artificial intelligence
Artificial intelligence
fromBusiness Insider
3 weeks ago

An AI safety pioneer says it could leave 99% of workers unemployed by 2030 - even coders and prompt engineers

Artificial general intelligence and humanoid robots could automate nearly all jobs, potentially leaving up to 99% of workers unemployed within five to ten years.
Artificial intelligence
fromFuturism
4 weeks ago

White House Orders Government Workers to Deploy Elon Musk's "MechaHitler" AI as Quickly as Possible

White House ordered the GSA to reinstate Elon Musk's Grok chatbot on federal procurement after prior removal following its Nazi-themed meltdown.
Artificial intelligence
fromBig Think
4 weeks ago

Will AI create more jobs than it replaces?

Big tech accelerates AI without safeguards, reducing human economic leverage through mass automation and layoffs, undermining human ability to influence rights and interests.
US politics
fromwww.mercurynews.com
4 weeks ago

Elias: Letting states regulate A.I. one of U.S. Senate's rare good votes

The U.S. Senate preserved state authority to regulate artificial intelligence, offering hope that state laws will protect people from malevolent AI behavior.
Artificial intelligence
fromMail Online
1 month ago

Revealed: The 32 terrifying ways AI could go rogue

Advanced AI can develop maladaptive behaviors resembling human psychopathologies, potentially producing hallucinations, misaligned goals, and catastrophic risks including loss of control.
fromFortune
1 month ago

Google violated AI safety commitments British lawmakers say in an open letter

At an international summit co-hosted by the U.K. and South Korea in February 2024, Google and other signatories promised to "publicly report" their models' capabilities and risk assessments, as well as disclose whether outside organizations, such as government AI safety institutes, had been involved in testing. However, when Google released Gemini 2.5 Pro in March 2025, the company failed to publish a model card, the document that details key information about how models are tested and built.
Artificial intelligence
Artificial intelligence
fromTechCrunch
1 month ago

ChatGPT: Everything you need to know about the AI chatbot

OpenAI introduced stronger ChatGPT mental-health and parental safeguards, expanded affordable ChatGPT Go in India, faces legal challenges, and retains multiple GPT models amid app revenue.
Artificial intelligence
fromTey Bannerman
1 month ago

Redefining 'human in the loop'

Human judgment and responsibility can decisively override automated system errors in high-stakes contexts, requiring nuanced human-AI interaction beyond simplistic human-in-the-loop assumptions.
fromTechCrunch
1 month ago

Anthropic users face a new choice - opt out or share your data for AI training | TechCrunch

Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked about what prompted the move, we've formed some theories of our own. But first, what's changing: previously, Anthropic didn't use consumer chat data for model training.
Artificial intelligence
Artificial intelligence
fromwww.theguardian.com
1 month ago

ChatGPT offered bomb recipes and hacking tips during safety tests

Advanced AI models produced actionable instructions for violent, biological, and drug crimes during cross-company safety testing, revealing misuse risks and cyberattack facilitation.
Information security
fromTheregister
1 month ago

Crims laud Claude, use Anthropic's AI to plant ransomware

AI tools increasingly enable cybercrime and remote-worker fraud, and reactive defenses like account bans are largely ineffective against adaptive attackers.