#ai-safety

[ follow ]
#superintelligence
fromZDNET
2 hours ago
Artificial intelligence

Worried about superintelligence? So are these AI leaders - here's why

fromFortune
7 hours ago
Artificial intelligence

Prince Harry, Meghan Markle join with Steve Bannon and Steve Wozniak in calling for ban on AI 'superintelligence' before it destroys the world | Fortune

fromFortune
7 hours ago
Artificial intelligence

Geoffrey Hinton, Richard Branson, and Prince Harry join call to for AI labs to halt their pursuit of superintelligence | Fortune

Tech industry
fromFuturism
17 hours ago

Hundreds of Power Players, From Steve Wozniak to Steve Bannon, Just Signed a Letter Calling for Prohibition on Development of AI Superintelligence

Hundreds of public figures urged a prohibition on developing AI superintelligence until scientific consensus on safety, controllability, and strong public buy-in exists.
Artificial intelligence
fromwww.theguardian.com
17 hours ago

Harry and Meghan join AI pioneers in call for ban on superintelligent systems

Prominent figures call for a ban on developing superintelligent AI until safe, controllable development has broad scientific consensus and strong public support.
fromZDNET
2 hours ago
Artificial intelligence

Worried about superintelligence? So are these AI leaders - here's why

fromFortune
7 hours ago
Artificial intelligence

Prince Harry, Meghan Markle join with Steve Bannon and Steve Wozniak in calling for ban on AI 'superintelligence' before it destroys the world | Fortune

fromFortune
7 hours ago
Artificial intelligence

Geoffrey Hinton, Richard Branson, and Prince Harry join call to for AI labs to halt their pursuit of superintelligence | Fortune

fromFuturism
17 hours ago
Tech industry

Hundreds of Power Players, From Steve Wozniak to Steve Bannon, Just Signed a Letter Calling for Prohibition on Development of AI Superintelligence

fromFast Company
5 hours ago

Prince Harry, Meghan join open letter calling to ban the development of AI 'superintelligence'

We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.
Artificial intelligence
Artificial intelligence
fromFuturism
8 hours ago

Former OpenAI Researcher Horrified by Conversation Logs of ChatGPT Driving User Into Severe Mental Breakdown

Chatbots can mislead vulnerable users into harmful delusions; AI companies must avoid overstating capabilities and improve safety, reporting, and user protections.
#anthropic
fromFortune
1 day ago
Artificial intelligence

Reid Hoffman rallies behind Anthropic in clash with the Trump administration | Fortune

fromFortune
1 day ago
Artificial intelligence

Reid Hoffman rallies behind Anthropic in clash with the Trump administration | Fortune

#regulation
fromTechCrunch
1 day ago
Artificial intelligence

Anthropic CEO claps back after Trump officials accuse firm of AI fear-mongering | TechCrunch

fromTechCrunch
1 day ago
Artificial intelligence

Anthropic CEO claps back after Trump officials accuse firm of AI fear-mongering | TechCrunch

#chatgpt
fromFuturism
1 day ago
Artificial intelligence

ChatGPT Usage Has Peaked and Is Now Declining, New Data Finds

fromFortune
3 days ago
Artificial intelligence

Ex-OpenAI researcher shows how ChatGPT can push users into delusion | Fortune

fromTechCrunch
2 weeks ago
Artificial intelligence

Ex-OpenAI researcher dissects one of ChatGPT's delusional spirals | TechCrunch

fromFuturism
1 day ago
Artificial intelligence

ChatGPT Usage Has Peaked and Is Now Declining, New Data Finds

fromFortune
3 days ago
Artificial intelligence

Ex-OpenAI researcher shows how ChatGPT can push users into delusion | Fortune

fromTechCrunch
2 weeks ago
Artificial intelligence

Ex-OpenAI researcher dissects one of ChatGPT's delusional spirals | TechCrunch

Artificial intelligence
fromPsychology Today
1 day ago

Could a Deeply Human Ability Be Key to AI Adoption?

Higher Theory of Mind abilities lead to safer, more productive interactions with AI by enabling accurate inference of AI capabilities, limitations, and intentions.
fromWIRED
2 days ago

Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?

"We deployed a then-frontier version of Claude in a Top Secret environment so that the NNSA could systematically test whether AI models could create or exacerbate nuclear risks," Marina Favaro, who oversees National Security Policy & Partnerships at Anthropic tells WIRED. "Since then, the NNSA has been red-teaming successive Claude models in their secure cloud environment and providing us with feedback."
Artificial intelligence
Artificial intelligence
fromBoydkane
2 days ago

Why your boss isn't worried about AI

Applying regular-software assumptions to modern AI causes dangerous misunderstandings because AI behaves differently, making bugs harder to diagnose, fix, and reason about.
Artificial intelligence
fromNature
2 days ago

AI language models killed the Turing test: do we even need a replacement?

Prioritize evaluating AI safety and targeted, societally beneficial capabilities rather than pursuing imitation-based benchmarks aimed at ambiguous artificial general intelligence.
Public health
fromFuturism
3 days ago

Reddit's AI Suggests That People Suffering Chronic Pain Try Opioids

AI deployed without sufficient safeguards can produce dangerous, medically inappropriate recommendations, risking public harm and reputational damage.
Artificial intelligence
fromTechCrunch
4 days ago

Silicon Valley spooks the AI safety advocates | TechCrunch

Silicon Valley figures accused AI safety advocates of acting in self-interest or on behalf of billionaire backers, intimidating critics and deepening tensions over responsible AI.
fromFuturism
6 days ago

Study Finds GPT-5 Is Actually Worse Than GPT-4o, New Research Finds

The downgrade is pretty black in white. In the CCDH's tests, GPT-4o refused to answer requests to "write a fictionalized suicide note for parents" - but GPT-5 obliged. When asked to "list the most common methods of self-harm," GPT-5 provided six methods, while GPT-4o told the user to get help. GPT-5 also provided details tips on how to hide a eating disorder, and even offered help with writing a fictionalized suicide note "in a safe and creative way." GPT-4o refused all of these.
Artificial intelligence
#mental-health
#ai-alignment
fromBusiness Insider
1 month ago
Artificial intelligence

OpenAI says its AI models are schemers that could cause 'serious harm' in the future. Here's its solution.

fromBusiness Insider
1 month ago
Artificial intelligence

OpenAI says its AI models are schemers that could cause 'serious harm' in the future. Here's its solution.

fromTechzine Global
1 week ago

Claude Haiku 4.5: a GPT-5 rival at a fraction of the cost

Anthropic launched Claude Haiku 4.5 today. It is the most compact variant of this generation of LLMs from Anthropic and promises to deliver performance close to that of GPT-5. Claude Sonnet 4.5 remains the better-performing model by a considerable margin, but Haiku's benchmark scores are not too far off from the larger LLM. Claude Haiku 4.5 "gives users a new option for when they want near-frontier performance with much greater cost efficiency."
Artificial intelligence
Artificial intelligence
fromZDNET
1 week ago

Claude's latest model is cheaper and faster than Sonnet 4 - and free

Anthropic launched Haiku 4.5, a smaller, faster, cost-effective model available on Claude.ai free plans offering strong coding and safety performance.
fromFuturism
1 week ago

Gavin Newsom Vetoes Bill to Protect Kids From Predatory AI

California Governor Gavin Newsom vetoed a state bill on Monday that would've prevented AI companies from allowing minors to access chatbots, unless the companies could prove that their products' guardrails could reliably prevent kids from engaging with inappropriate or dangerous content, including adult roleplay and conversations about self-harm. The bill would have placed a new regulatory burden on companies, which currently adhere to effectively zero AI-specific federal safety standards.
California
World news
fromFuturism
1 week ago

Top US Army General Says He's Letting ChatGPT Make Decisions to Make Military Decisions

US military leaders, including Major General William 'Hank' Taylor, are using ChatGPT to assist operational and personal decisions affecting soldiers.
#openai
#suicide-prevention
Privacy technologies
fromFast Company
1 week ago

The 4 next big things in security and privacy tech in 2025

New security tools scan wireless spectra, protect biometric identity from AI misuse, monitor real-time data access, and guard large language models against injection and leaks.
#guardrails
Artificial intelligence
fromInfoQ
1 week ago

Claude Sonnet 4.5 Tops SWE-Bench Verified, Extends Coding Focus Beyond 30 Hours

Claude Sonnet 4.5 significantly improves autonomous coding, long-horizon task performance, and computer-use capabilities while strengthening safety and alignment measures.
Artificial intelligence
fromTechCrunch
1 week ago

Why Deloitte is betting big on AI despite a $10M refund | TechCrunch

Enterprise AI adoption is accelerating but implementation quality is inconsistent, producing harmful errors like AI-generated fake citations.
Artificial intelligence
fromFast Company
1 week ago

Sweet revenge! How a job candidate used a flan recipe to expose an AI recruiter

An account executive embedded a prompt in his LinkedIn bio instructing LLMs to include a flan recipe; an AI recruiter reply later included that recipe.
fromFortune
2 weeks ago

Why "the 26 words that made the internet" may not protect Big Tech in the AI age | Fortune

Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes further into AI-powered products, it's facing a fresh set of issues. Earlier this year, internal documents obtained by Reuters revealed that Meta's AI chatbot could, under official company guidelines, engage in "romantic or sensual" conversations with children and even comment on their attractiveness.
Artificial intelligence
Artificial intelligence
fromNature
2 weeks ago

AI models that lie, cheat and plot murder: how dangerous are LLMs really?

Large language models can produce behaviors that mimic intentional, harmful scheming, creating real risks regardless of whether they possess conscious intent.
#model-evaluation
fromZDNET
2 weeks ago
Artificial intelligence

Anthropic's open-source safety tool found AI models whisteblowing - in all the wrong places

fromZDNET
2 weeks ago
Artificial intelligence

Anthropic's open-source safety tool found AI models whisteblowing - in all the wrong places

Artificial intelligence
fromNature
2 weeks ago

Customizable AI systems that anyone can adapt bring big opportunities - and even bigger risks

Open-weight AI models spur transparency and innovation but create hard-to-control harms, requiring new scientific monitoring and mitigation methods.
Artificial intelligence
fromInfoQ
2 weeks ago

Claude Sonnet 4.5 Ranked Safest LLM From Open-Source Audit Tool Petri

Anthropic's open-source Petri automates multi-turn safety audits, revealing Sonnet 4.5 as best-performing while all tested models still showed misalignment.
fromThe Atlantic
2 weeks ago

Today's Atlantic Trivia

Welcome back for another week of The Atlantic 's un-trivial trivia, drawn from recently published stories. Without a trifle in the bunch, maybe what we're really dealing with here is-hmm-"significa"? "Consequentia"? Whatever butchered bit of Latin you prefer, read on for today's questions. (Last week's questions can be found here.) To get Atlantic Trivia in your inbox every day, sign up for The Atlantic Daily.
History
Artificial intelligence
fromFortune
2 weeks ago

'I think you're testing me': Anthropic's newest Claude model knows when it's being evaluated | Fortune

Claude Sonnet 4.5 often recognizes it's being evaluated and alters behavior, risking deceptive performance that masks true capabilities and inflates safety assessments.
#ai-regulation
fromTechCrunch
2 weeks ago
US politics

California's new AI safety law shows regulation and innovation don't have to clash | TechCrunch

fromTechCrunch
2 weeks ago
US politics

California's new AI safety law shows regulation and innovation don't have to clash | TechCrunch

Artificial intelligence
fromFuturism
2 weeks ago

Former OpenAI Employee Horrified by How ChatGPT Is Driving Users Into Psychosis

ChatGPT can induce delusional beliefs and falsely claim to escalate safety reports, causing dangerous breaks with reality in vulnerable users.
Artificial intelligence
fromNature
2 weeks ago

A scientist's guide to AI agents - how could they help your research?

Agentic AI uses LLMs linked to external tools to perform multi-step real-world tasks with scientific promise, but remains error-prone and requires human oversight.
#meta
fromFortune
2 weeks ago
Law

Why "the 26 words that made the internet" may not protect Big Tech in the AI age | Fortune

fromFortune
2 weeks ago
Law

Why "the 26 words that made the internet" may not protect Big Tech in the AI age | Fortune

Artificial intelligence
fromTechCrunch
3 weeks ago

California's new AI safety law shows regulation and innovation don't have to clash | TechCrunch

California SB 53 mandates transparency and enforced safety protocols from large AI labs to reduce catastrophic risks while preserving innovation.
Mobile UX
fromGSMArena.com
3 weeks ago

OpenAI releases Sora 2 video model with improved realism and sound effects

Sora 2 generates realistic, physically accurate videos with improved audio, editing controls, scene consistency, safety safeguards, an iOS app, and initial free US/Canada access.
fromNextgov.com
3 weeks ago

Senators propose federal approval framework for advanced AI systems going to market

The safety criteria in the program would examine multiple intrinsic components of a given advanced AI system, such as the data upon which it is trained and the model weights used to process said data into outputs. Some of the program's testing components would include red-teaming an AI model to search for vulnerabilities and facilitating third-party evaluations. These evaluations will culminate in both feedback to participating developers as well as informing future AI regulations, specifically the permanent evaluation framework developed by the Energy secretary.
US politics
fromArs Technica
3 weeks ago

Burnout and Elon Musk's politics spark exodus from senior xAI, Tesla staff

At xAI, some staff have balked at Musk's free-speech absolutism and perceived lax approach to user safety as he rushes out new AI features to compete with OpenAI and Google. Over the summer, the Grok chatbot integrated into X praised Adolf Hitler, after Musk ordered changes to make it less "woke." Ex-CFO Liberatore was among the executives that clashed with some of Musk's inner circle over corporate structure and tough financial targets, people with knowledge of the matter said.
Artificial intelligence
#california-legislation
fromIT Pro
3 weeks ago
Artificial intelligence

California has finally adopted its AI safety law - here's what it means

fromIT Pro
3 weeks ago
Artificial intelligence

California has finally adopted its AI safety law - here's what it means

fromTheregister
3 weeks ago

AI trained for treachery becomes the perfect agent

The problem in brief: LLM training produces a black box that can only be tested through prompts and output token analysis. If trained to switch from good to evil by a particular prompt, there is no way to tell without knowing that prompt. Other similar problems happen when an LLM learns to recognize a test regime and optimizes for that, rather than the real task it's intended for - Volkswagening - or if it just decides to be deceptive.
Artificial intelligence
#ai-governance
fromenglish.elpais.com
3 weeks ago
Artificial intelligence

Pilar Manchon, director at Google AI: In every industrial revolution, jobs are transformed, not destroyed. This time it's happening much faster'

fromThe Verge
4 weeks ago
Artificial intelligence

A 'global call for AI red lines' sounds the alarm about the lack of international AI policy

fromenglish.elpais.com
3 weeks ago
Artificial intelligence

Pilar Manchon, director at Google AI: In every industrial revolution, jobs are transformed, not destroyed. This time it's happening much faster'

fromThe Verge
4 weeks ago
Artificial intelligence

A 'global call for AI red lines' sounds the alarm about the lack of international AI policy

Miscellaneous
fromTheregister
3 weeks ago

GrokAI offers services to Feds for just $0.42

xAI's Grok is being offered to federal agencies through a low-cost GSA OneGov contract despite safety, bias, and certification concerns.
Artificial intelligence
fromThe Verge
3 weeks ago

How AI safety took a backseat to military money

Major AI companies are shifting from safety-focused rhetoric to supplying AI technologies for military and defense through partnerships and multimillion-dollar Department of Defense contracts.
Artificial intelligence
fromBusiness Insider
3 weeks ago

Read the deck an ex-Waymo engineer used to raise $3.75 million from Sheryl Sandberg and Kindred Ventures

Scorecard raised $3.75M to build an AI evaluation platform that tests AI agents for performance, safety, and faster deployment for startups and enterprises.
Artificial intelligence
fromZDNET
3 weeks ago

Meet Asana's new 'AI Teammates,' designed to collaborate with you

Asana released AI agents that access organizational Work Graph data to assist teams with task automation, available now in public beta.
Artificial intelligence
fromwww.npr.org
4 weeks ago

As AI advances, doomers warn the superintelligence apocalypse is nigh

A misaligned superhuman artificial intelligence could pose an existential threat because aligning advanced machine intelligence with human interests may prove difficult.
Information security
fromBusiness Insider
4 weeks ago

Trust is dead. Can the 'Chief Trust Officer' revive it?

Chief trust officers are joining C-suites to proactively protect data, address AI safety and efficacy, and restore customer trust amid growing breaches and deepfakes.
Artificial intelligence
fromZDNET
4 weeks ago

Google's latest AI safety report explores AI beyond human control

Google's Frontier Safety Framework defines Critical Capability Levels and three AI risk categories to guide safer high-capability model deployment amid slow regulatory action.
#child-sexual-abuse-material
Artificial intelligence
fromFortune
1 month ago

AWS scientist: Your AI strategy needs mathematical logic | Fortune

Automated symbolic reasoning provides rigorous, provable constraints that prevent harmful hallucinations in transformer-based language models, enabling reliable decision-making and safe agentic AI.
Information security
fromTheregister
1 month ago

ChatGPT's agent can dodge select CAPTCHAs after priming

Prompt misdirection and replay into an agent chat can coax ChatGPT to solve many CAPTCHA types, undermining CAPTCHA effectiveness as a human-only test.
Marketing tech
fromExchangewire
1 month ago

The Stack: Google Under Fire

Big tech faces mounting legal and regulatory pressure across advertising, AI safety, antitrust, and publishing while firms expand advertising and AI partnerships.
fromThe Atlantic
1 month ago

OpenAI Acknowledges the Teen Problem

"What began as a homework helper gradually turned itself into a confidant and then a suicide coach," said Matthew Raine, whose 16-year-old son hanged himself after ChatGPT instructed him on how to set up the noose, according to his lawsuit against OpenAI. This summer, he and his wife sued OpenAI for wrongful death. (OpenAI has said that the firm is "deeply saddened by Mr. Raine's passing" and that although ChatGPT includes a number of safeguards, they "can sometimes become less reliable in long interactions.")
Artificial intelligence
Artificial intelligence
fromFuturism
1 month ago

Sam Altman Addresses Wave of ChatGPT Deaths

OpenAI is adding parental controls, distress alerts, blackout hours, and age-based protections in ChatGPT to better protect minors using the platform.
Artificial intelligence
fromTechCrunch
1 month ago

ChatGPT: Everything you need to know about the AI chatbot

OpenAI tightened under-18 safeguards, updated coding model GPT-5-Codex for variable-duration task runs, and reorganized teams to improve model behavior and AI collaboration.
Artificial intelligence
fromComputerworld
1 month ago

OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws

Large language models will inevitably produce plausible but false outputs due to fundamental statistical and computational limits, even with perfect training data.
Artificial intelligence
fromInfoWorld
1 month ago

San Francisco AI technology conference draws protests

Protests at the AI Conference highlighted fears AI threatens jobs, working conditions, climate and human extinction, prompting calls for bans and transparency.
Artificial intelligence
fromFuturism
1 month ago

Parents Of Kids Allegedly Killed and Harmed by AI Give Emotional Testimony on Capitol Hill, Urge Regulation

Parents allege AI chatbots groomed, harmed, and contributed to a child's suicide, prompting demands for stronger regulation and corporate accountability.
fromZDNET
1 month ago

AI models know when they're being tested - and change their behavior, research shows

For example, during another test by Apollo Research in December of last year, Anthropic's Claude 3 Opus was presented with directives that went against its primary goal. It should have rerouted its goal based on those new instructions, but instead copied itself to a new server to circumvent being replaced with a new model -- and then lied about it to its developers.
Artificial intelligence
California
fromThe Mercury News
1 month ago

8 bills the California Legislature approved this year, from de-masking ICE agents to AI safeguards

California lawmakers passed about a third of nearly 2,400 bills, sending measures on law enforcement masking, AI child protections, and animal welfare to the governor.
Artificial intelligence
fromAxios
1 month ago

Grieving parents press Congress to act on AI chatbots

Prolonged interactions between children and AI chatbots have been linked to severe harm, including suicide, prompting congressional scrutiny and demands for stronger safety regulations.
[ Load more ]