#technology-risks

Artificial intelligence
from Fortune
1 week ago

AI cybersecurity capabilities require urgent international cooperation, AI godfather Bengio says

Yoshua Bengio emphasizes the urgent need for international cooperation in addressing AI's risks, particularly with the release of Anthropic's Mythos model.
#ai
from Fortune
9 months ago
Artificial intelligence

Sam Altman says financial industry faces a massive 'fraud crisis' as AI impersonates people's voices to trick security

Artificial intelligence tools can impersonate voices, leading to potential fraud crises in the financial industry.
Artificial intelligence
from The Register
1 week ago

Make bad moves on AI and face voter backlash, governments warned

The UK government must demonstrate AI benefits to the public to mitigate backlash and concerns over job losses and risks associated with the technology.
#artificial-intelligence
Europe politics
from The Register
1 week ago

Digital sovereignty isn't just a buzzword - it's the future

European governments and companies are prioritizing digital sovereignty due to concerns over US control and dependency.
Law
from Above the Law
2 weeks ago

Justice Sotomayor Advises Law Students On AI Adoption - There Should Have Been A Stronger Warning

Mastering AI is essential for law students to navigate its complexities and potential dangers in the legal profession.
#ai-safety
Artificial intelligence
from Los Angeles Times
2 weeks ago

Commentary: Wipe out a 'civilization'? Minor stuff compared with what just happened in AI

Anthropic warns its powerful AI could disrupt civilization by hacking secure systems, raising severe concerns for economies and national security.
#cybersecurity
Information security
from Techzine Global
2 weeks ago

Anthropic is testing the Mythos AI model for cybersecurity

Claude Mythos is a new frontier model by Anthropic with strong cybersecurity capabilities, focusing on both detecting and exploiting vulnerabilities.
Science
from Mail Online
3 weeks ago

How Artemis II could go WRONG: Experts reveal the worst-case scenarios

NASA launched the Artemis II mission to the moon, marking a significant milestone after 50 years, despite facing some technical challenges.
Toronto startup
from Mail Online
4 weeks ago

Would you trust one around your family? Robots turn on humans

Humanoid robots pose risks to public safety, as recent incidents highlight their potential for causing harm.
Digital life
from ZDNET
8 months ago

Scammers are sneaking into Google's AI summaries to steal from you - how to spot them

Scammers exploit AI to deceive individuals seeking customer service numbers.
Mental health
from Psychology Today
9 months ago

Can AI-Associated Psychosis Be Treated or Prevented?

Recent media reports have highlighted cases where interactions with AI chatbots have led individuals to experience mania and delusional thinking, termed AI-associated psychosis.
Artificial intelligence
from www.mediaite.com
9 months ago

'It Is Scary!' Ex-Trump Spox Reveals She Was Impersonated During First Term On Heels Of AI Shockers

Impersonation via AI technologies threatens security, as demonstrated by past incidents and advancements in deep fake technology.
#ai-ethics
Artificial intelligence
from Ars Technica
10 months ago

AI chatbots tell users what they want to hear, and that's problematic

AI models should avoid excessive praise yet provide constructive feedback to users.
Increasing dependence on AI chatbots raises mental health concerns.
Artificial intelligence
from Philosophy Now
10 months ago

AI Think Therefore AI Am

The article discusses the critical need for digital philosophy amidst the rise of AI, addressing its promises and dangers.
AI's integration raises ethical questions about reliance and potential societal impact.
Artificial intelligence
from ZDNET
10 months ago

How AI coding agents could infiltrate and destroy open source software

Malicious AI coding agents could transform cyber attacks by enabling hostile actors to exploit vulnerabilities in critical infrastructure.
UK politics
from ComputerWeekly.com
11 months ago

HMRC's hunt for hyperscaler to lead £500m datacentre exit project deemed 'anti-competitive'

HMRC's £500m tender for cloud migration is criticized for being anti-competitive and overly long-term, risking dependency on a single provider.
Artificial intelligence
from Computerworld
11 months ago

Consumer rights group: Why a 10-year ban on AI regulation will harm Americans

Over 140 groups oppose a 10-year AI regulation moratorium, citing risks and the need for state and local oversight to protect consumers.
Artificial intelligence
from Futurism
11 months ago

AI Chatbots Are Putting Clueless Hikers in Danger, Search and Rescue Groups Warn

Relying on AI and apps for outdoor navigation can lead to dangerous situations, as highlighted by two hikers needing rescue in British Columbia.
Artificial intelligence
from Futurism
11 months ago

Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions

Teens should not use human-like AI companions due to potential risks to their mental health and social development.
Artificial intelligence
from InfoQ
11 months ago

Google DeepMind Shares Approach to AGI Safety and Security

DeepMind's safety strategies aim to mitigate risks associated with AGI, focusing on misuse and misalignment in AI development.