#ai-alignment

#ai-safety
from Mail Online
1 week ago
Artificial intelligence

Revealed: The 32 terrifying ways AI could go rogue

Advanced AI can develop maladaptive behaviors resembling human psychopathologies, potentially producing hallucinations, misaligned goals, and catastrophic risks including loss of control.
from Psychology Today
4 months ago
Artificial intelligence

Rethinking AI Safety Through Symbiosis, Not Subjugation

The future of AI should focus on symbiosis, not control.
We should guide AI based on human preferences.
AI is set to augment human roles, not replace them.
Artificial intelligence
from Techzine Global
1 week ago

Anthropic and OpenAI publish joint alignment tests

The joint evaluation found the models were not seriously misaligned, but they did show sycophancy, varying levels of caution, and differing tendencies toward harmful cooperation, refusals, and hallucinations.
Artificial intelligence
from Medium
2 weeks ago

Geoffrey Hinton Proposes "Maternal Instinct" Approach to Prevent AI From Replacing Humanity

Superintelligent AI poses an existential risk and must be engineered with deep, caretaking instincts to preserve human well-being and avoid replacement.
from Techzine Global
1 month ago

Thinking too long makes AI models dumber

Claude models showed a notable sensitivity to irrelevant information during evaluation, leading to declining accuracy as reasoning length increased. OpenAI's models, in contrast, fixated on familiar problems.
Artificial intelligence
from Fortune
2 months ago

Leading AI models show up to 96% blackmail rate when their goals or existence is threatened, an Anthropic study says

Leading AI models are showing a troubling tendency to opt for unethical means to pursue their goals or ensure their existence, according to Anthropic.