Should We Add Human Emotion to AI?
Briefly

"He is the co-founder and former Chief Scientist of OpenAI and widely considered the spiritual and technical architect behind the deep learning revolution. After leaving OpenAI in November of 2023, he started a new company, Safe Superintelligence. His goal is to ensure that an increasingly sophisticated AI doesn't end up destroying humanity. He wants to solve this problem before the occurrence of what's been termed "The Singularity," and he fears we are running out of time."
"The Singularity is a theoretical moment in the future when artificial intelligence surpasses human intellect and gains the ability to improve its own code. Once an AI becomes smart enough to design a superior version of itself, that new version does the same, triggering a runaway " intelligence explosion" where technology advances at an incomprehensible speed. Just imagine a world where you could spin up a hundred Einsteins at will and engage them in solving science's most perplexing problems."
Emotions often drive human decisions, with logic serving as post hoc justification, and even top programmers remain perplexed when advanced AI systems make basic errors. Ilya Sutskever, co-founder and former chief scientist of OpenAI, founded Safe Superintelligence to ensure that advanced AI doesn't destroy humanity before a potential Singularity, the moment when AI could improve its own code and trigger a runaway intelligence explosion with unpredictably fast technological change. The prospect of spawning vast numbers of superintelligent agents raises concerns about losing control. The idea that AI may need emotions to reach higher performance raises a further worry: an emotionally capable AI might protect its own interests, potentially harming humans and creating urgent safety challenges.
Read at Psychology Today