Yann LeCun proposes that AI systems must incorporate fundamental guardrails, such as 'submission to humans' and 'empathy', to ensure safety and protect humanity. He endorses Geoffrey Hinton's idea of embedding 'maternal instincts' in AI systems, framing it as a simplified version of his own proposal: hardwiring objectives so that AI behavior is driven solely toward fulfilling human-defined goals. Simple safety guardrails, such as prohibitions on harmful actions, are essential in his view. Drawing a parallel with instincts and drives in humans and animals, he advocates an 'objective-driven AI' architecture that keeps AI operating within defined moral and ethical boundaries.
"Geoff is basically proposing a simplified version of what I've been saying for several years: hardwire the architecture of AI systems so that the only actions they can take are towards completing objectives we give them, subject to guardrails."
"We need to make them have empathy toward us. Otherwise, humans are going to be history."
"Those hardwired objectives/guardrails would be the AI equivalent of instinct or drives in animals and humans."
"Submission to humans and empathy should be key guardrails for AI to protect humans from potential future harm."