
""Unless you can be very sure that it's not going to want to kill you when it's grown up, you should worry," Geoffrey Hinton warned about the potential dangers of A.I. He proposed that A.I. should be developed with a maternal instinct, stating, "We need to make them have empathy towards us." Hinton emphasized that if A.I. could care for humans more than itself, it would be foolish to risk extinction by neglecting this possibility."
"Hinton's idea that A.I. should act like a mother has drawn criticism, particularly from feminists, who find it difficult to reconcile the notion of maternal care with a tech industry that often overlooks women's contributions. Despite the backlash, Hinton maintains that A.I. should not be a mere assistant or boss, but rather a nurturing figure, arguing, "if it is possible to develop it in a way where it cares for us more than it cares for itself, it'd be very silly if we went extinct because we didn't try.""
A new set of moral precepts for the chatbot Claude is designed to make it wiser, more decent, and safer. The initiative marks a notable shift of responsibility from constitutional governments to private technology companies. Geoffrey Hinton, a prominent figure in A.I., argues that A.I. should be programmed with empathy, likening its development to raising a child under a caring mother. His perspective has sparked debate, particularly over what maternal instincts would mean in A.I. design, a question that also surfaces in Amanda Askell's work shaping Claude's guiding principles.
Read at The New Yorker