The goal of the OpenAI-funded work is to train algorithms to predict human moral judgements in scenarios involving conflicts among morally relevant features in medicine, law, and business.
Sinnott-Armstrong and his co-investigator, Jana Borg, have produced several studies and a book on AI's potential to serve as a 'moral GPS' that helps humans make better judgements.
The nonprofit Allen Institute for AI previously built a tool called Ask Delphi that was meant to give ethically sound recommendations, but it was easily manipulated into endorsing unethical behaviour.
It is far from clear that a concept as nuanced as morality is within reach of today's technology, which raises questions about whether AI can genuinely understand ethics.