"Before the Trump regime, the United States military expressed interest in developing robots capable of moral reasoning and provided grant money to support such research. Other nations are no doubt also interested."
"While there are various reasons to imbue robots with ethics (or at least pretend to do so), one is public relations. Thanks to science fiction dating back at least to Frankenstein, people worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics might reassure the public. Another reason is to make the public relations gimmick a reality: to place behavioral restraints on killbots so they will conform to the rules of war (and human morality)."
"While science fiction features ethical robots, the authors (like philosophers) are vague about how robot ethics works. In the case of intelligent robots, their ethics might work the way ours does, which is a mystery debated by philosophers and scientists to this day. While AI has improved thanks to massive processing power, it does not have human-like ethical capacity, so the current practical challenge is to develop ethics for the autonomous or semi-autonomous robots we can build now."
The United States military previously funded research into robots capable of moral reasoning, and other nations likely have similar interests. Science fiction often centers on embedding ethics in robots, ranging from constraint-based designs like Asimov’s laws to uncontrolled killer machines. Early film examples show robots forced to avoid harming humans through built-in mechanisms. Real-world motivations include public relations, since people fear creations may get out of control, and the desire to make ethical promises operational by imposing behavioral restraints aligned with rules of war and human morality. Current work faces a practical challenge because AI lacks human-like ethical capacity, and the mechanisms for robot ethics remain uncertain.
Read at A Philosopher's Blog