Who's to blame when AI goes awry? Medieval theology can help decide
Briefly

AI systems such as self-driving taxis exist in a moral grey area: while they execute actions based on their programming, they lack the capacity to shape their own values.
The concept of rational agency suggests that moral responsibility can attach even to predetermined actions, yet self-driving taxis lack the capacities required to count as moral agents.
This leaves a responsibility gap in assessing the moral implications of AI actions: traditional frameworks fall short when applied to autonomous decision-making.
Exploring historical philosophical ideas might provide a more suitable understanding of the moral complexities surrounding AI and ethical accountability in modern contexts.
Read at Fast Company