Medieval theology has an old take on a new problem: AI responsibility

As a society, we face a conundrum: it seems that no one, and no single thing, is morally responsible for an AI system's actions - what philosophers call a responsibility gap. Present-day theories of moral responsibility simply do not seem suited to situations involving autonomous or semi-autonomous AI systems.
However, these kinds of AI systems are essentially deterministic: Their behavior is dictated by their code and the incoming sensor data, even if observers might struggle to predict that behavior.
Perhaps the AI system itself - an autonomous taxi, say - should be praised and blamed. According to many modern philosophers, rational agents can be morally responsible for their actions even if those actions were completely predetermined - whether by neuroscience or by code.
Read at The Conversation