AI's black box problem: Why is it still indecipherable to researchers?
Briefly

When a neural network is running, even the most specialized researchers are often left in the dark about what is happening inside it, because these systems operate as black boxes.
While the mathematics underpinning these algorithms is well understood, the behavior the network produces is not, leaving researchers unable to explain its outputs.
In contrast, other AI algorithms, such as decision trees or linear regression, are far more interpretable, making their decision-making processes transparent.
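A minimal sketch of what that interpretability looks like in practice (using scikit-learn and synthetic data, both of which are assumptions not taken from the article): the fitted parameters of these models map directly onto human-readable explanations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                              # two synthetic features
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Linear regression: each coefficient states how much the prediction
# changes per unit of the corresponding feature.
lin = LinearRegression().fit(X, y)
print("coefficients:", lin.coef_, "intercept:", lin.intercept_)

# Decision tree: the fitted model can be printed as explicit if/else rules.
tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["x0", "x1"]))
```

Reading the coefficients or the printed rules is enough to trace exactly why either model produced a given prediction, which is not possible for a trained neural network.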
The very architecture of neural networks hampers transparency, making it necessary to visualize how their interconnected neurons behave in order to comprehend their workings.
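A rough sketch of why that is hard (the layer sizes below are illustrative assumptions, not figures from the article): even a modest fully connected network has hundreds of thousands of weights, far too many to trace by hand.

```python
# Count the trainable parameters of a small fully connected network,
# e.g. an image classifier with two hidden layers.
layer_sizes = [784, 512, 256, 10]

params = sum(
    n_in * n_out + n_out               # weights plus biases for each layer
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
)
print(f"trainable parameters: {params:,}")  # -> 535,818
```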
Read at english.elpais.com