A.I.'s Black Boxes Just Got a Little Less Mysterious
Briefly

One of the weirder, more unnerving things about today's leading artificial intelligence systems is that nobody, not even the people who build them, really knows how the systems work. Instead, these systems essentially learn on their own, by ingesting massive amounts of data and identifying patterns and relationships in language.
One consequence of building A.I. systems this way is that it's difficult to reverse-engineer them or to fix problems by identifying specific bugs in the code. When large language models do misbehave, nobody can really explain why.
The inscrutability of large language models is not just an annoyance but a major reason some researchers fear that powerful A.I. systems could eventually become a threat to humanity.
Read at www.nytimes.com