AI safety conversations largely overlook the serious risks of AI-enabled military systems that violate human rights and lack safety engineering standards.
There is a critical need to assess the current risks and failure modes of AI armaments such as Lavender and Gospel, which have already been implicated in civilian massacres.
Israeli startups' export of 'battle-tested' AI armaments suggests a worrisome trend toward widespread use, with Gaza potentially serving as a testing ground for these lethal autonomous systems.