
"AI-driven tools drive intrusive surveillance and generate unreliable risk assessments that can wrongly flag individuals as threats. These are both technical failures and human rights failures, with discriminatory outcomes baked into the design and deployment of systems that affect people's liberty, fair trial rights, and access to remedy."
The deployment of artificial intelligence in counter-terrorism operations, through predictive risk scoring and automated surveillance, raises critical human rights concerns. AI-driven systems pose a dual threat: they enable intrusive surveillance and produce unreliable risk assessments that wrongly identify individuals as security threats. These failures are both technical deficiencies and human rights violations, with discriminatory outcomes embedded in the design and deployment of the systems. The consequences directly affect individuals' liberty, fair trial rights, and access to legal remedies. Privacy International highlighted these dangers at a UN event discussing the position paper by the Special Rapporteur on Counter-Terrorism and Human Rights examining AI's human rights risks in counter-terrorism contexts.
#ai-and-counter-terrorism #human-rights-violations #surveillance-and-privacy #algorithmic-discrimination #due-process
Read at Privacy International