Algorithmic decision-making (ADM) technologies are increasingly being adopted in 2024 by landlords, employers, and police, posing significant risks to personal freedom and to access to basic needs such as housing and work.
The EFF warns that ADM systems often rely on biased data, perpetuating historical injustices, and that their outputs are difficult to challenge, particularly when the systems lack transparency.
Decision-makers can also use ADM outputs as cover for their own biases, while institutions tend to treat the adoption of these technologies as a routine procurement rather than the significant public policy decision it actually is.
While some machine learning applications can be beneficial, the EFF regards using these technologies to make consequential decisions about individuals as one of their most problematic uses.