The article explores concerns surrounding the use of automated systems like AutoRIF in federal worker layoffs, highlighting fears of bias and a lack of transparency. Kunkler emphasizes that these systems can obscure reasons for firings and potentially violate workers' rights. Notably, Don Moynihan notes that automating flawed assumptions can significantly amplify errors. To protect workers, Kunkler advocates for union support and legislative intervention to regulate these opaque tools, emphasizing the need for rigorous testing and transparency in decision-making processes affecting employment.
"It is not clear how AutoRIF has been modified or whether AI is involved in the RIF mandate (through AutoRIF or independently)," Kunkler wrote. "However, fears of AI-driven mass-firings of federal workers are not unfounded. Elon Musk and the Trump Administration have made no secret of their affection for the dodgy technology and their intentions to use it to make budget cuts. And, in fact, they have already tried adding AI to workforce decisions."
"There is often no insight into how the tool works, what data it is being fed, or how it is weighing different data in its analysis," Kunkler said. "The logic behind a given decision is not accessible to the worker and, in the government context, it is near impossible to know how or whether the tool is adhering to the statutory and regulatory requirements a federal employment tool would need to follow."
The situation gets even starker when you imagine mistakes on a mass scale. Don Moynihan, a public policy professor at the University of Michigan, told Reuters that "if you automate bad assumptions into a process, then the scale of the error becomes far greater than an individual could undertake."
The only way to shield workers from potentially illegal firings, Kunkler suggested, is to support unions defending workers' rights while pushing lawmakers to intervene and regulate these opaque tools.