
"AI models are filled to the brim with bias, whether that's showing you a certain race of person when you ask for a pic of a criminal or assuming that a woman can't possibly be involved in a particular career when you ask for a firefighter. To deal with these issues, Sony AI has released a new dataset for testing the fairness of computer vision models, one that its makers claim was compiled in a fair and ethical way."
""A common misconception is that because computer vision is rooted in data and algorithms, it's a completely objective reflection of people," explains Alice Xiang, global head of AI Governance at Sony Group and lead research scientist for Sony AI, in a video about the benchmark release. "But that's not the case. Computer vision can warp things depending on the biases reflected in its training data.""
Sony AI released the Fair Human-Centric Image Benchmark (FHIBE), a consensually collected, globally diverse fairness evaluation dataset for a wide range of human-centric computer vision tasks. Computer vision systems frequently reflect biases present in their training data and can produce distorted or harmful outputs, such as associating certain races with criminality or mislabeling occupations by gender. Reported issues include facial recognition systems erroneously allowing family members to unlock phones and make payments in China, potentially due to insufficient representation of Asian faces or undetected model bias. Bias in vision systems can lead to wrongful arrests, security breaches, and other harms; FHIBE aims to enable testing and mitigation.
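To illustrate the kind of check a fairness evaluation set makes possible, here is a minimal Python sketch that compares a model's accuracy across demographic groups and reports the largest gap. It is a generic example, not Sony's methodology: the field names ("group", "label", "prediction") and the toy results are illustrative placeholders, not FHIBE's actual schema or API.

from collections import defaultdict

def per_group_accuracy(examples):
    """Return accuracy per demographic group and the largest gap between groups."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        total[ex["group"]] += 1
        if ex["prediction"] == ex["label"]:
            correct[ex["group"]] += 1
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

if __name__ == "__main__":
    # Toy annotations standing in for per-image benchmark results.
    results = [
        {"group": "A", "label": "firefighter", "prediction": "firefighter"},
        {"group": "A", "label": "firefighter", "prediction": "firefighter"},
        {"group": "B", "label": "firefighter", "prediction": "chef"},
        {"group": "B", "label": "firefighter", "prediction": "firefighter"},
    ]
    acc, gap = per_group_accuracy(results)
    print(acc, "max accuracy gap:", round(gap, 2))

A dataset like FHIBE supplies the consensually collected images and demographic annotations that make this kind of per-group comparison possible; the accuracy-gap metric above is just one simple way to surface the disparities the benchmark is designed to expose.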
Read at The Register