Generative AI is moving from drafting emails to shaping labor markets. On platforms like Fiverr, Freelancer.com, and Upwork, millions of workers rely on hourly rates to compete for jobs. As AI increasingly influences pricing recommendations, business leaders face a critical question: Do large language models (LLMs) make these pricing decisions fairly, or do they perpetuate the same biases and inequities that have long plagued human labor markets?
"I scraped millions of Google Maps restaurant reviews, and gave each reviewer's profile picture to an AI model that rates how hot they are out of 10," says San Francisco-based website creater Riley Walz. "This map shows how attractive each restaurant's clientele is. Red means hot, blue means not."
"The findings, detailed in a preprint paper titled "AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights," further amplify persistent concerns about AI bias. Authors Jiannan Xu, a PhD candidate at the University of Maryland, Gujie Li, assistant professor at the National University of Singapore, and Jane Yi Jiang, assistant professor at Ohio State University, found that "LLMs consistently prefer resumes generated by themselves over those written by humans or produced by alternative models, even when content quality is controlled."
The goal of the Ethical AI Guidebook is to spark a mindset shift by encouraging content creators, agencies, and AI developers to make inclusivity a core part of how they generate and use imagery.
Rather than investigating deepfake detection models, the study looks more broadly at AI models used for "fake news detection," highlighting systemic biases in the technology.