Decolonising AI: A UX approach to cultural biases and power dynamics. AI systems reflect cultural biases embedded in their training data, often marginalizing non-Western perspectives.
Is AI racially biased? Study finds chatbots treat Black-sounding names differently. Chatbots show significant bias based on names associated with race and gender, disadvantaging Black people and women in most tested scenarios.
Elon Musk's xAI Is Exploring a Way to Make AI More Like Donald Trump. A new technique allows AI preferences to be measured and manipulated, potentially aligning them with popular electoral views.
AI Briefing: Falling trust in AI poses a new set of challenges. Consumer trust in AI has fallen globally and in the U.S., and developed countries reject AI at higher rates than developing markets.
Tesla AI turns everyone into a white man, U.S. official says. Tesla's AI misidentifies pedestrians as tall white men, highlighting bias in the technology.
OpenAI quietly revises policy doc to remove reference to 'politically unbiased' AI | TechCrunch. OpenAI has removed language endorsing politically unbiased AI from a recent policy document, reflecting how contested the debate over AI bias has become.
Grok may soon get an 'Unhinged Mode' | TechCrunch. Grok's 'Unhinged Mode' aims to deliver more controversial and edgy responses, in line with Musk's original vision for the chatbot.
The AI Culture Wars Are Just Getting Started. Google had to switch off the image-generation capabilities of its Gemini model because of biases in its default depictions, and critics, including conservative voices such as Elon Musk, also pointed to a liberal slant in Gemini's responses.
How Bias in Medical AI Affects Diagnoses Across Different Groups | HackerNoon. Bias in medical imaging algorithms leads to diagnostic disparities between ethnic groups, and existing mitigation strategies fail to eliminate these biases.
She didn't get an apartment because of an AI-generated score and sued to help others avoid the same fate. AI screening tools like SafeRent can unfairly deny housing applications without transparency, disproportionately affecting marginalized communities.
Revealed: bias found in AI system used to detect UK benefits fraud. The UK government's AI system for detecting welfare fraud shows bias by age, disability, marital status, and nationality.
ChatGPT Messes Up Badly During Demo With CEO of Chanel. ChatGPT's outputs can reflect outdated gender biases, as Chanel CEO Leena Nair's experience demonstrated.
AI models get more election questions wrong when asked in Spanish, study shows | TechCrunch. AI models give less accurate answers to election-related questions in Spanish than in English.
For AI Systems To Deliver On Their Potential, DEI Must Be Ingrained - And Enforced | AdExchanger. AI training data is inherently biased, undermining inclusivity in AI systems; courage and a commitment to diversity are essential for an equitable AI ecosystem.
AI is making software bugs weirder and harder to fix. AT&T suffered a network outage caused by a software error, while Google's Gemini chatbot generated controversial images; generative AI models inherit biases from flawed training data and require retroactive patches.
How Best to Use ChatGPT, Gemini, and Other AI Tools? Our AI Expert Answers Your Questions. Different AI tools serve different purposes, so users should choose based on their specific needs; AI's use of personal data raises legitimate privacy concerns, and reported biases and inaccuracies are real issues, though awareness helps users work around them.
Generative AI's refusal to produce 'controversial' content can create echo chambers. Generative AI use policies often censor content, raising concerns about free speech and adherence to international standards.
AI hiring test finds bias against men with Anglo-Saxon names. Recent AI models used in mock interviews show bias against men with Anglo-Saxon names.
Is Bias in AI Quantifiable? | HackerNoon. Bias in AI is complex and multifaceted, woven into both data and algorithms, which makes it difficult to quantify.
'I applied for 15 jobs and heard nothing back - then I asked ChatGPT to write my CV and everything changed'. Large language models can reinforce gender stereotypes, but women may also be able to leverage them to navigate those biases.
AIs show distinct bias against Black and female resumes in new study. Hiring biases persist in AI models, mirroring earlier studies of human resume screening by race and gender.
AI chatbots use racist stereotypes even after anti-racism training. Large language models show racial prejudice against speakers of African American English, and commercial chatbots carry hidden biases that could affect employment and criminal justice decisions.
Covert racism in AI chatbots, precise Stone Age engineering, and the science of paper cuts. AI systems like ChatGPT exhibit covert racism, making biased judgments based on a user's dialect, particularly African American English.
LLMs have a strong bias against use of African American English. Despite advances in training, AI chatbots still reflect societal biases, particularly against speakers of African American English.
Elon Musk's Criticism of 'Woke AI' Suggests ChatGPT Could Be a Trump Administration Target. AI models absorb political bias from internet data, undermining their neutrality and reliability on contentious issues.
As AI tools get smarter, they're growing more covertly racist, experts find. AI language models like ChatGPT and Gemini hold racist stereotypes about AAVE speakers and respond to subtle markers of race, which can disadvantage job applicants who use AAVE.
OpenAI's VP of global affairs claims o1 is 'virtually perfect' at correcting bias, but the data doesn't quite back that up | TechCrunch. OpenAI's new reasoning model o1 shows promise in reducing bias through self-evaluation and adherence to ethical guidelines, though the supporting data is mixed.
Meta AI chatbot heaps praise on Kamala Harris, warns Trump is 'crude and lazy'. Meta's AI exhibits bias in its political assessments, favoring Kamala Harris over Donald Trump.
Alexa endorses Kamala Harris, refuses to talk about Trump. Amazon's Alexa displayed political bias by endorsing Kamala Harris while declining to offer similar support for Donald Trump.
Microsoft is turning to AI to make its workplace more inclusive. Diversity and workforce investment are crucial to mitigating AI bias, according to Microsoft's chief diversity officer.
Absolut vodka's attempt to fix AI is kind of weird. Absolut is using AI-generated fashion images that promote diversity to counter biases in AI image generation.
ChatGPT overwhelmingly depicts financiers, CEOs as men - and women as secretaries: study. AI image generators like ChatGPT's DALL-E show gender bias, predominantly depicting businesspeople and CEOs as men, underscoring the need for diversity and monitoring in AI development.
Is There a Silver Lining to Algorithm Bias? Pt. 1. Managing algorithmic bias in AI remains a complex challenge, as the unintended consequences of Google's Gemini chatbot's inclusivity efforts demonstrated.
People game AIs via game theory. Preventing AI bias introduced by human trainers is harder when the trainers know their behavior may influence future AI decisions.
View: How to future-proof AI regulation. MeitY reversed a controversial provision in its AI advisory, offering structural lessons on aligning India's AI regulation with the global tech race.
As a tech CEO I asked diverse female interns to expose biases in our AI chatbot. Here's what followed. A lack of diversity in AI development can perpetuate outdated stereotypes and limit innovation.
ChatGPT chooses white man as most likely person in a high-powered job. AI chatbots like ChatGPT show bias by predominantly generating images of men for high-powered job roles.
AI Recruiters Have Joined the Job Search. Who Are They Helping? AI recruitment tools like the Jennie Johnson bot are increasingly used by companies, potentially making the job search a more dehumanizing experience for applicants.
The AI Revolution Is Crushing Thousands of Languages. The dominance of English in technologies like AI can entrench cultural biases and exclude other languages and communities.
Meta's AI image generator won't create photo of Asian dating white person - even though Mark Zuckerberg is married to Priscilla Chan. Meta's AI tool Imagine refuses to generate images of interracial couples, highlighting biases in AI systems; critics argue that image generators reflect the biases of the engineers who program them.
AI image generators often give racist and sexist results: can they be fixed? Image-generating AI programs can perpetuate biases and stereotypes, and research shows that tools like Stable Diffusion and DALL·E exhibit biases in their generated images.
How we can make AI less biased against disabled people. AI models exhibit significant disability bias, and algorithmic systems can lead to unfair discrimination against people with disabilities.
Creatives can help train sexism out of AI. Datasets used to train AI contain inherent bias, and heuristics and cognitive biases shape AI decision-making.
'We definitely messed up': why did Google AI tool make offensive historical images? Google's Gemini model faced criticism for biased image generation, along with issues such as prompt injection in models like Gemini.
Fox's Tyrus Admits 'I Kinda Dig Seeing a Black George Washington' During Discussion on 'Biased' AI Program. The discussion covered AI bias in historical depictions and the debate over Google's Gemini program and diversity in AI.
Mitigating the Risk of AI Bias in Cyber Threat Detection. Addressing AI bias is crucial to preventing cybersecurity oversights and to identifying data-poisoning attempts by threat actors.