ChatGPT denied 250,000 requests to generate deepfake images of candidates in the month before the election

OpenAI reported that in the month leading up to Election Day, ChatGPT rejected over 250,000 requests to generate deepfake images of the presidential candidates. The move reflects a proactive approach to protecting the integrity of the electoral process amid rising concerns about AI-generated misinformation at a pivotal moment in American democracy.
To help voters find accurate information, ChatGPT pointed users with questions about voting logistics to CanIVote.org, directing approximately one million inquiries to the site before the election. It was a notable example of using AI to promote civic engagement while guarding against misinformation.
Amid growing fears of AI misuse during the election, OpenAI's measures signal a commitment to accountability, with ChatGPT configured to limit the spread of deepfakes and misleading content. By rejecting more than 250,000 such requests, the company showcased the "guardrails" it designed to minimize the risk of harmful AI outputs.
On Election Day itself, ChatGPT directed questions about election results to credible news organizations such as the Associated Press and Reuters, ensuring that the public received reliable updates and reaffirming the role of responsible information dissemination in democratic processes.
Read at Business Insider