Gay CEO of OpenAI unveils plan to prevent widespread fraud in 2024 elections

The company's three-point plan seeks to prevent abuse, such as misleading "deepfake" images and audio or applications that impersonate candidates to influence voters; to make it easier for people to detect AI-generated content; and to ensure that users have access to accurate election information.
To stop such misuse, OpenAI said it does not allow people to build applications on its technology for political campaigning or lobbying. This includes barring chatbots that pretend to be real people (such as candidates) or institutions (such as local governments and election boards), as well as applications that discourage people from voting or misrepresent voting processes and qualifications.
Users will be able to report such chatbots to OpenAI for deactivation. Concurrently, OpenAI said its ChatGPT app will increasingly generate responses based on real-time reporting that is attributed and linked to well-trusted news sources, so people can see where their information comes from. ChatGPT will also direct users with questions about voting to official election information.
Read at LGBTQ Nation