Weekly AI recap: Musk sues OpenAI, Google (sorta) apologizes for Gemini controversy
Elon Musk filed a lawsuit against OpenAI and Sam Altman, alleging they prioritized financial interests over the original mission of creating beneficial AI. [ more ]
Google's internal AI ethics watchdog, known as RESIN, has lost its leader and is being restructured.
The team's role was to review internal projects for compatibility with Google's AI principles and ensure responsible development and use of AI technology. [ more ]
OpenAI is exploring the possibility of allowing its AI to generate NSFW content in appropriate contexts, considering user and societal expectations. [ more ]
The next AI winter could be caused by users' trust issues, and 'mindful friction' can keep it from happening | Inside Higher Ed
AI in the workplace should act as a co-pilot, not on autopilot, with a need for next-level controls and designing products that consider both AI capabilities and human judgment. [ more ]
Mistral CEO Says AI Companies Are Trying to Build God
The CEO of Mistral, a new AI firm, rejects the notion of creating artificial general intelligence (AGI), equating it to a desire to create God. [ more ]
International Writing Guilds Set Out 'Ethical Framework' for Use of AI in Screenwriting
Professional screenwriters guilds are working on creating an ethical framework for AI use in screenwriting.
Key principles include maintaining writers' creative authority, transparency in AI use, consent for training AI on writers' work, and fair compensation. [ more ]
OpenAI Secretly Trained GPT-4 With More Than a Million Hours of Transcribed YouTube Videos
OpenAI reportedly transcribed more than a million hours of YouTube videos to train GPT-4, and its text-to-video generator, Sora, may also have been trained on YouTube content, raising concerns about data sourcing and copyright infringement.
AI companies like OpenAI are utilizing large amounts of potentially murky data for training models, leading to legal challenges regarding fair compensation and copyright infringement. [ more ]
AI-generated legal outputs often contain errors and falsehoods, leading to real-world consequences.
Hallucination, where AI models produce responses that don't align with reality, poses a significant challenge in the use of large language models. [ more ]
Arvida Byström's artwork explores womanhood in the context of the internet, challenging societal norms.
Her latest project involves creating pornographic self-portraits with an ethically questionable AI tool and selling them as real photographs on a subscription site. [ more ]
AI has opposing factions: doomers who fear disaster and accelerationists who predict abundance, though neither represents the industry as a whole.
AI ethicists work to mitigate harm from AI, critiquing racial biases in predictive policing and school algorithms; they argue that doomers overlook these existing harms. [ more ]
A pink slime site used AI to rewrite our AI ethics article - Poynter
Shortly after Poynter released its AI ethics guide, a near-identical article, likely written by artificial intelligence, appeared on a sketchy website.
The Tech Gate article copied Poynter's content with minor changes, likely generated by AI, impacting the credibility of original news sources. [ more ]
Should artists be paid for training data? OpenAI VP wouldn't say | TechCrunch
Artists whose work was used to train AI like ChatGPT may not be compensated due to legal complexities and fair use arguments by companies like OpenAI. [ more ]
Researchers Develop New Technique to Wipe Dangerous Knowledge From AI Systems
Researchers have introduced a method to detect and remove potentially dangerous knowledge from AI models.
Experts from various fields collaborated to develop a set of questions to evaluate whether AI models could contribute to creating and deploying weapons of mass destruction. [ more ]
Act now on AI before it's too late, says UNESCO's AI lead
The second Global Forum on the Ethics of AI organized by UNESCO is focused on broadening the conversation around AI risks and considering AI's impacts beyond those discussed by first-world countries and business leaders.
UNESCO aims to move away from just having principles on AI ethics and focus on practical implementation through the Readiness Assessment Methodology (RAM) to measure countries' commitments. [ more ]
Etching AI Controls Into Silicon Could Keep Doomsday at Bay
Researchers are exploring ways to encode rules governing the training and deployment of AI algorithms directly into computer chips.
This approach could prevent rogue nations or irresponsible companies from developing dangerous AI.
Using trusted components in chips or etching new ones could limit access to computing power and require licenses for the deployment of powerful AI systems. [ more ]
OpenAI went back on a promise to make key documents public
OpenAI, founded by tech entrepreneurs including Elon Musk, has changed its policy on transparency and no longer provides internal documents to the public.
The change in policy comes after the recent turmoil within OpenAI, including the firing and subsequent reinstatement of CEO Sam Altman. [ more ]
ChatGPT and GPT-4, AI models developed by OpenAI, have passed several legal exams, including the Multistate Bar Examination and the Multistate Professional Responsibility Examination.
Law schools must adapt to the presence of AI by incorporating it into their curriculum and understanding its limitations and risks in the practice of law. [ more ]