In the darker corners of the tech industry, an untold number of professionals have come to believe in a controversial theory known as Roko's basilisk, which holds that a future AI superintelligence would torture any human who didn't help it come into existence. Now, in a twist that should please any writer, the guy spearheading Meta's AI-powered smart glasses is named Rocco Basilico - which has delighted and freaked out some online observers.
Imagine an avid reader who one day flips through a summer book preview in their local paper. Among the books listed is a novel by one of the reader's favorite writers, Isabel Allende. Intrigued, the reader heads to their local library to see if it has any copies of the novel, called Tidewater Dreams, in stock. Here's the problem: Tidewater Dreams doesn't actually exist; instead, it was part of an AI-generated article that included several nonexistent books attributed to acclaimed authors.
The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that "the majority of mainstream evaluations reward hallucinatory behavior." The fundamental problem, the authors argue, is that language models are primarily evaluated with exam-style benchmarks that penalize uncertainty: a model earns more points by guessing confidently than by admitting it doesn't know, so training against those benchmarks rewards guesswork rather than honesty about uncertainty.
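The incentive the paper describes can be seen with simple arithmetic. The sketch below is illustrative only (it is not from the paper and uses hypothetical numbers): under plain accuracy scoring, a blind guess on a 4-option question has positive expected value while abstaining scores zero, so guessing always "wins"; a negative-marking scheme that docks 1/(K-1) points per wrong answer removes that edge.

```python
# Illustrative sketch: why accuracy-only grading rewards guessing
# over abstaining on questions a model genuinely doesn't know.
K = 4                    # options per multiple-choice question (assumed)
p_right = 1 / K          # chance a blind guess happens to be correct

# Scheme 1: plain accuracy -- wrong answers and abstentions both score 0.
accuracy_guess = p_right * 1.0      # expected 0.25 points per guess
accuracy_abstain = 0.0              # abstaining never earns anything

# Scheme 2: negative marking -- wrong answers cost 1/(K-1) points,
# as in some human exams, so blind guessing has zero expected value.
penalized_guess = p_right * 1.0 + (1 - p_right) * (-1 / (K - 1))
penalized_abstain = 0.0

print(accuracy_guess > accuracy_abstain)    # guessing beats abstaining
print(penalized_guess > penalized_abstain)  # no longer true once errors cost points
```

Under the first scheme a model that always guesses outscores one that honestly abstains; under the second, blind guessing is no better than saying "I don't know."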
Each citation, each argument, each procedural decision is a mark upon the clay, an indelible impression. [I]n the ancient libraries of Ashurbanipal, scribes carried their stylus as both tool and sacred trust, understanding that every mark upon clay would endure long beyond their mortal span.
The pilot scheme allows AI chatbots to draft Community Notes, with the aim of increasing the speed and scale at which notes appear on X.