"I think that the scaling hypothesis landscape is much more multidimensional, and we can scale multiple different things," says OpenAI researcher Jerry Tworek. His research in recent years has focused on AI models that can "think" through different approaches to solving complex problems, rather than relying mostly on what they learned during pre-training to generate an answer.
Tworek led the effort at OpenAI to develop the first major model to prove that the new approach works: o1. At the end of August, OpenAI's o1-preview model rose to the top of the LiveBench leaderboard, which ranks the intelligence of large frontier models.
"What we managed to train our models to do is this very natural way of reasoning," Tworek says. "It looks a little bit more human. It is the model trying things in a very fluid, intelligent fashion."