Unnamed OpenAI researchers indicated that the upcoming model 'Orion' is not reliably better than its predecessor, with a smaller performance leap than the jump from GPT-3 to GPT-4.
Ilya Sutskever expressed concern that scaling language models may have hit a plateau, indicating that new training methods are now crucial for future advances.
Sutskever emphasized that the focus has shifted from scaling as the primary method of improvement to finding 'the next thing' that could spark true innovation.
Experts noted that a significant factor hindering LLM advancement is the diminishing availability of high-quality textual data, as much of the easily accessible material has already been used.