
"The model - detailed in a preprint on the arXiv server last month - is not readily comparable to an LLM. It is highly specialized, excelling only on the type of logic puzzles on which it is trained, such as sudokus and mazes, and it doesn't 'understand' or generate language. But its ability to perform so well on so few resources - it is 10,000 times smaller than frontier LLMs - suggests a possible route for boosting this capability more widely in AI, say researchers."
"A test of artificial intelligence "The results are very significant in my opinion," says François Chollet, co-founder of AI firm Ndea, who created the ARC-AGI test. Because such models need to be trained from scratch on each new problem, they are "relatively impractical", but "I expect a lot more research to come out that will build on top of these results", he adds."
The Tiny Recursive Model (TRM) is a small-scale AI that learns from limited data and excels at visual logic puzzles such as sudokus and mazes. TRM outperformed some leading large language models on the ARC-AGI benchmark, which measures abstract visual reasoning. The model is highly specialized: it does not understand or generate language, and it is about 10,000 times smaller than frontier LLMs. Because TRM must be trained from scratch for each new problem, it is relatively impractical for general use, but its efficiency suggests that alternative architectures could boost reasoning capabilities in AI more widely if adapted or scaled appropriately.
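To make the contrast with large feed-forward LLMs concrete, here is a minimal, hypothetical PyTorch sketch of the kind of recursive-refinement loop the model's name suggests: one tiny network applied repeatedly to improve a candidate answer, rather than a single pass through billions of parameters. All names, dimensions, and the update rule are illustrative assumptions, not the architecture from the preprint.

import torch
import torch.nn as nn

class TinyRecursiveSketch(nn.Module):
    """Hypothetical sketch: a single tiny block reused at every refinement step."""

    def __init__(self, dim: int = 128):
        super().__init__()
        # One small block reused repeatedly, instead of a deep stack of
        # distinct layers; this reuse is what keeps the parameter count tiny.
        self.core = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, puzzle: torch.Tensor, steps: int = 16) -> torch.Tensor:
        answer = torch.zeros_like(puzzle)  # start from a blank guess
        for _ in range(steps):
            # Recursively refine the current answer, conditioned on the puzzle.
            answer = answer + self.core(torch.cat([puzzle, answer], dim=-1))
        return answer

# As the article notes, each instance would be trained from scratch on a
# single puzzle family (e.g. sudokus), then again separately for mazes.
model = TinyRecursiveSketch()
refined = model(torch.randn(1, 128))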
Read at Nature