
"Allegations have flown about the ways AI companies can match, then surpass, the performance of their competitors-particularly how Chinese AI firms can release models rivaling American ones within days or weeks. Google has long complained about the risks of distillation, where companies pepper models with prompts designed to extract internal reasoning patterns and logic by generating massive response datasets, which are then used to train cheaper clone models."
"The shift is happening both in the speed of releases and the nature of the improvements. Longpre argues that the frontier gap between the best closed models and open-weight alternatives is decreasing drastically. "The gap between that and fully open-source or open-weight models is about three to six months," he explains, pointing to research from the nonprofit research organization Epoch AI tracking model development."
AI breakthroughs are occurring on a timescale of weeks as companies rapidly release new models and rivals produce similar offerings. Anthropic released Opus 4.6, followed within a week by Z.ai’s GLM-5, which some observers compared to Opus. Models are quickly downloaded, compressed, and re-released to run locally without internet access. Distillation techniques and heavy prompting are used to extract model behavior and train cheaper clones. Allegations include prompting large models tens of thousands of times to replicate capabilities. The effective gap between top closed models and open-weight alternatives is shrinking to roughly three to six months.
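The distillation-by-prompting process described above can be sketched as a minimal loop: query an expensive "teacher" model tens of thousands of times, record its responses, and use the resulting dataset to train a cheaper student. The example below is a self-contained toy; `teacher_model` and `build_dataset` are illustrative stand-ins, not any real model API.

```python
def teacher_model(prompt: str) -> str:
    # Stand-in for an expensive hosted model; a real pipeline would
    # call a provider API here and capture the full response text.
    return f"reasoned answer to: {prompt}"

def build_dataset(prompts: list[str]) -> list[tuple[str, str]]:
    # The "pepper with prompts" step: collect (prompt, response) pairs
    # at scale to form a supervised training set for a clone model.
    return [(p, teacher_model(p)) for p in prompts]

prompts = [f"question {i}" for i in range(10_000)]
dataset = build_dataset(prompts)
# A student model would then be fine-tuned on `dataset` so that its
# outputs imitate the teacher's behavior at a fraction of the cost.
print(len(dataset))  # 10000
```

The expense asymmetry is the whole point: generating the dataset costs only inference-time API calls, while the teacher's original training run cost orders of magnitude more.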
Read at Fast Company