Apple releases OpenELM, a slightly more accurate LLM
Briefly

Apple's claim to openness is substantial: the company released not just the model weights but also the training and evaluation framework, offering full transparency into how the language model was trained on publicly available datasets.
OpenELM diverges from academic norms in two ways: the paper omits the authors' email addresses, and the software is not released under a recognized open-source license, allowing Apple to retain rights should derivative works infringe.
OpenELM's innovative use of layer-wise scaling allocates parameters more efficiently across the layers of a transformer model, an approach that yielded improved accuracy in benchmark tests.
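The layer-wise scaling idea can be sketched roughly: rather than giving every transformer layer an identical shape, per-layer attention-head counts and feed-forward widths are interpolated across depth, so early layers are narrower and later layers wider. The interpolation ranges and dimensions below are illustrative assumptions, not Apple's actual configuration.

```python
# Hedged sketch of layer-wise scaling: per-layer head counts and
# FFN widths are linearly interpolated from the first to the last
# layer. The alpha/beta ranges and model dims here are made up.

def layerwise_scaling(num_layers, d_model, head_dim,
                      alpha=(0.5, 1.0), beta=(2.0, 4.0)):
    configs = []
    for i in range(num_layers):
        t = i / (num_layers - 1)                  # 0.0 at first layer, 1.0 at last
        a = alpha[0] + (alpha[1] - alpha[0]) * t  # attention scaling factor
        b = beta[0] + (beta[1] - beta[0]) * t     # FFN width multiplier
        n_heads = max(1, int(a * d_model / head_dim))
        ffn_dim = int(b * d_model)
        configs.append({"layer": i, "heads": n_heads, "ffn_dim": ffn_dim})
    return configs

for cfg in layerwise_scaling(4, d_model=768, head_dim=64):
    print(cfg)
```

Under this sketch, the first layer would get 6 heads and a 1536-wide FFN while the last gets 12 heads and a 3072-wide FFN, spending the same parameter budget unevenly across depth.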
Read at The Register