AI2 open sources text-generating AI models -- and the data used to train them | TechCrunch
Briefly

Called OLMo, an acronym for 'Open Language Models,' the models, along with Dolma, the data set used to train them and one of the largest public data sets of its kind, were designed to study the high-level science behind text-generating AI, according to AI2 senior software engineer Dirk Groeneveld.
Groeneveld makes the case that many existing models can't truly be considered open because they were trained 'behind closed doors' on proprietary, opaque sets of data.
Read at TechCrunch