Large language models also work for protein structures
Briefly

The success of ChatGPT and its competitors is based on what's termed emergent behaviors. These systems, called large language models (LLMs), weren't trained to output natural-sounding language (or effective malware); they were simply tasked with tracking the statistics of word usage. But, given a large enough training set of language samples and a sufficiently complex neural network, their training resulted in an internal representation that "understood" English usage and a large compendium of facts.
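The "tracking the statistics of word usage" idea can be reduced to its simplest form, a bigram model that counts which word tends to follow which and predicts the most frequent successor. This is a minimal sketch, not the article's method; real LLMs use neural networks over far longer contexts, but the training signal is the same kind of next-word statistics.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, how often each following word occurs."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str):
    """Return the statistically most likely next word, if any was seen."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Toy corpus (hypothetical example text, not from the article)
counts = train_bigrams(
    "the model predicts the next word and the model learns word statistics"
)
print(predict_next(counts, "the"))  # "model" follows "the" most often
```

Scaling this counting idea up, with a deep network standing in for the count table, is what yields the emergent behaviors the article describes.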
Read at Ars Technica