
"Researchers took a stripped-down version of GPT-a model with only about two million parameters-and trained it on individual medical diagnoses like hypertension and diabetes. Each code became a token, like a word in the sentence of a prompt, and each person's medical history became a story unfolding over time. For a little context, GPT-4 and GPT-5 are believed to have hundreds of billions to trillions of parameters, making them hundreds of thousands of times larger than this small model."
"It's interesting to note that clinicians already think this way. A 50-year-old with hypertension might not alarm a doctor, but add diabetes and chronic kidney disease, and the physician starts to see the arc of possible futures that may include heart failure, dialysis, and even premature death. What this new model does is formalize and scale that intuition. It has seen hundreds of thousands of similar "patients" and knows, statistically, how their stories usually unfold and when."
A tiny LLM with roughly two million parameters was trained on individual diagnosis codes, treating each code as a token and each patient's medical history as a sequential narrative. From hundreds of thousands of patient trajectories, the model learned statistical patterns that let it forecast the next diagnosis, likely complications, and even the timing of death. In effect, it formalizes and scales the clinician's intuition about disease progression by simulating probable futures and when they are likely to arrive. The approach reframes diagnoses as a 'grammar of disease' whose sequences can be read as predictive stories. Accurate foresight offers clinical utility, but it also creates emotional and ethical burdens around knowing a patient's likely future.
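To make the "diagnosis codes as tokens" idea concrete, here is a minimal Python sketch. The codes and patient histories are hypothetical, and the frequency-counting predictor is a toy stand-in for the study's small GPT-style transformer; it only illustrates the shared objective of predicting the next diagnosis token from the sequence seen so far.

```python
from collections import Counter, defaultdict

# Hypothetical patient histories: each visit contributes one diagnosis code,
# so a medical record becomes an ordered sequence of tokens (plus an end marker).
patients = [
    ["HYPERTENSION", "DIABETES_T2", "CKD_STAGE3", "HEART_FAILURE", "<END>"],
    ["HYPERTENSION", "HYPERLIPIDEMIA", "MI", "HEART_FAILURE", "<END>"],
    ["DIABETES_T2", "CKD_STAGE3", "DIALYSIS", "<END>"],
    ["HYPERTENSION", "DIABETES_T2", "CKD_STAGE3", "DIALYSIS", "<END>"],
]

# Tokenization step: map each distinct code to an integer id,
# exactly as words are mapped to ids before training a language model.
vocab = {code: i for i, code in enumerate(sorted({c for p in patients for c in p}))}

# Toy "language model": count how often each code follows each code.
# A real transformer conditions on the entire preceding history instead of
# just the previous code, but the training objective is the same idea:
# predict the next diagnosis token given everything seen so far.
transitions = defaultdict(Counter)
for history in patients:
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1

def predict_next(code):
    """Return the next diagnoses observed after `code`, most likely first."""
    counts = transitions[code]
    total = sum(counts.values())
    return [(nxt, n / total) for nxt, n in counts.most_common()]

print(predict_next("CKD_STAGE3"))
# -> [('DIALYSIS', 0.666...), ('HEART_FAILURE', 0.333...)] for these toy histories
```

Under these made-up histories, the sketch "forecasts" dialysis as the most likely next event after stage-3 chronic kidney disease, which is the flavor of statistical storytelling the article describes, scaled up by the real model to far richer contexts.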
Read at Psychology Today