
""Prediction models trained on provenance-unknown data have no place in clinical decision-making. They are intrinsically unreliable," says Soumyadeep Bhaumik, a public-health researcher at the George Institute for Global Health in Sydney, Australia."
""It was an enormous surprise to come across something like that," Barnett says, referring to the oddities found in the data sets used for training AI models."
Researchers found that artificial intelligence models for predicting stroke and diabetes risk have been trained on questionable data sets. An analysis of 124 peer-reviewed papers revealed statistical oddities suggesting potential data fabrication. Some of these models have already been deployed in hospitals, raising concerns about their reliability. Experts caution that models trained on data of unknown provenance are unreliable and can lead to incorrect clinical decisions, and they are calling on researchers and journals to enforce stricter data-source disclosure requirements to safeguard the integrity of medical AI applications.
#ai-in-healthcare #data-integrity #stroke-risk-prediction #diabetes-risk-prediction #clinical-decision-making
Read at Nature