Uncle Sam wants AI models that can detect healthcare fraud
Briefly

CMS is launching a competition to develop explainable AI and ML models that identify anomalous patterns in Medicare claims and translate those patterns into indicators of potential fraud. Models must not only detect anomalies but also explain the underlying factors driving them so similar schemes can be proactively flagged across claims data. Transparent decision processes are required to allow program integrity teams to understand how conclusions are reached and assess model validity for legal or enforcement contexts. The initial phase requires submission of a structured white paper describing model design, outputs, and how explainability will be achieved and used.
The competition is seeking "market-based research" projects that use AI and ML models to "identify anomalous patterns within Medicare claims data, translate detected patterns into indicators of fraud, and propose scalable analytic and policy solutions to address the identified indicators." The project isn't stopping there, however. "Pattern detection alone is not sufficient to determine, let alone prove, fraudulent behavior, especially in legal or enforcement contexts," CMS said in the overview of the challenge, explaining why it's looking for more than pattern identification alone.
Of course, since it's asking for explainable AI, CMS wants models with transparent decision processes so it can understand how they arrive at their conclusions, a key part of determining whether the models can actually identify those underlying factors. CMS is leaving it relatively open to participants to determine how best to meet the cook-off's objectives, noting only that the first phase of the program involves submission of a white paper.
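For a rough sense of what "explainable" means here, as opposed to a black-box score: an anomaly detector can report not just *that* a claim looks unusual, but *which* features make it unusual. The sketch below is purely illustrative, with made-up claims data and thresholds (none of it comes from CMS); it flags outliers by per-feature z-score and surfaces the offending features as the explanation.

```python
# Minimal sketch with hypothetical claims data: flag anomalous claims by
# z-score, and report which features drove each flag so reviewers can see
# the "why" behind a detection, not just an opaque score.
from statistics import mean, stdev

claims = [
    {"provider": "A", "amount": 120.0, "visits": 2},
    {"provider": "B", "amount": 135.0, "visits": 3},
    {"provider": "C", "amount": 110.0, "visits": 2},
    {"provider": "D", "amount": 980.0, "visits": 14},  # unusual on both features
]

features = ["amount", "visits"]
# Per-feature mean and standard deviation across the (toy) claims population.
stats = {f: (mean(c[f] for c in claims), stdev(c[f] for c in claims)) for f in features}

def explain(claim, threshold=1.2):
    """Return the features whose z-score exceeds the threshold: the explanation."""
    reasons = {}
    for f in features:
        mu, sigma = stats[f]
        z = (claim[f] - mu) / sigma
        if abs(z) > threshold:
            reasons[f] = round(z, 2)
    return reasons

for c in claims:
    reasons = explain(c)
    if reasons:
        print(c["provider"], "flagged:", reasons)
```

Only provider D is flagged, and the output names both `amount` and `visits` as the drivers, which is the shape of traceability CMS is asking for: a reviewer can check each contributing factor rather than trust an unexplained verdict. A real submission would of course use far richer models, but the same principle applies.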
Read more at The Register