"Thomas Ristenpart, professor of computer science at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science, has received the Association for Computing Machinery Conference on Computer and Communications Security ( ACM CCS) Test of Time Award for his influential 2015 paper on privacy risks in machine learning. The paper, "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures," was co-authored with Matt Fredrikson, associate professor at Carnegie Mellon University, and Somesh Jha, professor at the University of Wisconsin-Madison."
"The award recognizes research that has had a lasting impact on the field of computer security and privacy. The paper was among the first to show how machine learning models - especially those made available through online services - can inadvertently leak sensitive information. Read more on the Cornell Tech website."
The 2015 paper revealed privacy risks in machine learning by showing that models accessible through online services can leak sensitive training data. An attacker can mount a model inversion attack that exploits the confidence scores a model reports alongside its predictions to reconstruct private information about training inputs, such as a recognizable face image for a subject in a facial-recognition model's training set. The paper identified practical scenarios in which deployed models expose sensitive attributes and proposed basic countermeasures, such as reducing the precision of reported confidence values, to limit the leakage. The findings drew broad attention to model privacy and shaped subsequent work on safer deployment practices, privacy-preserving model design, and the evaluation of leakage risks; the Test of Time Award affirms that long-term impact on the security and privacy communities.
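To make the attack concrete, here is a minimal sketch of confidence-guided model inversion, assuming white-box access to a toy softmax-regression classifier; the names (`W`, `b`, `invert`, `target_class`) are illustrative, not from the paper. In the spirit of the paper's gradient-based approach, the attacker ascends the gradient of the target class's confidence score until the input resembles a representative training example for that class.

```python
# Minimal model-inversion sketch (illustrative, not the paper's exact code).
# Assumes white-box access to a softmax-regression model with weights W, bias b.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes = 64, 10            # e.g., tiny grayscale "face" vectors
W = rng.normal(size=(n_classes, n_features)) * 0.1
b = np.zeros(n_classes)

def confidences(x):
    """Softmax confidence scores that the prediction service would report."""
    z = W @ x + b
    e = np.exp(z - z.max())               # numerically stable softmax
    return e / e.sum()

def invert(target_class, steps=500, lr=0.1):
    """Gradient ascent on log-confidence of the target class.

    Recovers an input the model finds maximally representative of
    target_class -- for a face recognizer, an approximation of a
    training subject's face.
    """
    x = np.zeros(n_features)
    for _ in range(steps):
        p = confidences(x)
        # d/dx log p[target] for softmax regression: W[target] - sum_k p[k] W[k]
        grad = W[target_class] - p @ W
        x += lr * grad
        x = np.clip(x, 0.0, 1.0)          # keep pixel values in a valid range
    return x, confidences(x)[target_class]

recovered, conf = invert(target_class=3)
print(f"confidence for class 3 after inversion: {conf:.3f}")
```

In the black-box setting the same search can be driven by confidence values observed over repeated queries rather than analytic gradients, which is why the paper's countermeasures focus on degrading the reported scores, for example by rounding them.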