"While many have been discussing the privacy risks of people following the ChatGPT caricature trend, the prompt reveals something else alarming - people are talking to their LLMs about work," said Josh Davies, principal market strategist at Fortra, in an email to eSecurityPlanet. He added, "If they are not using a sanctioned ChatGPT instance, they may be inputting sensitive work information into a public LLM. Those who publicly share these images may be putting a target on their back for social engineering attempts, and malicious actors have millions of entries to select attractive targets from."
Thomas Ristenpart, professor of computer science at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science, has received the Association for Computing Machinery Conference on Computer and Communications Security (ACM CCS) Test of Time Award for his influential 2015 paper on privacy risks in machine learning. The paper, "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures," was co-authored with Matt Fredrikson, associate professor at Carnegie Mellon University, and Somesh Jha, professor at the University of Wisconsin-Madison.
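To illustrate the class of attack the paper studies, here is a minimal, hypothetical sketch of model inversion: starting from a blank input, gradient ascent pushes the model's confidence for a target class upward, and the resulting input approximates what that class "looks like" in the training data. This is a simplified white-box variant for brevity (the paper also treats black-box settings), and the toy model, dimensions, and hyperparameters below are placeholders, not the paper's actual setup.

```python
# Sketch of a model inversion attack: reconstruct a representative
# input for a target class using only the classifier's confidence
# scores as the optimization signal. All model details are hypothetical.
import torch
import torch.nn as nn

# Hypothetical victim classifier. In the paper's setting this was a
# facial recognition model whose training images are the secret.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

target_class = 3                             # class whose training data we try to recover
x = torch.zeros(1, 64, requires_grad=True)   # start from a blank input
optimizer = torch.optim.SGD([x], lr=0.1)

for _ in range(500):
    optimizer.zero_grad()
    confidence = torch.softmax(model(x), dim=1)[0, target_class]
    # Maximizing confidence in the target class leaks information
    # about the training examples behind that class.
    loss = -torch.log(confidence)
    loss.backward()
    optimizer.step()

# x now approximates an input the model strongly associates with the
# target class -- in the paper's experiments, a recognizable face.
```

The paper's titular countermeasure follows directly from this sketch: coarsening or withholding the confidence scores degrades the gradient signal the attacker depends on.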
Sensitive data loss incidents can have reputational, financial, legal, and regulatory consequences. CISOs need to ensure their data leakage defenses and best practices are in place.