Former NSA chief Mike Rogers emphasizes the importance of embedding safety and security into AI models from the outset rather than adding them later. Speaking on a panel at the Vanderbilt Summit on AI and national security, he drew on lessons from the early days of cybersecurity, arguing that neglecting core design principles like defensibility and resilience when building our hyper-connected world left its systems vulnerable. Insecure AI models carry analogous risks: data leaks, biased decision-making, and potentially severe consequences in critical fields such as healthcare. Planning for these issues ahead of time is essential.
AI engineers should take a lesson from the early days of cybersecurity and bake safety and security into their models during development, rather than trying to bolt them on after the fact.
"So when we created this hyper-connected, highly networked world ... we did not consider defensibility, redundancy, and resilience as core design characteristics," Rogers said.
"Fifty years later, we find ourselves in a very different environment ... we just didn't build it into the system," he added.
It's better to plan for and mitigate these flaws now than to try to fix them after the fact.