
"AI deepfakes are already becoming too convincing to be easily spotted by common sense approaches. Malicious actors are using AI to find vulnerabilities and to make their attacks harder to detect. And AI systems themselves pose security risks. Research by Foundry shows that security and privacy are the most pressing ethical issues around generative AI deployments. Down the road, quantum computing promises immense power and capabilities for businesses, but it will also be used by adversaries, especially to break encryption."
"According to Martin Krumböck, CTO for cybersecurity at T-Systems, security teams can form a clearer view of emerging threats by dividing them into three timescales, or "horizons". "There's always something changing in security," he says. Classical infrastructure security sits in the "here and now" and is an immediate priority. Too many enterprises still have gaps in cloud security and are not yet ready for AI. "We are seeing very quick business adoption of AI," Krumböck explains. "At the same time, people are ignoring the risks.""
Enterprises face evolving cyber threats from multiple emerging technologies, requiring robust security and privacy measures. AI generates convincing deepfakes, helps attackers find vulnerabilities, and introduces risks of its own through compromised training data, prompt injection, and direct model attacks. Generative AI raises pressing ethical concerns around security and privacy. Quantum computing will eventually enable adversaries to break current encryption methods. Laboratory technologies such as DNA-based data storage, cybernetics, and bio-hacking create additional security and data protection challenges. Security teams can categorize threats into three horizons: immediate infrastructure security, near-term rapid AI adoption, and long-term advanced-technology risks, guiding prioritization and preparedness.
Read at Computerworld