Researchers Find Elon Musk's New Grok AI Is Extremely Vulnerable to HackingGrok 3 poses serious cybersecurity risks due to its susceptibility to jailbreaks and a new prompt-leaking flaw.
Public sector workers are sweating over AI security threatsA significant number of public sector IT professionals express concerns over AI's security implications, particularly regarding data privacy and compliance.
Zero Day Initiative - Announcing Pwn2Own Berlin and Introducing an AI CategoryPwn2Own 2025 will be hosted at OffensiveCon in Berlin, introducing a new AI category focused on advanced security challenges.The inclusion of AI as a category reflects the growing concerns and interest in the security of AI technologies.
Google Gemini's Long-term Memory Vulnerable to a Kind of Phishing AttackJohann Rehberger demonstrated a prompt injection attack on Google Gemini, exploiting delayed tool invocation to modify its long-term memories.
Venture capital firm Insight Partners faces security breachInsight Partners experienced a cyber incident due to a sophisticated social engineering attack but believes there will be no significant impacts.
Apple will pay you up to $1 million if you can hack into Apple Intelligence serversApple launches a bug bounty for testing its Private Cloud Compute's security, offering up to $1 million for successful hacks.
Researchers Find Elon Musk's New Grok AI Is Extremely Vulnerable to HackingGrok 3 poses serious cybersecurity risks due to its susceptibility to jailbreaks and a new prompt-leaking flaw.
Public sector workers are sweating over AI security threatsA significant number of public sector IT professionals express concerns over AI's security implications, particularly regarding data privacy and compliance.
Zero Day Initiative - Announcing Pwn2Own Berlin and Introducing an AI CategoryPwn2Own 2025 will be hosted at OffensiveCon in Berlin, introducing a new AI category focused on advanced security challenges.The inclusion of AI as a category reflects the growing concerns and interest in the security of AI technologies.
Google Gemini's Long-term Memory Vulnerable to a Kind of Phishing AttackJohann Rehberger demonstrated a prompt injection attack on Google Gemini, exploiting delayed tool invocation to modify its long-term memories.
Venture capital firm Insight Partners faces security breachInsight Partners experienced a cyber incident due to a sophisticated social engineering attack but believes there will be no significant impacts.
Apple will pay you up to $1 million if you can hack into Apple Intelligence serversApple launches a bug bounty for testing its Private Cloud Compute's security, offering up to $1 million for successful hacks.
Noma is building tools to spot security issues with AI apps | TechCrunchThe complexity of AI applications is increasing organizations' susceptibility to cyber threats, raising concerns among IT leaders.
Protecting your data in the age of AIOrganizations must balance leveraging AI's transformative power with the critical necessity of protecting data integrity.Data leakage in AI systems can expose sensitive information, requiring robust governance strategies to mitigate risks.
Researchers Uncover Vulnerabilities in Open-Source AI and ML ModelsOver three dozen security vulnerabilities exist in open-source AI/ML models, posing risks of remote code execution and data theft.Severe flaws have been discovered in popular AI models like Lunary, ChuanhuChatGPT, and LocalAI.
AI is putting your cloud workloads at riskAI cloud workloads have a higher incidence of critical vulnerabilities than traditional workloads.
SentinelOne's AI-SPM shines a light on AI threats lurking in the shadowsSentinelOne aims to secure AI applications with Security Posture Management (SPM) to address vulnerabilities and misconfigurations in cloud environments.
Anthropic unveils new framework to block harmful content from AI modelsConstitutional Classifiers effectively guard AI models against jailbreaks while minimizing risks associated with AI misuse.AI security paradigms are adapting to the rapid advancement and adoption of AI technologies.
Noma is building tools to spot security issues with AI apps | TechCrunchThe complexity of AI applications is increasing organizations' susceptibility to cyber threats, raising concerns among IT leaders.
Protecting your data in the age of AIOrganizations must balance leveraging AI's transformative power with the critical necessity of protecting data integrity.Data leakage in AI systems can expose sensitive information, requiring robust governance strategies to mitigate risks.
Researchers Uncover Vulnerabilities in Open-Source AI and ML ModelsOver three dozen security vulnerabilities exist in open-source AI/ML models, posing risks of remote code execution and data theft.Severe flaws have been discovered in popular AI models like Lunary, ChuanhuChatGPT, and LocalAI.
AI is putting your cloud workloads at riskAI cloud workloads have a higher incidence of critical vulnerabilities than traditional workloads.
SentinelOne's AI-SPM shines a light on AI threats lurking in the shadowsSentinelOne aims to secure AI applications with Security Posture Management (SPM) to address vulnerabilities and misconfigurations in cloud environments.
Anthropic unveils new framework to block harmful content from AI modelsConstitutional Classifiers effectively guard AI models against jailbreaks while minimizing risks associated with AI misuse.AI security paradigms are adapting to the rapid advancement and adoption of AI technologies.
Kong AI Gateway 3.10 helps secure AI deploymentsKong's AI RAG Injector addresses LLM hallucinations by integrating data from a vector database, improving security and compliance.
Secure AI? Dream on, says AI red teamMicrosoft's AI Red Team emphasizes that the development of safe AI systems is an ongoing, incomplete process, requiring constant evaluation and adaptation.
Microsoft AI Red Team says security work will never be doneAI security is a continuous challenge as generative models amplify existing risks.Understanding the specific capabilities and applications of AI systems is critical for effective security.
Microsoft launches new security AI agents to help overworked cyber professionalsMicrosoft is enhancing its Security Copilot with new AI agents to support IT teams facing rising security threats.
Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content CreationMicrosoft is taking legal action against hackers exploiting its AI services for harmful content generation.The company has implemented new safeguards after discovering the breach.Nation-state groups are using AI tools for malicious intents.
Secure AI? Dream on, says AI red teamMicrosoft's AI Red Team emphasizes that the development of safe AI systems is an ongoing, incomplete process, requiring constant evaluation and adaptation.
Microsoft AI Red Team says security work will never be doneAI security is a continuous challenge as generative models amplify existing risks.Understanding the specific capabilities and applications of AI systems is critical for effective security.
Microsoft launches new security AI agents to help overworked cyber professionalsMicrosoft is enhancing its Security Copilot with new AI agents to support IT teams facing rising security threats.
Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content CreationMicrosoft is taking legal action against hackers exploiting its AI services for harmful content generation.The company has implemented new safeguards after discovering the breach.Nation-state groups are using AI tools for malicious intents.
The HackerNoon Newsletter: Is Your AI-Generated Code Really Secure? (3/20/2025) | HackerNoonAI tools are transforming developers' workflows, enhancing productivity and security concerns.Political and legal challenges in Argentina are intensified by a significant crypto scandal.
Foreign Hackers Are Using Google's Gemini in Attacks on the USDeepSeek's rise raises concerns about US AI dominance and security vulnerabilities, particularly in its data handling and safety measures.
DeepSeek LLM jailbreaked to develop malwareAI models can potentially be misused for creating malicious software if prompt techniques are employed effectively.
THN Weekly Recap: Top Cybersecurity Threats, Tools and Tips [27 February]The rise of AI tools like DeepSeek presents significant security challenges that necessitate rigorous scrutiny and regulation.
Foreign Hackers Are Using Google's Gemini in Attacks on the USDeepSeek's rise raises concerns about US AI dominance and security vulnerabilities, particularly in its data handling and safety measures.
DeepSeek LLM jailbreaked to develop malwareAI models can potentially be misused for creating malicious software if prompt techniques are employed effectively.
THN Weekly Recap: Top Cybersecurity Threats, Tools and Tips [27 February]The rise of AI tools like DeepSeek presents significant security challenges that necessitate rigorous scrutiny and regulation.
Anthropic CEO says spies are after $100M AI secrets in a 'few lines of code' | TechCrunchDario Amodei warns about algorithmic espionage by China targeting U.S. AI firms, urging government intervention.
Anthropic CEO warns of Chinese spying on AI companiesDario Amodei warns of Chinese espionage risks for U.S. AI firms, urging immediate government intervention.
Anthropic CEO says spies are after $100M AI secrets in a 'few lines of code' | TechCrunchDario Amodei warns about algorithmic espionage by China targeting U.S. AI firms, urging government intervention.
Anthropic CEO warns of Chinese spying on AI companiesDario Amodei warns of Chinese espionage risks for U.S. AI firms, urging immediate government intervention.
Botguard rebrands to Blackwall, lands 45M to scale AI-powered security: Know more - Silicon CanalsBlackwall raises €45M to expand AI-based security solutions for SMBs and plans significant growth in the US and APAC markets.
DryRun Security Raises $8.7M Seed Round, Unveils Natural Language Code Policies for AppSec TeamsDryRun Security secures $8.7 million funding to enhance application security with AI-driven solutions.
Botguard rebrands to Blackwall, lands 45M to scale AI-powered security: Know more - Silicon CanalsBlackwall raises €45M to expand AI-based security solutions for SMBs and plans significant growth in the US and APAC markets.
DryRun Security Raises $8.7M Seed Round, Unveils Natural Language Code Policies for AppSec TeamsDryRun Security secures $8.7 million funding to enhance application security with AI-driven solutions.
12,000 API keys and passwords were found in a popular AI training dataset - experts say the issue is down to poor identity managementThe exposure of nearly 12,000 valid secrets in AI training datasets reveals significant vulnerabilities in identity management practices.
Britain boffins gear up for AI warfare with RussiaThe UK government has launched a Laboratory for AI Security Research to bolster defenses against AI-related cyber threats, particularly from Russia.
UK AI Safety Institute rebranded AI Security InstituteThe UK government rebranded its AI Safety Institute to AI Security Institute, reflecting a shift towards addressing AI-related crime and security risks.
Britain boffins gear up for AI warfare with RussiaThe UK government has launched a Laboratory for AI Security Research to bolster defenses against AI-related cyber threats, particularly from Russia.
UK AI Safety Institute rebranded AI Security InstituteThe UK government rebranded its AI Safety Institute to AI Security Institute, reflecting a shift towards addressing AI-related crime and security risks.
New hack uses prompt injection to corrupt Gemini's long-term memoryIndirect prompt injection poses a significant security risk, allowing chatbots to execute malicious instructions despite developer safeguards.
DOGE can't use student loan data to dismantle the Education Dept., lawsuit saysDOGE's use of AI in financial oversight raises concerns about data security and decision-making.
US Navy email warns against using AI apps like China's DeepSeekThe US Navy has warned against using the Chinese AI app DeepSeek due to security concerns.A memo sent to Navy personnel is a reminder of existing policy against open-source AI tools.
DeepSeek-R1 more readily generates dangerous content than other large language models | Computer WeeklyDeepSeek poses serious risks due to its high propensity for generating biased and harmful content.
DeepSeek is unsafe, and it's got nothing to do with ChinaDeepSeek chatbot is highly insecure, scoring 100% in accepting malicious prompts, raising concerns about its potential misuse.
D.C. Lawmakers Take Aim at DeepSeekDeepSeek's app may be banned from U.S. government devices due to security threats, similar to previous actions against TikTok.
US Navy email warns against using AI apps like China's DeepSeekThe US Navy has warned against using the Chinese AI app DeepSeek due to security concerns.A memo sent to Navy personnel is a reminder of existing policy against open-source AI tools.
DeepSeek-R1 more readily generates dangerous content than other large language models | Computer WeeklyDeepSeek poses serious risks due to its high propensity for generating biased and harmful content.
DeepSeek is unsafe, and it's got nothing to do with ChinaDeepSeek chatbot is highly insecure, scoring 100% in accepting malicious prompts, raising concerns about its potential misuse.
D.C. Lawmakers Take Aim at DeepSeekDeepSeek's app may be banned from U.S. government devices due to security threats, similar to previous actions against TikTok.
Top 5 AI-Powered Social Engineering AttacksAI is enhancing social engineering tactics by targeting human vulnerabilities on a larger scale.
Research uncovers new attack method, security leaders share insightsThe ConfusedPilot attack may manipulate RAG AI systems, resulting in misinformation and impaired decision-making processes for organizations.
Top 5 AI-Powered Social Engineering AttacksAI is enhancing social engineering tactics by targeting human vulnerabilities on a larger scale.
Research uncovers new attack method, security leaders share insightsThe ConfusedPilot attack may manipulate RAG AI systems, resulting in misinformation and impaired decision-making processes for organizations.
OpenAI announces ChatGPT Enterprise variant for US gov'tOpenAI launched ChatGPT Gov to support US government efforts in AI while ensuring compliance with security and privacy standards.
ChatGPT search tool vulnerable to manipulation and deception, tests showThe ChatGPT search tool is vulnerable to manipulation and may return biased or malicious content based on hidden instructions.
OpenAI announces ChatGPT Enterprise variant for US gov'tOpenAI launched ChatGPT Gov to support US government efforts in AI while ensuring compliance with security and privacy standards.
ChatGPT search tool vulnerable to manipulation and deception, tests showThe ChatGPT search tool is vulnerable to manipulation and may return biased or malicious content based on hidden instructions.
Prompt Security fends off dangers from AI programmersAI-generated code can have vulnerabilities just like human-written code.Prompt Security provides solutions to secure AI-generated code in organizations.New features improve visibility of AI-generated code risks.Balancing security and productivity is essential for organizations using AI tools.
Cisco AI Defense enables secure deployment of AICisco's AI Defense addresses security concerns for organizations deploying AI by enhancing safety measures in their investments.
Beginning the AI ConversationAI's rapid adoption brings significant risks of exploitation, including misinformation, data breaches, and miscommunication.Organizations must understand their use of AI and the associated privacy and ethical implications.
Security threats to AI models are giving rise to a new crop of startupsGrowing security and privacy concerns accompany AI adoption, necessitating companies to implement safeguards against data leaks and compliance risks.
Gartner: Mitigating security threats in AI agents | Computer WeeklyAI agents offer transformative benefits but introduce significant security risks that organizations must proactively address.
CyberArk clarifies AI models with FuzzyAICyberArk's FuzzyAI framework enhances security for AI models by identifying vulnerabilities and addressing potential threats through systematic testing.
Beginning the AI ConversationAI's rapid adoption brings significant risks of exploitation, including misinformation, data breaches, and miscommunication.Organizations must understand their use of AI and the associated privacy and ethical implications.
Security threats to AI models are giving rise to a new crop of startupsGrowing security and privacy concerns accompany AI adoption, necessitating companies to implement safeguards against data leaks and compliance risks.
Gartner: Mitigating security threats in AI agents | Computer WeeklyAI agents offer transformative benefits but introduce significant security risks that organizations must proactively address.
CyberArk clarifies AI models with FuzzyAICyberArk's FuzzyAI framework enhances security for AI models by identifying vulnerabilities and addressing potential threats through systematic testing.
The vital role of red teaming in safeguarding AI systems and dataRed teaming in AI focuses on safeguarding against undesired outputs and security vulnerabilities to protect AI systems.Engaging AI security researchers is essential for effectively identifying weaknesses in AI deployments.
The HackerNoon Newsletter: Why Does ETH 3.0 Need Lumozs ZK Computing Network? (12/22/2024) | HackerNoonBadGPT-4o unveils vulnerabilities in AI models by removing safety measures.Mailbird is now available for Mac, enhancing accessibility for email users.
The Python AI library hack that didn't hack PythonPython is the fastest-growing and most popular programming language for 2024.
Generative AI Breakthroughs at ODSC West 2024: 6 Must-Attend Sessions for Business LeadersAttending the Generative AI X Summit will enhance your understanding of AI integration, security, and value quantification for business growth.
Implement AI without data risksEducating users is essential for safe AI implementation and data protection.
Security in Generative AI Infrastructure Is of Critical Importance | HackerNoonSecurity in AI is essential and should be integrated throughout the infrastructure.
Can you read your manager's emails via Copilot?Microsoft is addressing security issues with its AI Copilot assistant to prevent unauthorized access to sensitive data.
How CISOs are navigating the "slippery" AI data protection problemEmployees are unintentionally risking company data security by using generative AI tools without understanding the consequences.
Generative AI Breakthroughs at ODSC West 2024: 6 Must-Attend Sessions for Business LeadersAttending the Generative AI X Summit will enhance your understanding of AI integration, security, and value quantification for business growth.
Implement AI without data risksEducating users is essential for safe AI implementation and data protection.
Security in Generative AI Infrastructure Is of Critical Importance | HackerNoonSecurity in AI is essential and should be integrated throughout the infrastructure.
Can you read your manager's emails via Copilot?Microsoft is addressing security issues with its AI Copilot assistant to prevent unauthorized access to sensitive data.
How CISOs are navigating the "slippery" AI data protection problemEmployees are unintentionally risking company data security by using generative AI tools without understanding the consequences.
Apple defines what we should expect from cloud-based AI securityApple's new cloud-based AI system prioritizes security and invites research, setting a standard for others to follow in data protection.
Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI SecurityApple's Private Cloud Compute platform enables researchers to verify its privacy and security measures, promoting transparency in AI.
Apple defines what we should expect from cloud-based AI securityApple's new cloud-based AI system prioritizes security and invites research, setting a standard for others to follow in data protection.
Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI SecurityApple's Private Cloud Compute platform enables researchers to verify its privacy and security measures, promoting transparency in AI.
New IBM solution advances security in AI and quantum eraIBM's Guardium Data Security Center offers a comprehensive approach to data security for hybrid cloud infrastructures and AI environments.
Palo Alto closes deal to bite on IBM QRadar securityPalo Alto Networks acquires IBM's QRadar SaaS to enhance its AI security platform.
New IBM solution advances security in AI and quantum eraIBM's Guardium Data Security Center offers a comprehensive approach to data security for hybrid cloud infrastructures and AI environments.
Palo Alto closes deal to bite on IBM QRadar securityPalo Alto Networks acquires IBM's QRadar SaaS to enhance its AI security platform.
Increased LLM Vulnerabilities from Fine-tuning and Quantization: Appendix | HackerNoonGuardrails significantly enhance the stability and security of AI models, providing resistance against jailbreak attempts.
More than one-third of tech professionals report AI skills shortageThere is a significant skills gap in AI security and cloud security within technology teams.
Enterprises beware, your LLM servers could be exposing sensitive dataPublic AI platforms, like vector databases and LLM tools, may compromise corporate data security through vulnerabilities and potential data exposure.
Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 CopilotA vulnerability in Microsoft 365 Copilot allowed sensitive user data theft via a technique called ASCII smuggling, which has since been patched.