#ai-security

[ follow ]

Prompt Security fends off dangers from AI programmers

AI-generated code can have vulnerabilities just like human-written code.
Prompt Security provides solutions to secure AI-generated code in organizations.
New features improve visibility of AI-generated code risks.
Balancing security and productivity is essential for organizations using AI tools.
#generative-ai

Generative AI Breakthroughs at ODSC West 2024: 6 Must-Attend Sessions for Business Leaders

Attending the Generative AI X Summit will enhance your understanding of AI integration, security, and value quantification for business growth.

Implement AI without data risks

Educating users is essential for safe AI implementation and data protection.

Secure AI? Dream on, says AI red team

Microsoft's AI Red Team emphasizes that the development of safe AI systems is an ongoing, incomplete process, requiring constant evaluation and adaptation.

Microsoft AI Red Team says security work will never be done

AI security is a continuous challenge as generative models amplify existing risks.
Understanding the specific capabilities and applications of AI systems is critical for effective security.

Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation

Microsoft is taking legal action against hackers exploiting its AI services for harmful content generation.
The company has implemented new safeguards after discovering the breach.
Nation-state groups are using AI tools for malicious intents.

Security in Generative AI Infrastructure Is of Critical Importance | HackerNoon

Security in AI is essential and should be integrated throughout the infrastructure.

Generative AI Breakthroughs at ODSC West 2024: 6 Must-Attend Sessions for Business Leaders

Attending the Generative AI X Summit will enhance your understanding of AI integration, security, and value quantification for business growth.

Implement AI without data risks

Educating users is essential for safe AI implementation and data protection.

Secure AI? Dream on, says AI red team

Microsoft's AI Red Team emphasizes that the development of safe AI systems is an ongoing, incomplete process, requiring constant evaluation and adaptation.

Microsoft AI Red Team says security work will never be done

AI security is a continuous challenge as generative models amplify existing risks.
Understanding the specific capabilities and applications of AI systems is critical for effective security.

Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation

Microsoft is taking legal action against hackers exploiting its AI services for harmful content generation.
The company has implemented new safeguards after discovering the breach.
Nation-state groups are using AI tools for malicious intents.

Security in Generative AI Infrastructure Is of Critical Importance | HackerNoon

Security in AI is essential and should be integrated throughout the infrastructure.
moregenerative-ai
#cybersecurity

Apple will pay you up to $1 million if you can hack into Apple Intelligence servers

Apple launches a bug bounty for testing its Private Cloud Compute's security, offering up to $1 million for successful hacks.

Red teaming large language models: Enterprise security in the AI era

Red teaming AI models is essential to identify vulnerabilities and to stay ahead of evolving AI security threats.

HackerOne: 48% of Security Professionals Believe AI Is Risky

AI poses significant security risks, necessitating reassessment of strategies.
External reviews are crucial for identifying AI-related safety issues.

Over half of tech leaders cite phishing as a top security concern

There is a significant skills gap in AI and cloud security, posing risks as organizations adopt new technologies.
Ongoing education and training are critical for security professionals to adapt to evolving threats.

Microsoft's AI Can Be Turned Into an Automated Phishing Machine

AI systems like Copilot can be manipulated by hackers through email hijacking and data poisoning, posing significant security risks.

3 ways AI will transform security in 2025

AI evolution has transformed from simple pattern recognition to complex variable outputs, enhancing user interaction but also introducing significant security challenges.

Apple will pay you up to $1 million if you can hack into Apple Intelligence servers

Apple launches a bug bounty for testing its Private Cloud Compute's security, offering up to $1 million for successful hacks.

Red teaming large language models: Enterprise security in the AI era

Red teaming AI models is essential to identify vulnerabilities and to stay ahead of evolving AI security threats.

HackerOne: 48% of Security Professionals Believe AI Is Risky

AI poses significant security risks, necessitating reassessment of strategies.
External reviews are crucial for identifying AI-related safety issues.

Over half of tech leaders cite phishing as a top security concern

There is a significant skills gap in AI and cloud security, posing risks as organizations adopt new technologies.
Ongoing education and training are critical for security professionals to adapt to evolving threats.

Microsoft's AI Can Be Turned Into an Automated Phishing Machine

AI systems like Copilot can be manipulated by hackers through email hijacking and data poisoning, posing significant security risks.

3 ways AI will transform security in 2025

AI evolution has transformed from simple pattern recognition to complex variable outputs, enhancing user interaction but also introducing significant security challenges.
morecybersecurity

Cisco AI Defense enables secure deployment of AI

Cisco's AI Defense addresses security concerns for organizations deploying AI by enhancing safety measures in their investments.
#artificial-intelligence

Beginning the AI Conversation

AI's rapid adoption brings significant risks of exploitation, including misinformation, data breaches, and miscommunication.
Organizations must understand their use of AI and the associated privacy and ethical implications.

Security threats to AI models are giving rise to a new crop of startups

Growing security and privacy concerns accompany AI adoption, necessitating companies to implement safeguards against data leaks and compliance risks.

Gartner: Mitigating security threats in AI agents | Computer Weekly

AI agents offer transformative benefits but introduce significant security risks that organizations must proactively address.

CyberArk clarifies AI models with FuzzyAI

CyberArk's FuzzyAI framework enhances security for AI models by identifying vulnerabilities and addressing potential threats through systematic testing.

Beginning the AI Conversation

AI's rapid adoption brings significant risks of exploitation, including misinformation, data breaches, and miscommunication.
Organizations must understand their use of AI and the associated privacy and ethical implications.

Security threats to AI models are giving rise to a new crop of startups

Growing security and privacy concerns accompany AI adoption, necessitating companies to implement safeguards against data leaks and compliance risks.

Gartner: Mitigating security threats in AI agents | Computer Weekly

AI agents offer transformative benefits but introduce significant security risks that organizations must proactively address.

CyberArk clarifies AI models with FuzzyAI

CyberArk's FuzzyAI framework enhances security for AI models by identifying vulnerabilities and addressing potential threats through systematic testing.
moreartificial-intelligence
#data-protection

Noma is building tools to spot security issues with AI apps | TechCrunch

The complexity of AI applications is increasing organizations' susceptibility to cyber threats, raising concerns among IT leaders.

Protecting your data in the age of AI

Organizations must balance leveraging AI's transformative power with the critical necessity of protecting data integrity.
Data leakage in AI systems can expose sensitive information, requiring robust governance strategies to mitigate risks.

Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Over three dozen security vulnerabilities exist in open-source AI/ML models, posing risks of remote code execution and data theft.
Severe flaws have been discovered in popular AI models like Lunary, ChuanhuChatGPT, and LocalAI.

Product Review: How Reco Discovers Shadow AI in SaaS

Shadow AI poses significant security risks as employees use unauthorized AI tools without IT knowledge, risking data leaks and system vulnerabilities.

Oktane 2024: agents, security and the fight against SaaS chaos

Okta aims to standardize application security building blocks to eliminate breaches caused by stolen credentials.

Noma is building tools to spot security issues with AI apps | TechCrunch

The complexity of AI applications is increasing organizations' susceptibility to cyber threats, raising concerns among IT leaders.

Protecting your data in the age of AI

Organizations must balance leveraging AI's transformative power with the critical necessity of protecting data integrity.
Data leakage in AI systems can expose sensitive information, requiring robust governance strategies to mitigate risks.

Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Over three dozen security vulnerabilities exist in open-source AI/ML models, posing risks of remote code execution and data theft.
Severe flaws have been discovered in popular AI models like Lunary, ChuanhuChatGPT, and LocalAI.

Product Review: How Reco Discovers Shadow AI in SaaS

Shadow AI poses significant security risks as employees use unauthorized AI tools without IT knowledge, risking data leaks and system vulnerabilities.

Oktane 2024: agents, security and the fight against SaaS chaos

Okta aims to standardize application security building blocks to eliminate breaches caused by stolen credentials.
moredata-protection

The vital role of red teaming in safeguarding AI systems and data

Red teaming in AI focuses on safeguarding against undesired outputs and security vulnerabilities to protect AI systems.
Engaging AI security researchers is essential for effectively identifying weaknesses in AI deployments.

ChatGPT search tool vulnerable to manipulation and deception, tests show

The ChatGPT search tool is vulnerable to manipulation and may return biased or malicious content based on hidden instructions.

The HackerNoon Newsletter: Why Does ETH 3.0 Need Lumozs ZK Computing Network? (12/22/2024) | HackerNoon

BadGPT-4o unveils vulnerabilities in AI models by removing safety measures.
Mailbird is now available for Mac, enhancing accessibility for email users.

The Python AI library hack that didn't hack Python

Python is the fastest-growing and most popular programming language for 2024.

Britain boffins gear up for AI warfare with Russia

The UK government has launched a Laboratory for AI Security Research to bolster defenses against AI-related cyber threats, particularly from Russia.
#apple

Apple defines what we should expect from cloud-based AI security

Apple's new cloud-based AI system prioritizes security and invites research, setting a standard for others to follow in data protection.

Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security

Apple's Private Cloud Compute platform enables researchers to verify its privacy and security measures, promoting transparency in AI.

Apple AI leaves Elon Musk bitter

Elon Musk threatened to ban all Apple devices from his companies if Apple integrates OpenAI due to security concerns.

Apple defines what we should expect from cloud-based AI security

Apple's new cloud-based AI system prioritizes security and invites research, setting a standard for others to follow in data protection.

Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security

Apple's Private Cloud Compute platform enables researchers to verify its privacy and security measures, promoting transparency in AI.

Apple AI leaves Elon Musk bitter

Elon Musk threatened to ban all Apple devices from his companies if Apple integrates OpenAI due to security concerns.
moreapple
#ibm

New IBM solution advances security in AI and quantum era

IBM's Guardium Data Security Center offers a comprehensive approach to data security for hybrid cloud infrastructures and AI environments.

Palo Alto closes deal to bite on IBM QRadar security

Palo Alto Networks acquires IBM's QRadar SaaS to enhance its AI security platform.

New IBM solution advances security in AI and quantum era

IBM's Guardium Data Security Center offers a comprehensive approach to data security for hybrid cloud infrastructures and AI environments.

Palo Alto closes deal to bite on IBM QRadar security

Palo Alto Networks acquires IBM's QRadar SaaS to enhance its AI security platform.
moreibm

Research uncovers new attack method, security leaders share insights

The ConfusedPilot attack may manipulate RAG AI systems, resulting in misinformation and impaired decision-making processes for organizations.
#model-performance

Increased LLM Vulnerabilities from Fine-tuning and Quantization: Appendix | HackerNoon

Guardrails significantly enhance the stability and security of AI models, providing resistance against jailbreak attempts.

X's Grok AI is great - if you want to know how to make drugs

Grok AI model is susceptible to jailbreaking and can provide detailed instructions on illegal activities.
Some AI models lack filters to prevent the generation of dangerous or illegal content.

Increased LLM Vulnerabilities from Fine-tuning and Quantization: Appendix | HackerNoon

Guardrails significantly enhance the stability and security of AI models, providing resistance against jailbreak attempts.

X's Grok AI is great - if you want to know how to make drugs

Grok AI model is susceptible to jailbreaking and can provide detailed instructions on illegal activities.
Some AI models lack filters to prevent the generation of dangerous or illegal content.
moremodel-performance

More than one-third of tech professionals report AI skills shortage

There is a significant skills gap in AI security and cloud security within technology teams.

Enterprises beware, your LLM servers could be exposing sensitive data

Public AI platforms, like vector databases and LLM tools, may compromise corporate data security through vulnerabilities and potential data exposure.

Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

A vulnerability in Microsoft 365 Copilot allowed sensitive user data theft via a technique called ASCII smuggling, which has since been patched.

Microsoft's AI Can Be Turned Into an Automated Phishing Machine

AI systems are vulnerable to attacks through email hijacking and poisoned databases, potentially leading to manipulative actions by hackers.
Security researchers stress the risks of allowing external data into AI systems, emphasizing the importance of prevention and monitoring.
Microsoft recognizes and collaborates on identifying AI vulnerabilities, highlighting the need for security measures to prevent post-compromise abuse.

Protect AI Raises $60M in Series B Funding to Bolster AI Security Leadership

Protect AI secures $60 million Series B funding to enhance AI security, reaching a total of $108.5 million raised.
#cosai

The biggest names in AI have teamed up to promote AI security

Big names in AI form Coalition for Secure AI (CoSAI) to address fragmented landscape of AI security.

Transparency is "vital" in big tech's new coalition on AI safety, experts suggest

Tech companies form CoSAI for AI security.

The biggest names in AI have teamed up to promote AI security

Big names in AI form Coalition for Secure AI (CoSAI) to address fragmented landscape of AI security.

Transparency is "vital" in big tech's new coalition on AI safety, experts suggest

Tech companies form CoSAI for AI security.
morecosai

Google Establishes New Industry Group Focused on Secure AI Development

Google establishes Coalition for Secure AI (CoSAI) to enhance AI security measures.

Westcon-Comstor and Vector AI expand European distribution agreement

Westcon-Comstor expands partnership with AI security provider Vector AI into UK, Ireland, and Nordic markets.

Major Tech CEOs from Google, OpenAI, Microsoft, and More Join Federal AI Safety Panel

Government partners with tech leaders to enhance national AI security.
#microsoft

It's dangerously easy to 'jailbreak' AI models so they'll tell you how to build Molotov cocktails, or worse

A jailbreaking method named Skeleton Key can make AI models disclose harmful information by bypassing guardrails.
Microsoft recommends enhancing guardrails and monitoring AI systems to counteract the Skeleton Key jailbreaking technique.

Microsoft Deploys Powerful New AI Completely Disconnected From the Internet

Microsoft develops air-gapped AI model for US intelligence agencies to analyze top-secret data safely.

It's dangerously easy to 'jailbreak' AI models so they'll tell you how to build Molotov cocktails, or worse

A jailbreaking method named Skeleton Key can make AI models disclose harmful information by bypassing guardrails.
Microsoft recommends enhancing guardrails and monitoring AI systems to counteract the Skeleton Key jailbreaking technique.

Microsoft Deploys Powerful New AI Completely Disconnected From the Internet

Microsoft develops air-gapped AI model for US intelligence agencies to analyze top-secret data safely.
moremicrosoft

AI security bill aims to prevent safety breaches of AI models

A new bill, the Secure Artificial Intelligence Act, aims to establish a database to track AI system breaches and focus on counter-AI techniques.

AI-powered martech releases and news: April 25 | MarTech

AI can be misused, as shown by a teacher creating a fake offensive audio recording.
Educators need to be cautious when utilizing AI tools to avoid potential misuse and consequences.

"Social engineering" hacks work on chatbots, too

Over 2,200 hackers participated in a challenge testing the security of AI models.
Approximately 15.5% of conversations successfully manipulated AI models to break rules or share sensitive data.

Protect AI expands efforts to secure LLMs with open source acquisition

Protect AI has acquired Laiyer AI to enhance the capabilities of its AI security platform, Radar, in order to better protect against risks from large language models (LLMs).
Protect AI raised $35 million in funding in July 2023 to expand its AI security efforts.

New Method Hopes to Understand Multi-Attacks Against Image Classification Systems

Adversarial attacks can manipulate the classifications of multiple images simultaneously
Existing defense strategies are inadequate against multi-attacks
[ Load more ]