#ai-security

[ follow ]
#cybersecurity

Apple will pay you up to $1 million if you can hack into Apple Intelligence servers

Apple launches a bug bounty for testing its Private Cloud Compute's security, offering up to $1 million for successful hacks.

Red teaming large language models: Enterprise security in the AI era

Red teaming AI models is essential to identify vulnerabilities and to stay ahead of evolving AI security threats.

HackerOne: 48% of Security Professionals Believe AI Is Risky

AI poses significant security risks, necessitating reassessment of strategies.
External reviews are crucial for identifying AI-related safety issues.

Over half of tech leaders cite phishing as a top security concern

There is a significant skills gap in AI and cloud security, posing risks as organizations adopt new technologies.
Ongoing education and training are critical for security professionals to adapt to evolving threats.

Microsoft's AI Can Be Turned Into an Automated Phishing Machine

AI systems like Copilot can be manipulated by hackers through email hijacking and data poisoning, posing significant security risks.

Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Unauthorized JavaScript execution in AI chatbots risks account takeovers via prompt injection attacks.

Apple will pay you up to $1 million if you can hack into Apple Intelligence servers

Apple launches a bug bounty for testing its Private Cloud Compute's security, offering up to $1 million for successful hacks.

Red teaming large language models: Enterprise security in the AI era

Red teaming AI models is essential to identify vulnerabilities and to stay ahead of evolving AI security threats.

HackerOne: 48% of Security Professionals Believe AI Is Risky

AI poses significant security risks, necessitating reassessment of strategies.
External reviews are crucial for identifying AI-related safety issues.

Over half of tech leaders cite phishing as a top security concern

There is a significant skills gap in AI and cloud security, posing risks as organizations adopt new technologies.
Ongoing education and training are critical for security professionals to adapt to evolving threats.

Microsoft's AI Can Be Turned Into an Automated Phishing Machine

AI systems like Copilot can be manipulated by hackers through email hijacking and data poisoning, posing significant security risks.

Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI

Unauthorized JavaScript execution in AI chatbots risks account takeovers via prompt injection attacks.
morecybersecurity
#data-governance

Protecting your data in the age of AI

Organizations must balance leveraging AI's transformative power with the critical necessity of protecting data integrity.
Data leakage in AI systems can expose sensitive information, requiring robust governance strategies to mitigate risks.

Can you read your manager's emails via Copilot?

Microsoft is addressing security issues with its AI Copilot assistant to prevent unauthorized access to sensitive data.

Protecting your data in the age of AI

Organizations must balance leveraging AI's transformative power with the critical necessity of protecting data integrity.
Data leakage in AI systems can expose sensitive information, requiring robust governance strategies to mitigate risks.

Can you read your manager's emails via Copilot?

Microsoft is addressing security issues with its AI Copilot assistant to prevent unauthorized access to sensitive data.
moredata-governance
#artificial-intelligence

Security threats to AI models are giving rise to a new crop of startups

Growing security and privacy concerns accompany AI adoption, necessitating companies to implement safeguards against data leaks and compliance risks.

Gartner: Mitigating security threats in AI agents | Computer Weekly

AI agents offer transformative benefits but introduce significant security risks that organizations must proactively address.

CyberArk clarifies AI models with FuzzyAI

CyberArk's FuzzyAI framework enhances security for AI models by identifying vulnerabilities and addressing potential threats through systematic testing.

Security threats to AI models are giving rise to a new crop of startups

Growing security and privacy concerns accompany AI adoption, necessitating companies to implement safeguards against data leaks and compliance risks.

Gartner: Mitigating security threats in AI agents | Computer Weekly

AI agents offer transformative benefits but introduce significant security risks that organizations must proactively address.

CyberArk clarifies AI models with FuzzyAI

CyberArk's FuzzyAI framework enhances security for AI models by identifying vulnerabilities and addressing potential threats through systematic testing.
moreartificial-intelligence

The Python AI library hack that didn't hack Python

Python is the fastest-growing and most popular programming language for 2024.

Britain boffins gear up for AI warfare with Russia

The UK government has launched a Laboratory for AI Security Research to bolster defenses against AI-related cyber threats, particularly from Russia.
#generative-ai

Generative AI Breakthroughs at ODSC West 2024: 6 Must-Attend Sessions for Business Leaders

Attending the Generative AI X Summit will enhance your understanding of AI integration, security, and value quantification for business growth.

Implement AI without data risks

Educating users is essential for safe AI implementation and data protection.

Security in Generative AI Infrastructure Is of Critical Importance | HackerNoon

Security in AI is essential and should be integrated throughout the infrastructure.

Anthropic looks to fund a new, more comprehensive generation of AI benchmarks | TechCrunch

Anthropic is launching a program to fund the development of new AI benchmarks to evaluate models, focusing on safety and societal impact.

SentinelOne's AI-SPM shines a light on AI threats lurking in the shadows

SentinelOne aims to secure AI applications with Security Posture Management (SPM) to address vulnerabilities and misconfigurations in cloud environments.

Generative AI security tools are a risky enterprise investment - WithSecure wants to change that

WithSecure launches generative AI security tool Luminen to enhance processes and reduce costs for enterprises.

Generative AI Breakthroughs at ODSC West 2024: 6 Must-Attend Sessions for Business Leaders

Attending the Generative AI X Summit will enhance your understanding of AI integration, security, and value quantification for business growth.

Implement AI without data risks

Educating users is essential for safe AI implementation and data protection.

Security in Generative AI Infrastructure Is of Critical Importance | HackerNoon

Security in AI is essential and should be integrated throughout the infrastructure.

Anthropic looks to fund a new, more comprehensive generation of AI benchmarks | TechCrunch

Anthropic is launching a program to fund the development of new AI benchmarks to evaluate models, focusing on safety and societal impact.

SentinelOne's AI-SPM shines a light on AI threats lurking in the shadows

SentinelOne aims to secure AI applications with Security Posture Management (SPM) to address vulnerabilities and misconfigurations in cloud environments.

Generative AI security tools are a risky enterprise investment - WithSecure wants to change that

WithSecure launches generative AI security tool Luminen to enhance processes and reduce costs for enterprises.
moregenerative-ai
#data-protection

Noma is building tools to spot security issues with AI apps | TechCrunch

The complexity of AI applications is increasing organizations' susceptibility to cyber threats, raising concerns among IT leaders.

Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Over three dozen security vulnerabilities exist in open-source AI/ML models, posing risks of remote code execution and data theft.
Severe flaws have been discovered in popular AI models like Lunary, ChuanhuChatGPT, and LocalAI.

Oktane 2024: agents, security and the fight against SaaS chaos

Okta aims to standardize application security building blocks to eliminate breaches caused by stolen credentials.

Noma is building tools to spot security issues with AI apps | TechCrunch

The complexity of AI applications is increasing organizations' susceptibility to cyber threats, raising concerns among IT leaders.

Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

Over three dozen security vulnerabilities exist in open-source AI/ML models, posing risks of remote code execution and data theft.
Severe flaws have been discovered in popular AI models like Lunary, ChuanhuChatGPT, and LocalAI.

Oktane 2024: agents, security and the fight against SaaS chaos

Okta aims to standardize application security building blocks to eliminate breaches caused by stolen credentials.
moredata-protection
#apple

Apple defines what we should expect from cloud-based AI security

Apple's new cloud-based AI system prioritizes security and invites research, setting a standard for others to follow in data protection.

Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security

Apple's Private Cloud Compute platform enables researchers to verify its privacy and security measures, promoting transparency in AI.

Apple AI leaves Elon Musk bitter

Elon Musk threatened to ban all Apple devices from his companies if Apple integrates OpenAI due to security concerns.

Apple defines what we should expect from cloud-based AI security

Apple's new cloud-based AI system prioritizes security and invites research, setting a standard for others to follow in data protection.

Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security

Apple's Private Cloud Compute platform enables researchers to verify its privacy and security measures, promoting transparency in AI.

Apple AI leaves Elon Musk bitter

Elon Musk threatened to ban all Apple devices from his companies if Apple integrates OpenAI due to security concerns.
moreapple
#ibm

New IBM solution advances security in AI and quantum era

IBM's Guardium Data Security Center offers a comprehensive approach to data security for hybrid cloud infrastructures and AI environments.

Palo Alto closes deal to bite on IBM QRadar security

Palo Alto Networks acquires IBM's QRadar SaaS to enhance its AI security platform.

New IBM solution advances security in AI and quantum era

IBM's Guardium Data Security Center offers a comprehensive approach to data security for hybrid cloud infrastructures and AI environments.

Palo Alto closes deal to bite on IBM QRadar security

Palo Alto Networks acquires IBM's QRadar SaaS to enhance its AI security platform.
moreibm

Research uncovers new attack method, security leaders share insights

The ConfusedPilot attack may manipulate RAG AI systems, resulting in misinformation and impaired decision-making processes for organizations.
#model-performance

Increased LLM Vulnerabilities from Fine-tuning and Quantization: Appendix | HackerNoon

Guardrails significantly enhance the stability and security of AI models, providing resistance against jailbreak attempts.

X's Grok AI is great - if you want to know how to make drugs

Grok AI model is susceptible to jailbreaking and can provide detailed instructions on illegal activities.
Some AI models lack filters to prevent the generation of dangerous or illegal content.

Increased LLM Vulnerabilities from Fine-tuning and Quantization: Appendix | HackerNoon

Guardrails significantly enhance the stability and security of AI models, providing resistance against jailbreak attempts.

X's Grok AI is great - if you want to know how to make drugs

Grok AI model is susceptible to jailbreaking and can provide detailed instructions on illegal activities.
Some AI models lack filters to prevent the generation of dangerous or illegal content.
moremodel-performance

More than one-third of tech professionals report AI skills shortage

There is a significant skills gap in AI security and cloud security within technology teams.

Enterprises beware, your LLM servers could be exposing sensitive data

Public AI platforms, like vector databases and LLM tools, may compromise corporate data security through vulnerabilities and potential data exposure.

Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

A vulnerability in Microsoft 365 Copilot allowed sensitive user data theft via a technique called ASCII smuggling, which has since been patched.

Microsoft's AI Can Be Turned Into an Automated Phishing Machine

AI systems are vulnerable to attacks through email hijacking and poisoned databases, potentially leading to manipulative actions by hackers.
Security researchers stress the risks of allowing external data into AI systems, emphasizing the importance of prevention and monitoring.
Microsoft recognizes and collaborates on identifying AI vulnerabilities, highlighting the need for security measures to prevent post-compromise abuse.

Protect AI Raises $60M in Series B Funding to Bolster AI Security Leadership

Protect AI secures $60 million Series B funding to enhance AI security, reaching a total of $108.5 million raised.
#cosai

The biggest names in AI have teamed up to promote AI security

Big names in AI form Coalition for Secure AI (CoSAI) to address fragmented landscape of AI security.

Transparency is "vital" in big tech's new coalition on AI safety, experts suggest

Tech companies form CoSAI for AI security.

The biggest names in AI have teamed up to promote AI security

Big names in AI form Coalition for Secure AI (CoSAI) to address fragmented landscape of AI security.

Transparency is "vital" in big tech's new coalition on AI safety, experts suggest

Tech companies form CoSAI for AI security.
morecosai

Google Establishes New Industry Group Focused on Secure AI Development

Google establishes Coalition for Secure AI (CoSAI) to enhance AI security measures.

Westcon-Comstor and Vector AI expand European distribution agreement

Westcon-Comstor expands partnership with AI security provider Vector AI into UK, Ireland, and Nordic markets.

Major Tech CEOs from Google, OpenAI, Microsoft, and More Join Federal AI Safety Panel

Government partners with tech leaders to enhance national AI security.
#microsoft

It's dangerously easy to 'jailbreak' AI models so they'll tell you how to build Molotov cocktails, or worse

A jailbreaking method named Skeleton Key can make AI models disclose harmful information by bypassing guardrails.
Microsoft recommends enhancing guardrails and monitoring AI systems to counteract the Skeleton Key jailbreaking technique.

Microsoft Deploys Powerful New AI Completely Disconnected From the Internet

Microsoft develops air-gapped AI model for US intelligence agencies to analyze top-secret data safely.

It's dangerously easy to 'jailbreak' AI models so they'll tell you how to build Molotov cocktails, or worse

A jailbreaking method named Skeleton Key can make AI models disclose harmful information by bypassing guardrails.
Microsoft recommends enhancing guardrails and monitoring AI systems to counteract the Skeleton Key jailbreaking technique.

Microsoft Deploys Powerful New AI Completely Disconnected From the Internet

Microsoft develops air-gapped AI model for US intelligence agencies to analyze top-secret data safely.
moremicrosoft

AI security bill aims to prevent safety breaches of AI models

A new bill, the Secure Artificial Intelligence Act, aims to establish a database to track AI system breaches and focus on counter-AI techniques.

AI-powered martech releases and news: April 25 | MarTech

AI can be misused, as shown by a teacher creating a fake offensive audio recording.
Educators need to be cautious when utilizing AI tools to avoid potential misuse and consequences.

"Social engineering" hacks work on chatbots, too

Over 2,200 hackers participated in a challenge testing the security of AI models.
Approximately 15.5% of conversations successfully manipulated AI models to break rules or share sensitive data.

Protect AI expands efforts to secure LLMs with open source acquisition

Protect AI has acquired Laiyer AI to enhance the capabilities of its AI security platform, Radar, in order to better protect against risks from large language models (LLMs).
Protect AI raised $35 million in funding in July 2023 to expand its AI security efforts.
#adversarial-attacks

New Method Hopes to Understand Multi-Attacks Against Image Classification Systems

Adversarial attacks can manipulate the classifications of multiple images simultaneously
Existing defense strategies are inadequate against multi-attacks

New Method Hopes to Understand Multi-Attacks Against Image Classification Systems

Adversarial attacks can manipulate the classifications of multiple images simultaneously.
A new methodology using standard optimization techniques has been introduced for executing multi-attacks.

New Method Hopes to Understand Multi-Attacks Against Image Classification Systems

Adversarial attacks can manipulate the classifications of multiple images simultaneously.
A new methodology using standard optimization techniques has been introduced for executing multi-attacks.

New Method Hopes to Understand Multi-Attacks Against Image Classification Systems

Adversarial attacks can manipulate the classifications of multiple images simultaneously
Existing defense strategies are inadequate against multi-attacks

New Method Hopes to Understand Multi-Attacks Against Image Classification Systems

Adversarial attacks can manipulate the classifications of multiple images simultaneously.
A new methodology using standard optimization techniques has been introduced for executing multi-attacks.

New Method Hopes to Understand Multi-Attacks Against Image Classification Systems

Adversarial attacks can manipulate the classifications of multiple images simultaneously.
A new methodology using standard optimization techniques has been introduced for executing multi-attacks.
moreadversarial-attacks

New Method Hopes to Understand Multi-Attacks Against Image Classification Systems

Adversarial attacks can manipulate the classifications of multiple images simultaneously.
A new methodology using standard optimization techniques has been introduced for executing multi-attacks.

What's included in the EU's new rules on AI and what it means for future regulation

The European Union has released the world's first set of rules for the use of artificial intelligence.
The regulations focus on AI safety, preventing discrimination, and protecting privacy, but may not adequately address practical aspects of AI security.

CISA and NCSC lead efforts to raise AI security standards

The UK's NCSC and US's CISA have published official guidance for securing AI applications, endorsed by 17 other countries.
The guidance aims to ensure that security is inherent in AI's development and that it is not treated as an afterthought.
The guidelines promote a secure-by-design approach and apply to both new and existing AI applications.

CISA and NCSC lead efforts to raise AI security standards

The UK's NCSC and US's CISA have published official guidance for securing AI applications, endorsed by 17 other countries.
The guidance aims to ensure that security is inherent in AI's development and that it is not treated as an afterthought.
The guidelines promote a secure-by-design approach and apply to both new and existing AI applications.

CISA and NCSC lead efforts to raise AI security standards

The UK's NCSC and US's CISA have published official guidance for securing AI applications, endorsed by 17 other countries.
The guidance aims to ensure that security is inherent in AI's development and that it is not treated as an afterthought.
The guidelines promote a secure-by-design approach and apply to both new and existing AI applications.

CISA and NCSC lead efforts to raise AI security standards

The UK's NCSC and US's CISA have published official guidance for securing AI applications, endorsed by 17 other countries.
The guidance aims to ensure that security is inherent in AI's development and that it is not treated as an afterthought.
The guidelines promote a secure-by-design approach and apply to both new and existing AI applications.

CISA and NCSC lead efforts to raise AI security standards

The UK's NCSC and US's CISA have published official guidance for securing AI applications, endorsed by 17 other countries.
The guidance aims to ensure that security is inherent in AI's development and that it is not treated as an afterthought.
The guidelines promote a secure-by-design approach and apply to both new and existing AI applications.
[ Load more ]