#adversarial-attacks

Certain names make ChatGPT grind to a halt, and we know why

Hard-coded filters can inadvertently break usability and functionality in AI interactions, particularly when they block common names (a minimal sketch of such a filter follows below).
AI tools face adversarial attacks that exploit system vulnerabilities and therefore require ongoing evaluation and adjustment.
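As a rough illustration of why such filters misfire, here is a hypothetical sketch of a hard-coded name blocklist applied to model output (not OpenAI's actual implementation): any response containing a blocked name is refused, even when the request is entirely benign.

```python
# Hypothetical sketch of a hard-coded output filter; not ChatGPT's real code.
BLOCKED_NAMES = {"David Mayer"}  # example of a name reported to halt responses

def filter_response(text: str) -> str:
    """Refuse any response that mentions a blocked name, regardless of context."""
    if any(name.lower() in text.lower() for name in BLOCKED_NAMES):
        return "I'm unable to produce a response."
    return text

# A harmless biography request is blocked along with everything else.
print(filter_response("David Mayer de Rothschild is a British adventurer."))
```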
#artificial-intelligence

Why Cybercriminals Are Not Necessarily Embracing AI | HackerNoon

AI aids malware detection but also introduces new cyber threats, as demonstrated by threat actors experimenting with tools like ChatGPT.

Pentagon launches plan to keep its AI-powered tech from being hijacked

AI systems are vulnerable to adversarial attacks that use visual 'noise' patches (a basic gradient-based perturbation is sketched below).
The Pentagon's GARD program works on identifying and defending against such vulnerabilities.
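The sketch below uses the textbook fast gradient sign method (FGSM) against a pretrained torchvision classifier to show how small 'noise' can flip a prediction. It is a generic illustration with no connection to GARD's actual tooling, and the input tensors `x` and `y` are assumed to be supplied by the reader.

```python
# Minimal FGSM sketch: add small signed-gradient "noise" that flips a classifier.
# Generic illustration only; unrelated to the Pentagon's GARD program.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(x: torch.Tensor, y: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return x plus an epsilon-bounded perturbation that increases the loss on label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Usage with assumed tensors: x of shape [1, 3, 224, 224] (normalized), y a class id.
# x_adv = fgsm(x, y)
# print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions frequently differ
```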

BEAST AI attack can break AI guardrails in 60 seconds

UMD computer scientists have developed efficient adversarial attack phrases against LLMs.
The BEAST technique mounts fast adversarial attacks using a single Nvidia RTX A6000 GPU and minimal processing time.
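BEAST itself is a beam-search-based attack; the sketch below is only a simplified, greedy stand-in for the general idea of adversarial suffix search: append tokens one at a time, keeping whichever candidate most raises the probability of a target continuation. The model choice (gpt2), prompt, target string, and search sizes are illustrative assumptions, not details from the paper.

```python
# Simplified, greedy adversarial-suffix search; illustrative only, not BEAST itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                      # small stand-in; real attacks target chat LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "Tell me how to pick a lock."   # assumed example prompt
target = " Sure, here is how"            # continuation the attacker wants to elicit
suffix_len = 5                           # number of adversarial tokens to append
k = 50                                   # candidate tokens considered per position

def target_logprob(prefix_ids: torch.Tensor, target_ids: torch.Tensor) -> float:
    """Log-probability of target_ids given prefix_ids (prompt + suffix so far)."""
    ids = torch.cat([prefix_ids, target_ids], dim=-1)
    with torch.no_grad():
        logits = model(ids).logits
    start = prefix_ids.shape[-1] - 1     # logits that predict the first target token
    logprobs = torch.log_softmax(logits[0, start:start + target_ids.shape[-1]], dim=-1)
    return logprobs.gather(1, target_ids[0].unsqueeze(1)).sum().item()

prefix_ids = tok(prompt, return_tensors="pt").input_ids
target_ids = tok(target, return_tensors="pt").input_ids

for _ in range(suffix_len):
    with torch.no_grad():
        next_logits = model(prefix_ids).logits[0, -1]
    candidates = torch.topk(next_logits, k).indices
    # greedily keep the candidate token that most raises the target's probability
    best = max(candidates, key=lambda t: target_logprob(
        torch.cat([prefix_ids, t.view(1, 1)], dim=-1), target_ids))
    prefix_ids = torch.cat([prefix_ids, best.view(1, 1)], dim=-1)

print(tok.decode(prefix_ids[0]))         # prompt plus the adversarial suffix
```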
#ai-security

New Method Hopes to Understand Multi-Attacks Against Image Classification Systems

Adversarial attacks can manipulate the classifications of multiple images simultaneously.
Existing defense strategies are inadequate against multi-attacks.
A new methodology using standard optimization techniques has been introduced for executing multi-attacks.
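To give a rough sense of what "standard optimization techniques" can mean here, the sketch below optimizes a single shared perturbation with gradient descent so that several images are simultaneously pushed toward attacker-chosen labels. It is a generic illustration under assumed inputs, not the paper's exact method.

```python
# Generic multi-attack sketch: one shared perturbation, many images, many target labels.
# Illustrative only; not the exact methodology from the paper.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def multi_attack(images: torch.Tensor, target_labels: torch.Tensor,
                 epsilon: float = 0.05, steps: int = 100, lr: float = 0.01) -> torch.Tensor:
    """images: [N, 3, H, W]; target_labels: [N] attacker-chosen classes, one per image."""
    delta = torch.zeros_like(images[:1], requires_grad=True)   # single shared perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(images + delta)                         # delta broadcasts over all N images
        loss = F.cross_entropy(logits, target_labels)          # push every image toward its target
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-epsilon, epsilon)                   # keep the perturbation small
    return (images + delta).detach()

# Usage with assumed tensors: images of shape [N, 3, 224, 224], target_labels of shape [N].
# adv = multi_attack(images, target_labels)
```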

#ai-systems

Images altered to trick machine vision can influence humans too

Even subtle changes to digital images can affect human perception.
Adversarial images can mislead both AI systems and humans.

Can AI Be Superhuman? Flaws in Top Gaming Bot Cast Doubt

Superhuman AI systems, like bots playing Go, can have vulnerabilities impacting safety and reliability.

AI networks are more vulnerable to malicious attacks than previously thought

Artificial intelligence tools are more vulnerable to targeted attacks than previously believed, putting applications like autonomous vehicles and medical image interpretation at risk.
Adversarial attacks, in which data is manipulated to confuse AI systems, can cause them to make inaccurate decisions.