#llm-security

[ follow ]
fromArs Technica
1 week ago

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data

Microsoft's warning on Tuesday that an experimental AI Agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained? As reported Tuesday, Microsoft introduced Copilot Actions, a new set of "experimental agentic features"
Information security
Artificial intelligence
fromTheregister
1 week ago

EchoGram tokens like '=coffee' flip AI guardrail verdicts

EchoGram uses short tokens (for example =coffee) to bypass LLM guardrails, enabling prompt injection and jailbreaking of model safety filters.
Information security
fromTheregister
2 weeks ago

LLM side-channel attack could allow snoops to guess topic

A side-channel attack named Whisper Leak can infer prompt topics from encrypted streaming LLM traffic by analyzing packet size and timing, exposing user communications.
Artificial intelligence
fromSecurityWeek
1 month ago

Will AI-SPM Become the Standard Security Layer for Safe AI Adoption?

AI security posture management continuously detects, evaluates, and remediates AI and LLM security and compliance risks, enabling transparent, governed, and policy-aligned AI use.
Startup companies
fromBusiness Insider
2 months ago

This AI startup helps developers safely and cheaply build on top of LLMs from OpenAI and Anthropic. Read its pitch deck.

Requesty centralizes developer access to multiple AI providers, tightening security, enforcing governance, and optimizing costs across large language models.
#prompt-injection
Information security
fromSecuritymagazine
2 months ago

Generative AI Remains Growing Concern for Organizations

Generative AI adoption is outpacing enterprise security readiness, leaving LLM deployments under-assessed and high-severity vulnerabilities largely unresolved.
fromTheregister
2 months ago

AI can't stop the sprint to adopt hot tech without security

Ollama provides a framework that makes it possible to run large language models locally, on a desktop machine or server. Cisco decided to research it because, in the words of Senior Incident Response Architect Dr. Giannis Tziakouris, Ollama has "gained popularity for its ease of use and local deployment capabilities." Talos researchers used the Shodan scanning tool to find unsecured Ollama servers, and spotted over 1,100, around 20 percent of which are "actively hosting models susceptible to unauthorized access."
Information security
Artificial intelligence
fromTheregister
2 months ago

GitHub engineer: team 'coerced' to put Grok in Copilot

GitHub is adding xAI's Grok Code Fast 1 to Copilot while a whistleblower alleges inadequate security testing and an engineering team under duress.
[ Load more ]