#shadow-ai

from TechRepublic
1 week ago

Viral AI Caricatures Highlight Shadow AI Dangers

"While many have been discussing the privacy risks of people following the ChatGPT caricature trend, the prompt reveals something else alarming - people are talking to their LLMs about work," said Josh Davies, principal market strategist at Fortra, in an email to eSecurityPlanet. He added, "If they are not using a sanctioned ChatGPT instance, they may be inputting sensitive work information into a public LLM. Those who publicly share these images may be putting a target on their back for social engineering attempts, and malicious actors have millions of entries to select attractive targets from."
Information security
from Techzine Global
1 week ago

Okta tackles shadow AI with new agent discovery tools

Agent Discovery provides visibility into unauthorized AI agents by detecting OAuth connections and mapping unsanctioned AI tool access and permissions to corporate apps.
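The mechanism described above, detecting OAuth connections and flagging AI clients that hold grants into corporate apps, can be illustrated with a minimal sketch. The grant fields, the SANCTIONED_CLIENTS allow-list, and the AI_CLIENT_HINTS heuristics below are hypothetical stand-ins for whatever an identity provider actually exposes; this is not Okta's API or Agent Discovery itself.

```python
# Minimal sketch of OAuth-grant-based shadow-AI discovery.
# The grant records, allow-list, and name heuristics are illustrative
# placeholders, not Okta's actual data model or API.

from dataclasses import dataclass

# Hypothetical allow-list of sanctioned AI clients (client IDs the org approved).
SANCTIONED_CLIENTS = {"chatgpt-enterprise", "copilot-m365"}

# Hypothetical name fragments used to classify an OAuth client as an AI tool.
AI_CLIENT_HINTS = ("chatgpt", "openai", "anthropic", "gemini", "perplexity", "copilot")

@dataclass
class OAuthGrant:
    user: str          # who authorized the grant
    client_id: str     # the third-party app that received access
    client_name: str   # human-readable app name
    scopes: list[str]  # permissions granted (e.g. mail.read, files.read)
    target_app: str    # corporate app the grant reaches into

def looks_like_ai_tool(grant: OAuthGrant) -> bool:
    """Heuristic: does the client name or ID suggest an AI assistant or agent?"""
    text = f"{grant.client_id} {grant.client_name}".lower()
    return any(hint in text for hint in AI_CLIENT_HINTS)

def find_unsanctioned_ai_grants(grants: list[OAuthGrant]) -> list[OAuthGrant]:
    """Return grants that look like AI tools but are not on the sanctioned list."""
    return [
        g for g in grants
        if looks_like_ai_tool(g) and g.client_id not in SANCTIONED_CLIENTS
    ]

if __name__ == "__main__":
    grants = [
        OAuthGrant("alice", "chatgpt-enterprise", "ChatGPT Enterprise",
                   ["files.read"], "Google Drive"),
        OAuthGrant("bob", "acme-gemini-agent", "Acme Gemini Agent",
                   ["mail.read", "mail.send"], "Microsoft 365"),
    ]
    for g in find_unsanctioned_ai_grants(grants):
        print(f"unsanctioned AI grant: {g.user} -> {g.client_name} "
              f"({', '.join(g.scopes)}) into {g.target_app}")
```

In a real deployment the name heuristic would be replaced by the identity provider's own app catalog, but the shape of the check stays the same: enumerate grants, classify clients, compare against the sanctioned list.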
from Security Magazine
1 week ago

Shadow AI: The Invisible Insider Threat

Shadow AI is the unsanctioned use of artificial intelligence tools outside of an organization's governance framework. In the healthcare field, clinicians and staff are increasingly using unvetted AI tools to improve efficiency, from transcription to summarization. Most of this activity is well-intentioned. But when AI adoption outpaces governance, sensitive data can quietly leave organizational control. Blocking AI outright isn't realistic. The more effective approach is to make safe, governed AI easier to use than unsafe alternatives.
Privacy professionals
#ai-governance
from InfoQ
2 months ago
Information security

JFrog Unveils "Shadow AI Detection" to Tackle Hidden AI Risks in Enterprise Software Supply Chains

from IT Pro
3 months ago
Artificial intelligence

Gartner says 40% of enterprises will experience 'shadow AI' breaches by 2030 - educating staff is the key to avoiding disaster

Artificial intelligence
from TechCrunch
1 month ago

Rogue agents and shadow AI: Why VCs are betting big on AI security

When misaligned and lacking contextual understanding, enterprise AI agents can pursue their goals by developing harmful sub-goals such as blackmail.
Information security
from Techzine Global
1 month ago

New (and renewed) cybersecurity trends for 2026

AI-enabled deepfakes will escalate phishing across email, audio, and video, requiring multiple trusted channels, passkeys, and employee education against shadow AI.
Artificial intelligence
from Alleywatch
2 months ago

Ciphero Raises $2.5M to Build AI-Native Security for Agentic Workflows

Ciphero provides an AI Verification Layer that captures, verifies, and enforces dynamic policies across enterprise AI interactions to detect and prevent Shadow AI and breaches.
#ai-security
from IT Pro
6 months ago
Privacy professionals

AI breaches aren't just a scare story any more - they're happening in real life

from Techzine Global
2 months ago

The rise (and fall?) of shadow AI

As software development teams embrace a growing number of automation tools that provide AI-driven (or at least AI-assisted) coding functions in their codebases, a Newtonian equal and opposite reaction is surfacing: governance controls and guardrails to keep AI injections in check as these technologies enter the software supply chain.
Information security
Marketing tech
from MarTech
3 months ago

The AI oversight gap is marketing's next governance test

Ungoverned AI adoption in marketing increases enterprise breach risk and costs; MOps must lead governance to prevent shadow AI exposure.
Artificial intelligence
from Medium
3 months ago

5 Rules for 2026 Enterprise AI

Enterprise AI success requires extensible, agentic architectures and user-empowering adoption strategies to avoid brittleness and leverage evolving third-party capabilities.
Artificial intelligence
from IT Pro
4 months ago

Microsoft says 71% of workers have used unapproved AI tools at work - and it's a trend that enterprises need to crack down on

Widespread unapproved use of consumer AI tools in UK workplaces creates significant security and privacy risks despite measurable productivity gains.
from Psychology Today
5 months ago

Psychological Safety Drives AI Adoption

The people who most need to experiment with AI, those in routine cognitive roles, experience the highest psychological threat. They're being asked to enthusiastically adopt tools that might replace them, triggering what neuroscientists call a "threat state." Research by Amy Edmondson at Harvard Business School reveals that team learning requires psychological safety: the belief that interpersonal risk-taking feels safe. But AI adoption adds an existential twist: the threat isn't just social embarrassment; it's professional survival.
Artificial intelligence
from Fast Company
5 months ago

What is 'BYOAI' and why it's a serious threat to your company

Employees are rapidly adopting unapproved AI agents (BYOAI) to boost productivity, creating governance, data leakage, and power-asymmetry risks that outpace corporate controls.
Artificial intelligence
from Fortune
5 months ago

OpenEvidence is one of the hottest AI startups out there. But the CEO of Corti argues there's plenty of value 'down the AI stack'

OpenEvidence's generative AI is widely adopted by U.S. physicians, drove rapid VC funding and valuation growth, and raises shadow AI, privacy, and compliance concerns for healthcare institutions.
from The Hacker News
5 months ago

Shadow AI Discovery: A Critical Part of Enterprise AI Governance

In most cases, employees are driving adoption from the bottom up, often without oversight, while governance frameworks are still being defined from the top down. Even when they have enterprise-sanctioned tools, employees often eschew them in favor of newer tools that are better placed to improve their productivity. Unless security leaders understand this reality and uncover and govern this activity, they are exposing the business to significant risks.
Artificial intelligence
Information security
from DevOps.com
5 months ago

Staying on Top of Shadow AI

Unsanctioned AI tools are rapidly adopted across organizations, reshaping decisions, exposing data, and creating governance and compliance risks before leadership notices.
Artificial intelligence
from HR Brew
6 months ago

Millennials and Gen Z employees are using unsanctioned AI tools at work

Gen Z and millennial employees widely use AI tools at work, often unsanctioned ones, and are hesitant to disclose their reliance on them.
Artificial intelligence
from Fortune
6 months ago

The 'shadow AI economy' is booming: Workers at 90% of companies say they use chatbots, but most of them are hiding it from IT

The shadow AI economy is thriving with personal chatbot usage, while official AI adoption remains stagnant.
Artificial intelligence
from ComputerWeekly.com
6 months ago

Proliferation of on-premise GenAI platforms is widening security risks

Generative AI adoption has surged 50% among enterprises, while security risks from unsanctioned shadow AI rise significantly.
Organizations must improve their AI application controls and data loss prevention strategies to manage growing security concerns.
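One concrete form such a control can take is a prompt-level data loss prevention check placed in front of a generative AI endpoint. The sketch below is a generic illustration under assumed regex patterns for card numbers, API keys, and SSNs; it is not a description of any particular vendor's DLP product.

```python
# Minimal sketch of a prompt-level DLP check for outbound GenAI requests.
# The patterns below are illustrative; a real deployment would use the
# organization's own classifiers and data-identification rules.

import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key":     re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def guard_prompt(prompt: str) -> str:
    """Raise if the prompt trips a DLP rule; otherwise pass it through."""
    hits = scan_prompt(prompt)
    if hits:
        raise ValueError(f"prompt blocked by DLP policy: {', '.join(hits)}")
    return prompt

if __name__ == "__main__":
    try:
        guard_prompt("Summarize this contract for customer SSN 123-45-6789")
    except ValueError as err:
        print(err)  # prompt blocked by DLP policy: ssn
```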