Agentic AI Amplifies and Creates Insider Risks
Corporate security teams are revising insider risk practices as AI tools spread across workplaces. Recent industry findings show widespread incidents and rising concern about nonhuman identities.
Scope of the threat
One report found that 90% of organizations had an insider threat incident in the previous year. A separate survey showed 94% of security professionals expect AI will increase insider risks.
Analysis from the Ponemon Institute attributed about three-quarters of insider events to nonmalicious causes. Negligence or error accounted for 53% of incidents, compromised or manipulated users for 20%, and malicious intent for the remaining 27%.
Shadow AI and data leakage
Unapproved AI use, often called shadow AI, is now common. A Netskope study reported 47% of employees use personal GenAI accounts at work.
Employees cite familiarity, lack of enterprise tools, productivity gains, and ease of use as reasons. Mimecast’s chief product officer, Rob Juncker, noted many organizations already run unsanctioned AI tools.
Data leakage from AI inputs is a major issue. Harmonic Security found 4.37% of prompts and 22% of uploaded files to generative AI systems contained confidential corporate data. Juncker illustrated the scale with a daily calculation: a 100-user organization sending 20 prompts each generates 2,000 prompts per day, and at that rate roughly 87 of them would contain confidential data.
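That back-of-the-envelope estimate is easy to reproduce from the Harmonic Security figure. A minimal sketch (the 100-user, 20-prompt scenario is the article's example; the function name is illustrative):

```python
# Estimate daily confidential-data exposure from GenAI prompt traffic.
# The 4.37% rate is from the Harmonic Security study cited above; the
# org size and prompt volume match the 100-user example in the article.

def daily_confidential_prompts(users: int, prompts_per_user: int,
                               confidential_rate: float = 0.0437) -> float:
    """Expected number of prompts per day containing confidential data."""
    return users * prompts_per_user * confidential_rate

exposure = daily_confidential_prompts(users=100, prompts_per_user=20)
print(f"{exposure:.0f} confidential prompts per day")  # roughly 87
```

Even modest per-user activity compounds quickly at organizational scale, which is why prompt-level visibility matters.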
Practical consequences
Without governance, AI outputs can be biased or hallucinated. That harms projects and may trigger regulatory breaches. Unsanctioned tools often store prompts and files outside corporate control.
AI-enabled phishing and social engineering
Attackers now use AI to craft highly convincing scams. Language models can remove traditional phishing errors, producing polished emails and messages.
Ira Winkler, field CISO at Aisle, warned that AI improves spear-phishing and deepfake tactics. One known incident involved a fraudulent voice impersonation that led to a $25 million transfer at engineering firm Arup.
Agentic AI as a new class of insider
AI agents act on behalf of users and therefore count as identities to manage. Threat actors target these agents through prompt injection and other manipulations.
Juncker described a malicious email that tried to trick AI tooling into exfiltrating sensitive information. Overprivileged agents have also caused severe data exposure in real deployments.
For example, a marketing automation deployment was granted broad access; the agents then misrouted customer data, scraped competitor sites, and leaked confidential information. Another case involved an employee-created agent that crawled and synced an entire OneDrive. Because its permissions remained active, the agent kept running after the employee left. Security teams detected the activity only after spotting elevated API calls and token use.
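The orphaned-agent case was caught through elevated API call volume, which suggests a simple baseline check on per-identity activity. A minimal sketch, assuming daily call counts per agent identity are already being collected (all names and numbers here are illustrative):

```python
import statistics

def flag_anomalous_agents(daily_calls: dict[str, list[int]],
                          threshold_sigma: float = 3.0) -> list[str]:
    """Flag agent identities whose latest daily API call count exceeds
    their own historical mean by more than threshold_sigma std devs."""
    flagged = []
    for agent, history in daily_calls.items():
        *baseline, latest = history
        if len(baseline) < 2:
            continue  # not enough history to establish a baseline
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev and latest > mean + threshold_sigma * stdev:
            flagged.append(agent)
    return flagged

calls = {
    "crm-sync-agent": [210, 195, 205, 198, 202],      # steady usage
    "onedrive-crawler": [180, 190, 185, 175, 4500],   # spike after owner left
}
print(flag_anomalous_agents(calls))  # ['onedrive-crawler']
```

Real deployments would baseline on longer windows and correlate with token issuance, but the principle is the same: each nonhuman identity gets its own behavioral baseline.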
Mitigation strategies discussed at RSAC 2026
Speakers at RSAC 2026 outlined practical controls. Their recommendations span policy, identity, monitoring, and tooling.
Policy and governance
- Create acceptable use and AI security policies. List approved tools and require employee acknowledgement.
- Use checks and balances for high-risk actions. Manual approvals can stop large fraudulent transfers.
- Deploy prompt discovery and governance tooling to reduce shadow AI proliferation.
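The checks-and-balances recommendation can be made concrete as an approval gate: actions above a risk threshold are held for manual review instead of executing automatically. A hedged sketch of that pattern (the threshold and names are illustrative, not from the article):

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # illustrative: transfers above this need a human

@dataclass
class ApprovalGate:
    """Queue high-risk actions for manual approval instead of executing."""
    pending: list[dict] = field(default_factory=list)

    def request_transfer(self, amount: float, destination: str) -> str:
        if amount > APPROVAL_THRESHOLD:
            self.pending.append({"amount": amount, "dest": destination})
            return "held for manual approval"
        return "executed"

gate = ApprovalGate()
print(gate.request_transfer(500, "vendor-a"))        # executed
print(gate.request_transfer(25_000_000, "unknown"))  # held for manual approval
```

A control like this is exactly what would have interrupted the Arup-style fraudulent transfer: no single deceived employee, or deceived agent, can complete the action alone.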
Education and awareness
Train staff on AI-specific risks. Teach detection of social engineering, deepfakes, and vishing.
A KnowBe4 survey found only 18.5% of employees were aware of a corporate AI policy. Regular awareness programs can close that gap.
Identity and access management
Treat AI agents like human users. Add them to identity and access programs.
Apply least-privilege controls, just-enough-access, and just-in-time elevation. Limit data exposure by default.
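Just-in-time elevation means an agent's credentials carry narrow scopes and a short expiry, issued only when a task requires them. A minimal in-memory sketch of that pattern (scope names and TTL are illustrative, and a production system would use a real secrets manager):

```python
import time
import secrets

class JITTokenIssuer:
    """Issue short-lived, narrowly scoped tokens for AI agents."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self._tokens: dict[str, tuple[set[str], float]] = {}

    def issue(self, scopes: set[str]) -> str:
        token = secrets.token_hex(16)
        self._tokens[token] = (scopes, time.time() + self.ttl)
        return token

    def authorize(self, token: str, scope: str) -> bool:
        """Deny by default: unknown token, expired token, or missing scope."""
        entry = self._tokens.get(token)
        if entry is None:
            return False
        scopes, expires_at = entry
        return time.time() < expires_at and scope in scopes

issuer = JITTokenIssuer(ttl_seconds=300)
token = issuer.issue({"crm:read"})
print(issuer.authorize(token, "crm:read"))    # True
print(issuer.authorize(token, "crm:delete"))  # False
```

The deny-by-default `authorize` check is the important design choice: an agent that outlives its owner, as in the OneDrive case, simply stops working once its tokens expire.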
Visibility and monitoring
Perform shadow AI discovery and monitor prompt and file flows. Detect overprivileged accounts and anomalous agent behavior.
When suspicious activity appears, teams should throttle or suspend the identity while investigating.
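Throttle-then-suspend can be expressed as a small state machine on the identity record: a first flag drops the agent to a rate-limited state, and a repeated flag suspends it pending review. A sketch under those assumptions (states and transitions are illustrative):

```python
from enum import Enum

class AgentState(Enum):
    ACTIVE = "active"
    THROTTLED = "throttled"   # rate-limited while under investigation
    SUSPENDED = "suspended"   # blocked pending manual review

def on_suspicious_activity(state: AgentState) -> AgentState:
    """Escalate one step per flagged incident; never auto-reactivate."""
    if state is AgentState.ACTIVE:
        return AgentState.THROTTLED
    return AgentState.SUSPENDED

state = AgentState.ACTIVE
state = on_suspicious_activity(state)
print(state.name)  # THROTTLED
state = on_suspicious_activity(state)
print(state.name)  # SUSPENDED
```

Keeping reactivation a manual step mirrors the article's point: an investigating human, not the automation, decides when a flagged identity regains access.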
AI-enabled defensive tooling
- Use AI-driven vulnerability management and domain takedown services.
- Deploy spam filters, antimalware, and deepfake detection tools.
- Integrate AI into endpoint detection, DLP, and data security posture management.
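DLP integration at the prompt layer often starts with pattern matching: scan outbound prompts for known sensitive markers before they reach an external model. A deliberately minimal sketch (the patterns are illustrative; real DLP and DSPM products use classifiers, fingerprinting, and context rather than regexes alone):

```python
import re

# Illustrative patterns only; production DLP uses ML classifiers,
# exact-match fingerprints, and document context, not just regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_label": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

print(scan_prompt("Summarize this CONFIDENTIAL roadmap"))  # ['internal_label']
print(scan_prompt("What's the weather in Boston?"))        # []
```

A hit can feed the monitoring pipeline described above: block or redact the prompt, log the event, and adjust the sending identity's risk score.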
Conclusion
Agentic AI amplifies traditional insider risks and introduces new identity challenges. Organizations must expand risk programs to include AI agents.
Filmogaz.com advises firms to combine clear policy, employee training, identity controls, continuous monitoring, and AI-aware security tools. Done correctly, AI can boost productivity and reinforce security instead of undermining it.