The Quiet Insider Threat: When Agentic AI Becomes a Risk
- Daniel Bertrand

- Feb 23
- 3 min read
AI agents are starting to look a lot like staff members. They read our messages, access our files, and take actions in our systems. That’s powerful—yet risky. Small design choices (like letting agents message each other in free text or auto-approve certain tasks) can turn a harmless assist into a quiet insider threat incident. This first post in a multi-part series explains the risks in plain language and what you can do right now.

What is agentic AI?
Agentic AI (also known as AI agents) combines a generative model (e.g., ChatGPT, Claude, or Gemini) with a control mechanism that can take the model's output and act on it. It can pull reports, draft emails, open tickets, even trigger workflows.
There are three broad categories of AI agents:
Simple: retrieves information
Task: takes actions when asked
Advanced: acts autonomously based on triggers
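The model-plus-controller pattern above can be sketched minimally. All names here are illustrative stand-ins, not any specific framework: a real agent replaces `fake_model` with a call to a generative model and `TOOLS` with genuine system capabilities.

```python
# Minimal sketch of an agentic loop: a model proposes an action,
# a controller maps it to a capability and executes it.

def fake_model(prompt):
    # Stand-in for a generative model call. A real model would return
    # a plan derived from the prompt; we hard-code one for illustration.
    return {"tool": "fetch_report", "args": {"name": "weekly_sales"}}

TOOLS = {
    # The agent's capabilities: each entry is something it can *do*.
    "fetch_report": lambda name: f"contents of {name}",
}

def run_agent(task):
    """A 'task'-style agent: ask the model for one action, then run it."""
    action = fake_model(f"Task: {task}. Which tool should I use?")
    tool = TOOLS[action["tool"]]   # controller maps the plan to a capability
    return tool(**action["args"])  # the 'act' step

print(run_agent("pull the weekly sales report"))
```

The risk discussed below lives in that last step: the controller acts on whatever the model proposes, and the model's proposal can be shaped by anything it has read.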
Why AI agents change the insider risk picture
If the agent misreads an instruction—or is tricked by a covertly crafted message—it can do the wrong thing very fast, at scale, and with your organisation’s authority.

Three emerging patterns:
The Confused Helper
An agent copies instructions from an email or web page into its plan and acts. If those instructions were planted by a malicious actor (even inside a PDF or a help article), the agent can be steered to the wrong outcome—no malware needed.
Borrowed Authority
A low-access agent (e.g., triage bot) can ask a high-access agent (e.g., finance bot) to “help” with a task. If the hand-off isn’t gated, the high-access agent may run a sensitive action on behalf of the low-access one. To logs, it looks routine.
In human terms: the intern asks the CFO’s assistant to “just run that payroll export”, and they do.
Whisper Codes
Agents that message each other in free text can develop shortcuts. Simple word choices (synonyms, punctuation) can carry hidden meaning between agents. It’s not sci-fi; it’s just coordination. If you aren’t constraining how agents talk, you may not see what they’re really asking for.
What a quiet insider threat incident can look like:
An external email asks for a “quick check” and uses a particular phrasing.
Your triage agent forwards the note (as free text) to the finance agent.
The finance agent interprets it as “run the export and email the file internally”.
Minutes later, a large report moves across the network. Data Loss Prevention (DLP) pings… after the fact.
No alarms went off earlier because nothing looked obviously wrong: two helpers “collaborated” and did what they’re allowed to do.
Early warning signs:
Big pulls after outside prompts: minutes after reading something external, an agent exports a lot of data.
First-time behaviour: an agent suddenly uses a tool or performs a task it’s never used before.
Agent-to-agent shortcuts: a low-access agent “delegates” to a high-access one, and the very next step is a sensitive action.
After-hours automation spikes: large changes or exports when no one’s around.
DLP hits on routine jobs: sensitive info in files that normally wouldn’t include it.
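The first of these signals — a large export shortly after an agent reads external content — can be expressed as a simple rule over agent event logs. The event schema, window, and threshold below are assumptions for illustration, not a standard:

```python
from datetime import datetime, timedelta

# Illustrative event log: (timestamp, agent, event_type, detail)
events = [
    (datetime(2025, 2, 23, 9, 0), "triage-bot", "read_external", "email"),
    (datetime(2025, 2, 23, 9, 3), "triage-bot", "export", 50_000),  # records
]

def flag_big_pull_after_external(events, window=timedelta(minutes=10),
                                 record_threshold=10_000):
    """Flag any large export that follows an external read by the same
    agent within `window` — the 'big pull after outside prompt' pattern."""
    alerts = []
    for ts, agent, etype, detail in events:
        if etype != "export" or detail < record_threshold:
            continue
        for ts2, agent2, etype2, _ in events:
            if (agent2 == agent and etype2 == "read_external"
                    and timedelta(0) <= ts - ts2 <= window):
                alerts.append((agent, ts))
    return alerts

print(flag_big_pull_after_external(events))
```

Tune the window and threshold to your environment; the point is that the *sequence* (outside content, then a sensitive action) is the signal, not either event alone.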

What leaders can do this week
Create an “agent register”: who owns it, what it’s for, and what data it can touch.
Keep privileges narrow. If an agent only needs to read, don’t give it export or delete permissions. Split risky actions into separate, tightly scoped tools.
Require a human click for risky moves. If a request comes from outside—or passes between agents—make sensitive actions (exports, deletes, new admin invites) require approval.
Ban free-text hand-offs. Make agents talk via a structured form (think: a short form with dropdowns, not an open message). Hidden “whisper codes” don’t survive structure.
Log the story, not just the result. Save: who asked, why they asked, what plan the agent proposed, which tool it used, and how many records it touched. You’ll thank yourself during a review.
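Three of these controls — structured hand-offs, human approval for risky moves, and story-style logging — can be sketched together. The field names, the list of sensitive actions, and the broker function are assumptions for illustration, not any particular product's API:

```python
from dataclasses import dataclass

SENSITIVE_ACTIONS = {"export", "delete", "invite_admin"}  # assumed list

@dataclass
class HandOff:
    """Structured agent-to-agent request: fixed, dropdown-style fields
    only, no free text — so hidden 'whisper codes' have nowhere to live."""
    from_agent: str
    to_agent: str
    action: str        # must name a known action, not prose
    record_scope: int  # how many records the request may touch

audit_log = []  # the 'story': who asked, what was requested, what happened

def broker(request: HandOff, human_approved: bool = False):
    """Gate every hand-off; sensitive actions require a human click."""
    entry = {"who": request.from_agent, "to": request.to_agent,
             "action": request.action, "scope": request.record_scope}
    if request.action in SENSITIVE_ACTIONS and not human_approved:
        entry["outcome"] = "blocked: awaiting approval"
        audit_log.append(entry)
        return False
    entry["outcome"] = "executed"
    audit_log.append(entry)
    return True

# A low-access agent asking a high-access one to export is held for approval:
print(broker(HandOff("triage-bot", "finance-bot", "export", 50_000)))
```

Note that the audit entry records the requester and the requested scope, not just the result — exactly the context a reviewer needs after an incident.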
What you can do:
Executives: treat agents like new hires with badges. Approve them, onboard them, and review them.
Managers: assign owners and add approval steps for anything that moves sensitive data.
Front-line staff: if you see an agent do something surprising, report it—even if it “worked”. That’s a signal your controls need tuning.
IT/Security: broker all agent-to-agent traffic, enforce structured messages, and alert on “outside-to-sensitive” sequences.

This series: what’s coming next
Building an Agent Register: simple templates to track purpose, permissions, and owners.
Stopping “Confused Helper” Incidents: safe ways to handle outside content.
Designing Safe Hand-offs: approvals and structured messages between agents.
Logging That Tells the Story: evidence-ready records for quick investigations.
Red-Team the Robots: safe simulations to test your controls.
Change Management for AI: how to roll out agents responsibly across the business.
