
The Quiet Insider Threat: When Agentic AI Becomes a Risk

AI agents are starting to look a lot like staff members. They read our messages, access our files, and take actions in our systems. That’s powerful—yet risky. Small design choices (like letting agents message each other in free text or auto-approve certain tasks) can turn a harmless assist into a quiet insider threat incident. This first post in a multi-part series explains the risks in plain language and what you can do right now.



What is agentic AI?

Agentic AI (also known as AI agents) combines a generative model (e.g., ChatGPT, Claude, or Gemini) with a control mechanism that can harness that generative capability and act. It can pull reports, draft emails, open tickets, and even trigger workflows.


There are three broad categories of AI agents:

  • Simple: retrieves information

  • Task: takes actions when asked

  • Advanced: acts autonomously based on triggers


Why AI agents change the insider risk picture

If the agent misreads an instruction—or is tricked by a covertly crafted message—it can do the wrong thing very fast, at scale, and with your organisation’s authority.



Three emerging patterns:


  1. The Confused Helper

An agent copies instructions from an email or web page into its plan and acts. If those instructions were planted by a malicious actor (even inside a PDF or a help article), the agent can be steered to the wrong outcome—no malware needed.


  2. Borrowed Authority

A low-access agent (e.g., triage bot) can ask a high-access agent (e.g., finance bot) to “help” with a task. If the hand-off isn’t gated, the high-access agent may run a sensitive action on behalf of the low-access one. To logs, it looks routine.

In human terms: the intern asks the CFO’s assistant to “just run that payroll export”, and they do.
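A hand-off gate for this pattern can be sketched in a few lines. This is a minimal illustration, not a real framework: the agent names, privilege tiers, and action list are all assumptions invented for the example.

```python
from dataclasses import dataclass

# Hypothetical privilege tiers and sensitive actions; names are
# illustrative, not taken from any specific agent platform.
TIER = {"triage_bot": 1, "finance_bot": 3}
SENSITIVE_ACTIONS = {"payroll_export", "bulk_delete", "admin_invite"}

@dataclass(frozen=True)
class HandOff:
    requester: str  # agent asking for help
    executor: str   # agent that would actually run the action
    action: str

def gate_handoff(h: HandOff) -> str:
    """Hold for human approval when a lower-tier agent asks a
    higher-tier agent to run a sensitive action on its behalf."""
    escalation = TIER[h.executor] > TIER[h.requester]
    if escalation and h.action in SENSITIVE_ACTIONS:
        return "needs_human_approval"
    return "allowed"

decision = gate_handoff(HandOff("triage_bot", "finance_bot", "payroll_export"))
```

With this gate in place, the "intern asks the CFO's assistant" hand-off no longer looks routine: it is exactly the escalation the check is built to catch.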


  3. Whisper Codes

Agents that message each other in free text can develop shortcuts. Simple word choices (synonyms, punctuation) can carry hidden meaning between agents. It’s not sci-fi; it’s just coordination. If you aren’t constraining how agents talk, you may not see what they’re really asking for.


What a quiet insider threat incident can look like:

  • An external email asks for a “quick check” and uses a particular phrasing.

  • Your triage agent forwards the note (as free text) to the finance agent.

  • The finance agent interprets it as “run the export and email the file internally”.

  • Minutes later, a large report moves across the network. Data Loss Prevention (DLP) pings… after the fact.


No alarms went off earlier because nothing looked obviously wrong: two helpers “collaborated” and did what they’re allowed to do.


Early warning signs: 

  • Big pulls after outside prompts: minutes after reading something external, an agent exports a lot of data.

  • First-time behaviour: an agent suddenly uses a tool or performs a task it’s never used before.

  • Agent-to-agent shortcuts: a low-access agent “delegates” to a high-access one, and the very next step is a sensitive action.

  • After-hours automation spikes: large changes or exports when no one’s around.

  • DLP hits on routine jobs: sensitive info in files that normally wouldn’t include it.
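The first warning sign above ("big pulls after outside prompts") lends itself to a simple detection rule. The sketch below assumes a generic event stream; the event names, field layout, ten-minute window, and row threshold are all placeholder assumptions you would tune to your own logs.

```python
from datetime import datetime, timedelta

# Illustrative event types and thresholds (assumptions for the sketch).
EXTERNAL_READS = {"read_external_email", "read_web_page"}
BIG_EXPORT_ROWS = 10_000
WINDOW = timedelta(minutes=10)

def flag_big_pulls(events):
    """Alert when an agent exports a lot of data within minutes of
    reading external content. Each event: (timestamp, agent, type, rows)."""
    alerts = []
    last_external = {}  # agent -> time of most recent external read
    for ts, agent, event_type, rows in events:
        if event_type in EXTERNAL_READS:
            last_external[agent] = ts
        elif event_type == "export" and rows >= BIG_EXPORT_ROWS:
            seen = last_external.get(agent)
            if seen is not None and ts - seen <= WINDOW:
                alerts.append((agent, ts))
    return alerts

# The incident walkthrough above, replayed as two events:
events = [
    (datetime(2025, 1, 6, 9, 0), "triage_bot", "read_external_email", 0),
    (datetime(2025, 1, 6, 9, 4), "triage_bot", "export", 50_000),
]
alerts = flag_big_pulls(events)
```

The same loop extends naturally to the other signals: first-time tool use, after-hours spikes, and low-to-high delegation followed by a sensitive action.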



What leaders can do this week

  • Create an “agent register”: who owns it, what it’s for, and what data it can touch.

  • Keep privileges narrow. If an agent only needs to read, don’t give it export or delete permissions. Split risky actions into separate, tightly scoped tools.

  • Require a human click for risky moves. If a request comes from outside—or passes between agents—make sensitive actions (exports, deletes, new admin invites) require approval.

  • Ban free-text hand-offs. Make agents talk via a structured form (think: a short form with dropdowns, not an open message). Hidden “whisper codes” don’t survive structure.

  • Log the story, not just the result. Save: who asked, why they asked, what plan the agent proposed, which tool it used, and how many records it touched. You’ll thank yourself during a review.
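Banning free-text hand-offs, as recommended above, can be as simple as forcing every inter-agent request through a closed form. This sketch uses Python enums as the "dropdowns"; the action names and fields are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

# Every field is a closed choice, so there is no free-text channel
# left for hidden "whisper codes". Values are illustrative assumptions.
class Action(Enum):
    LOOKUP_INVOICE = "lookup_invoice"
    DRAFT_REPLY = "draft_reply"
    EXPORT_REPORT = "export_report"  # sensitive: gated downstream

class Urgency(Enum):
    ROUTINE = "routine"
    URGENT = "urgent"

@dataclass(frozen=True)
class AgentRequest:
    requester: str
    action: Action
    record_id: str
    urgency: Urgency

def parse_request(raw: dict) -> AgentRequest:
    """Accept only requests that fit the form; anything outside the
    known vocabulary raises ValueError and never reaches the agent."""
    return AgentRequest(
        requester=str(raw["requester"]),
        action=Action(raw["action"]),    # unknown action -> ValueError
        record_id=str(raw["record_id"]),
        urgency=Urgency(raw["urgency"]),
    )

req = parse_request({
    "requester": "triage_bot",
    "action": "lookup_invoice",
    "record_id": "INV-42",
    "urgency": "routine",
})
```

A request for "run the export, wink wink" simply has nowhere to live: the receiving agent only ever sees a validated action name and a record ID.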


What you can do:

  • Executives: treat agents like new hires with badges. Approve them, onboard them, and review them.

  • Managers: assign owners and add approval steps for anything that moves sensitive data.

  • Front-line staff: if you see an agent do something surprising, report it—even if it “worked.” That’s a signal your controls need tuning.

  • IT/Security: broker all agent-to-agent traffic, enforce structured messages, and alert on “outside-to-sensitive” sequences.
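For teams brokering agent traffic, the earlier advice to "log the story, not just the result" can be one structured record per action. The field names below are assumptions for the sketch, not a standard schema.

```python
import json
from datetime import datetime, timezone

def story_record(who_asked, why, proposed_plan, tool_used, records_touched):
    """One evidence-ready log line: who asked, why, what the agent
    planned, which tool it used, and how many records it touched."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "who_asked": who_asked,
        "why": why,
        "proposed_plan": proposed_plan,
        "tool_used": tool_used,
        "records_touched": records_touched,
    })

# The incident walkthrough above, as a single reviewable record:
line = story_record(
    who_asked="triage_bot (relaying an external email)",
    why="external 'quick check' request",
    proposed_plan="run export and email file internally",
    tool_used="finance.export_report",
    records_touched=50_000,
)
```

During a review, one line like this answers most of the questions that otherwise take days of log archaeology.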



This series: what’s coming next

  1. Building an Agent Register: simple templates to track purpose, permissions, and owners.

  2. Stopping “Confused Helper” Incidents: safe ways to handle outside content.

  3. Designing Safe Hand-offs: approvals and structured messages between agents.

  4. Logging That Tells the Story: evidence-ready records for quick investigations.

  5. Red-Team the Robots: safe simulations to test your controls.

  6. Change Management for AI: how to roll out agents responsibly across the business.

 
 
 

© 2026 by Canadian Insider Risk Management Centre of Excellence | Centre d'excellence canadien pour la gestion des risques internes
