CISA, NSA and Five Eyes Allies Release First Joint Agentic AI Security Guidance — "Careful Adoption" Lands May 1, 2026
Six Five Eyes cyber agencies — CISA, NSA, ASD ACSC, CCCS, NCSC-NZ, and NCSC-UK — released their first joint guidance on securing agentic AI on May 1, 2026. The 30-page "Careful Adoption" document warns that AI agents are already inside critical infrastructure with far more privilege than organizations can safely monitor.
On May 1, 2026, six Five Eyes cybersecurity agencies — CISA, NSA, Australia's ASD ACSC, the Canadian Centre for Cyber Security, New Zealand's NCSC, and the UK's NCSC — jointly released a 30-page guidance document titled "Careful Adoption of Agentic AI Services", the first coordinated multi-government framework for securing agentic AI systems already deployed inside critical infrastructure.
What Happened
The publication landed on CISA's news feed on Friday and was rapidly amplified across allied cyber agencies the same day. Unlike earlier AI guidance from these agencies — which focused on training data poisoning and model integrity — the new document specifically addresses systems that take autonomous, real-world actions on networks: scheduling jobs, calling APIs, modifying files, and executing transactions on behalf of users. The agencies note that such agents are already deployed across federal, financial, and industrial environments, and that "most organizations are granting them far more access than they can safely monitor or control."
The headline conclusion is deliberately reassuring on one point and alarming on another: agentic AI does not require a new security discipline, but existing controls must be applied with far more rigor than most organizations currently manage. The agencies recommend folding agent deployments into the same zero-trust, defense-in-depth, and least-privilege governance frameworks already used for human users and traditional services — and explicitly rejecting the "give the agent admin to make it work" anti-pattern that has shown up in early enterprise rollouts.
Key Details
- Five risk categories — The guide groups agentic AI risk into privilege (over-permissioned agents), design and configuration flaws, behavioral risks (agents pursuing goals in unintended ways), structural risks (failures cascading across networks of agents), and accountability (logs that are hard to parse and decisions that are hard to audit).
- 30 pages, six agencies — Co-sealed by CISA, NSA, ASD ACSC (Australia), CCCS (Canada), NCSC-NZ, and NCSC-UK. It is the first joint multi-government publication on agentic AI specifically.
- Concrete recommendations — Inventory all agents and the credentials they hold, scope each agent to the smallest set of tools and data needed, monitor agent actions in the same SIEM as human user activity, and require human approval gates for irreversible operations (financial transactions, data deletion, infrastructure changes).
- "Already inside" framing — The document explicitly warns that agents capable of taking real-world actions on networks are already in production at critical-infrastructure operators, including in healthcare, energy, and finance — making this guidance retroactive rather than preventive for many readers.
- Pentagon parallel — The release coincided with the Department of Defense signing seven AI labs (notably excluding Anthropic) to a generative-AI procurement vehicle, sharpening the practical relevance of the guidance for defense contractors.
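The least-privilege recommendation above — scope each agent to the smallest set of tools it needs, deny everything else — can be sketched in a few lines. This is an illustrative pattern, not code from the guidance; the names (`AgentScope`, `authorize`) are assumptions for the example.

```python
# Hypothetical sketch of deny-by-default tool scoping for an agent.
# Each agent is registered with an explicit allowlist of tools; any
# call outside that scope is rejected rather than silently permitted.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentScope:
    """Smallest set of tools an agent is permitted to invoke."""
    agent_id: str
    allowed_tools: frozenset = frozenset()


class ScopeViolation(Exception):
    pass


def authorize(scope: AgentScope, tool: str) -> None:
    """Deny-by-default check, run before every tool invocation."""
    if tool not in scope.allowed_tools:
        raise ScopeViolation(f"{scope.agent_id} is not scoped for {tool!r}")


# Example: a reporting agent scoped to read-only tools only.
reporter = AgentScope("report-bot", frozenset({"read_file", "query_db"}))
authorize(reporter, "read_file")          # within scope, permitted
try:
    authorize(reporter, "delete_record")  # outside scope, rejected
except ScopeViolation:
    pass
```

The point of the inversion — an explicit allowlist rather than a blocklist — is that a newly added tool is inaccessible to every agent until someone deliberately grants it, which is the opposite of the broad-default scopes the guide criticizes.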
What Developers and Users Are Saying
Reaction on the Hacker News thread covering The Register's writeup was largely supportive of the framing but skeptical of enforceability. The top comment praised the document for naming the over-permissioning problem directly, while several replies noted that vendors of agentic platforms — including major LLM providers — currently default to broad access scopes that contradict the guide's least-privilege recommendation. On Reddit's r/cybersecurity, security architects flagged the structural-risk category as the hardest to address with current tooling: there is no widely deployed equivalent of dependency scanners or SIEM correlation rules for "agent calling agent calling agent" chains. The Register's headline framing — "Five Eyes warn agentic AI is too dangerous for rapid rollout" — was disputed by the Cloud Security Alliance research team, which argued the actual document is materially calmer than that read suggests.
What This Means for Developers
If you ship agentic features in production today — whether your own agents, an OpenAI or Anthropic SDK integration, an MCP server, or a no-code automation that hands an LLM API keys — the guide effectively becomes a checklist auditors and procurement teams will start citing this quarter. The most actionable items: (1) eliminate wildcard-scoped API tokens issued to agents, (2) put irreversible actions behind explicit human-in-the-loop confirmation, (3) log every tool call to a tamper-evident store, and (4) build a kill switch that halts all agents with a single action. Expect enterprise RFPs to start asking these questions within weeks, particularly in regulated industries.
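Checklist items (2) through (4) compose naturally into one dispatch path. The sketch below is a minimal illustration under assumed names (`run_tool`, `KILL_SWITCH`, `verify_chain` are not from the guidance): a global halt flag, an approval gate for irreversible operations, and an append-only log where each entry hashes its predecessor so after-the-fact edits are detectable.

```python
# Minimal sketch (not a production implementation) of a kill switch,
# a human-approval gate, and a hash-chained tamper-evident audit log.
import hashlib
import json
import time

KILL_SWITCH = {"halted": False}   # flipping one flag halts every agent
IRREVERSIBLE = {"delete_data", "wire_transfer", "apply_infra_change"}

audit_log = []                    # append-only, hash-chained entries


def log_tool_call(agent_id: str, tool: str, args: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev = audit_log[-1]["hash"] if audit_log else "0" * 64
    entry = {"ts": time.time(), "agent": agent_id,
             "tool": tool, "args": args, "prev": prev}
    entry["hash"] = hashlib.sha256(
        (prev + json.dumps(entry, sort_keys=True)).encode()
    ).hexdigest()
    audit_log.append(entry)


def run_tool(agent_id, tool, args, approve=lambda t: False):
    """Gate every tool call on the kill switch and approval policy."""
    if KILL_SWITCH["halted"]:
        raise RuntimeError("all agents halted by kill switch")
    if tool in IRREVERSIBLE and not approve(tool):
        raise PermissionError(f"{tool} requires human approval")
    log_tool_call(agent_id, tool, args)
    # ... actual tool dispatch would go here ...


def verify_chain() -> bool:
    """Detect tampering by recomputing each entry's hash in order."""
    prev = "0" * 64
    for e in audit_log:
        body = {k: v for k, v in e.items() if k != "hash"}
        expected = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True)).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```

A real deployment would back the log with write-once storage and route `approve` through a ticketing or chat workflow, but even this shape is enough to answer the questions the guide says auditors will ask: who can stop the agents, who signed off on the destructive action, and can the record be trusted.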
What's Next
CISA has indicated follow-on technical guidance is in development, focused on red-teaming agentic systems and standardized agent telemetry schemas. The Cloud Security Alliance is publishing an implementation companion document, and several AI gateway vendors — including Portkey (now being acquired by Palo Alto Networks) and Cloudflare AI Gateway — have already announced features aligned to the guide's recommendations. The full PDF is hosted on cisa.gov under the "Careful Adoption of Agentic AI Services" resource page.
Sources
- CISA news release — primary source from the lead authoring agency
- "Careful Adoption of Agentic AI Services" resource page — the 30-page PDF and supporting materials
- The Register — independent reporting and "too dangerous for rapid rollout" framing
- CyberScoop — DC-focused coverage and policy context
- Cloud Security Alliance research note — practitioner-oriented analysis
- Lyrie Research analysis — governance-focused breakdown