Topic pillar · 48 tracked pieces

Topic · Agentic AI governance

Governance frameworks, oversight patterns, and compliance postures for enterprise agentic-AI deployment.

Where the moat is. Most enterprise agentic-AI failure happens here, not at the model layer.

Governance is the topic this publication writes about most because governance is where most enterprise agentic-AI deployments fail. Not at the model layer — Anthropic, OpenAI, Google, and Microsoft all ship competent base models. Failure happens at the policy layer, the identity layer, the audit layer, the procurement layer. The pieces that survive review on this site are the ones that name a specific governance gap and a specific evidence trail.

The pillar covers six recurring threads:

  • Frameworks the enterprise actually applies — GAUGE (the Enterprise Agentic Governance Benchmark), MTTD-for-Agents (Mean Time To Detect, adapted from SRE), and NIST AI RMF mappings.
  • Vendor governance posture audits — what Salesforce Agentforce, Microsoft Copilot Studio, Google Vertex Agent Builder, and ServiceNow Now Assist actually disclose versus what their marketing claims.
  • Regulatory translation — EU AI Act timelines, NIS2 and DORA implications for agentic systems, and what specific articles, like the §12 audit-evidence requirements, mean operationally.
  • Centralized vs federated governance models — when each works, when each breaks.
  • The Head-of-AI-Governance role specification — why the role exists, what reporting line it requires, what its first 90 days look like.
  • Multi-agent architecture risks — A2A protocol behaviour, cross-agent prompt injection of the EchoLeak class, agent-to-agent credential leakage.
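For readers who want the SRE analogy made concrete, a minimal sketch of the MTTD-for-Agents calculation. The function name and the incident representation here are assumptions for illustration; this only reproduces the generic SRE definition (mean of detection time minus occurrence time), not the house framework's published spec.

```python
from datetime import datetime, timedelta

def mttd(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Mean Time To Detect over (occurred_at, detected_at) pairs.

    Sketch only: assumes the SRE-style definition, i.e. the mean of
    (detected_at - occurred_at) across agent incidents.
    """
    deltas = [detected - occurred for occurred, detected in incidents]
    return sum(deltas, timedelta()) / len(deltas)

# Two hypothetical agent incidents: detected 45 and 135 minutes
# after they occurred.
incidents = [
    (datetime(2026, 1, 3, 9, 0), datetime(2026, 1, 3, 9, 45)),
    (datetime(2026, 1, 9, 14, 0), datetime(2026, 1, 9, 16, 15)),
]
print(mttd(incidents))  # 1:30:00 (mean of 45 and 135 minutes)
```

The point of the metric is the trend line, not any single value: a pillar piece arguing an oversight pattern works should be able to show this number falling after adoption.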

What we won't publish: governance frameworks invented for the byline. Every framework cited here resolves to a primary source — a regulator, a peer-reviewed paper, an analyst firm with disclosed methodology, or one of the two house frameworks (GAUGE / MTTD) whose amendment logs are public.

Pillar last refreshed 2026-05-01


What we're watching next

  • EU AI Act enforcement letters from member-state competent authorities after the 2 August 2026 deadline. The first wave of enforcement actions will surface what the Act actually polices in practice. Several pillar articles assume an aggressive enforcement posture; others assume a permissive one. The early letters will sharpen both.
  • EDPB guidance specific to agentic AI context windows and reasoning traces. The board has published on AI generally but not on agent-specific data flows. Targeted agent guidance is the most likely 2026-2027 development that would shift the data-residency analysis from speculative to settled.
  • ISO/IEC 42001 certification audit reports becoming public. Once enterprises start publishing ISO 42001 certificates as procurement signals, the gap between the standard's text and what audits actually require becomes visible. Pillar pieces on certification effort estimates will need updating.
  • Major frontier-vendor governance disclosures shifting on Responsible Scaling / Preparedness frameworks. Anthropic and OpenAI publish capability evaluations; Google and Microsoft are expected to follow. Disclosure cadence becoming a vendor-comparison axis would change the procurement playbook.

Primary sources we trust for this topic

A curated list of primary research, regulator guidance, and vendor documentation for agentic AI governance. Populated on the quarterly refresh — not a link dump, not a list of competitors.


This pillar page is refreshed quarterly. Last refresh: 19 Apr 2026. Next refresh: 18 Jul 2026.

Vigil · 40 reviewed