The full Agent Mode AI catalog.
Every tool, framework, lead-magnet Excel, data endpoint, and reference page this publication produces that isn't an article. Everything here is free. Items that gate on email are labelled; the rest is open. For articles, use the article archive.
Practitioner tools
Downloadable templates with the analysis behind them. Each tool ships as Markdown + CSV; each carries an RES-NNN claim tracked on a 60–90 day cycle on the Holding-up ledger.
Compliance
The Works Council AI Notification Packet — BetrVG, WOR, CSE, and the EU AI Act overlay →
v1.0 · 60–90 min to adapt to your AI deployment · Email-gated
A jurisdictional packet for notifying employee representatives before deploying AI that touches employees: BetrVG §87(1) point 6 in Germany, WOR Article 27 in the Netherlands, CSE consultation in France, plus the EU AI Act Article 26(7) overlay that adds notification to deployer obligations from 2 August 2026. Sized for early engagement, not late-stage retrofit.
For: HR leads, legal counsel, project managers deploying AI in EU jurisdictions · RES-004 holding · reviewed 2d ago
The AI Data Protection Impact Assessment template — Article 35 + EU AI Act overlay →
v1.0 · 3–6 hours per AI deployment · Email-gated
A pre-deployment DPIA template specific to AI and agentic AI systems. Covers GDPR Article 35 obligations, the Datenschutzkonferenz Muss-Liste triggers, and the EU AI Act Article 26 deployer documentation that arrives 2 August 2026. Sized to be completed in one working session.
For: Data Protection Officers, compliance leads, project managers · RES-002 holding · reviewed 2d ago
Procurement
The AI Vendor Security Questionnaire — 47 questions a CAIQ doesn't ask →
v1.0 · 2–4 hours per vendor · Email-gated
A 47-question due-diligence pack for enterprise AI and agentic AI vendors. Covers the seven failure surfaces a CAIQ v4 or SIG questionnaire does not address: model, training data, inference handling, non-human identity, audit logs, kill-switch, and EU AI Act posture.
For: CISOs, procurement leads, vendor risk managers · RES-001 holding · reviewed 2d ago
The AI MSA Red-Team Checklist — what to look for in vendor enterprise contracts →
v1.0 · 3–5 hours per vendor MSA · Email-gated
A 38-item checklist for reviewing AI vendor Master Service Agreements, Data Processing Addenda, and AI-specific addenda. Covers the seven clause families where 2025–2026 enterprise AI MSAs cluster their failure modes: training-data carve-outs, output ownership, indemnification scope, model-deprecation rights, sub-processor expansion, kill-switch operability, and exit-data portability. Built for procurement and legal counsel reviewing real vendor paper.
For: Procurement leads, legal counsel, vendor risk managers · RES-005 holding · reviewed 2d ago
Operations
The Agent Incident Runbook — detect, contain, roll back, post-mortem →
v1.0 · 60–90 min to baseline against your stack · Email-gated
A four-phase runbook for agent-mode AI incidents: detection within 4 hours, containment within 30 seconds of confirmed harm, rollback procedures for the seven action classes agents typically take, and a structured post-mortem template aligned to MTTD-for-Agents. Built for SRE teams who already run a standard incident response process and need the AI overlay.
For: SRE leads, platform engineering, security operations · RES-003 holding · reviewed 2d ago
Frameworks
Scored methodologies. Public rubrics, public amendment logs, public corrections.
GAUGE — Enterprise Agentic Governance Benchmark →
Methodology page
Six-dimension, 0–100 scored benchmark for enterprise agent-mode deployments. Published annually starting Q4 2026 as the GAUGE Index.
For: CIOs, CTOs, heads of platform
MTTD-for-Agents →
Methodology page
Mean Time To Detect, adapted from SRE to enterprise agentic AI. Four tripwires, a five-phase detection chain, published 4h / 24h targets.
For: CISOs, SRE, platform security
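For orientation, the quantity being adapted is the standard SRE metric: mean elapsed time from incident start to detection. A minimal sketch of that computation, assuming a simple list of timestamp pairs; the record shape is illustrative, not the framework's schema.

```python
# Minimal sketch of the underlying MTTD computation: mean elapsed time from
# incident start to detection. The 4-hour target comes from the methodology
# summary above; the data shape here is an assumption.
from datetime import datetime, timedelta

incidents = [
    {"started": datetime(2026, 1, 5, 9, 0), "detected": datetime(2026, 1, 5, 11, 30)},
    {"started": datetime(2026, 1, 12, 14, 0), "detected": datetime(2026, 1, 12, 15, 15)},
]

latencies = [i["detected"] - i["started"] for i in incidents]
mttd = sum(latencies, timedelta()) / len(latencies)

print(f"MTTD: {mttd}")                                  # mean detection latency
print("within 4h target:", mttd <= timedelta(hours=4))  # published target check
```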
GAUGE amendment log →
Timeline
Every GAUGE rubric change, dated and rationale-logged. Named versions for reproducible citation.
For: Analysts citing GAUGE scores
MTTD amendment log →
Timeline
Every MTTD threshold, phase, and definition change, dated with rationale.
For: Analysts citing MTTD targets
GAUGE 2026 Index preview + nominee register →
Live register
The annual Index publishes in Q4 2026. Nominees, evidence trails, and the scoring process are public before publication.
For: Procurement, analysts, IR teams
Lead-magnet Excels
Working-document spreadsheets for governance groups. Free, email-gated download.
GAUGE self-scoring diagnostic →
Excel, 5 sheets · 30–45 min for a working group · Email-gated
Full GAUGE rubric as a working Excel. Three calibration examples, multi-agent comparison template, 90-day intervention plan.
For: Governance committees
MTTD tripwire starter kit →
Excel + YAML · 45–90 min to baseline · Email-gated
Four tripwires pre-configured as Excel detection chains. Exportable YAML for your SIEM/SOAR.
For: Security engineering
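The kit defines its own export schema; as a sketch only, a SIEM-side loader might sanity-check the exported tripwires like this. Every key name below is a hypothetical stand-in.

```python
# Minimal sketch: load the exported tripwire YAML and sanity-check each entry
# before handing it to a SIEM/SOAR. All keys are hypothetical; the kit's own
# export schema governs. Requires PyYAML (pip install pyyaml).
import yaml

with open("tripwires.yaml", encoding="utf-8") as f:
    config = yaml.safe_load(f)

for tripwire in config.get("tripwires", []):   # assumed top-level key
    name = tripwire.get("name", "<unnamed>")
    threshold = tripwire.get("threshold")      # assumed field
    if threshold is None:
        print(f"{name}: missing threshold, skipping")
        continue
    print(f"{name}: alert when metric exceeds {threshold}")
```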
Enterprise Agentic AI RFP — 60 questions →
Excel, 5 sheets · 2–4 hours to adapt to your RFP · Email-gated
60 RFP questions mapped to the six GAUGE dimensions, with evidence prompts per question. Vendor-agnostic.
For: Procurement, category leads
CFO business case & ROI calculator →
Excel, 5 sheets · 60–90 min per deployment · Email-gated
Audit-survivable ROI model with named baselines, documented methodology, and a quarterly-review template.
For: CFOs, finance business partners
Interactive tools
Browser-native tools. No download. Most stay inside the 5-minute budget.
GAUGE 5-minute diagnostic (web) →
Web form · 5 min · Email-gated
Six dimension sliders. Live 0–100 score and band preview; the full per-dimension breakdown is email-gated.
For: Individuals making a first pass
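How the live score aggregates the six sliders isn't restated here; a minimal sketch assuming an unweighted mean and invented band cut-offs, neither of which is the published rubric.

```python
# Minimal sketch: aggregate six 0-100 dimension scores into one GAUGE-style
# score and band. The unweighted mean and the band cut-offs are assumptions,
# not the published rubric; all slider values are hypothetical.
scores = {"dim_1": 62, "dim_2": 45, "dim_3": 70,
          "dim_4": 55, "dim_5": 38, "dim_6": 81}

overall = sum(scores.values()) / len(scores)

bands = [(80, "leading"), (60, "established"), (40, "developing"), (0, "early")]
band = next(label for cutoff, label in bands if overall >= cutoff)

print(f"score: {overall:.0f}/100, band: {band}")
```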
Playbook quiz — where to start →
Web quiz · 4 min
Five questions. Recommends the framework, tool, and article that fit your role and stage.
For: Anyone landing cold
AI readiness checklist →
Web checklist · 15–25 min
Structured self-assessment across people, process, and platform readiness for enterprise AI adoption.
For: Mid-market + enterprise
GPT-5 ROI calculator →
Web calculator · 10 min
Input your baseline; the formula returns a defensible ROI number with named assumptions.
For: Finance, business cases
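The calculator's own formula and inputs govern; as a sketch of the named-assumptions approach it describes, here is a minimal ROI computation where every figure is an explicit, labelled placeholder.

```python
# Minimal sketch of a named-assumptions ROI model. Every figure below is a
# hypothetical placeholder; the calculator's own formula and inputs govern.
baseline = {
    "hours_saved_per_week": 120,      # assumption: measured pilot baseline
    "loaded_hourly_rate": 85.0,       # assumption: blended internal rate
    "annual_platform_cost": 240_000,  # assumption: licence + inference + ops
}

annual_benefit = baseline["hours_saved_per_week"] * 52 * baseline["loaded_hourly_rate"]
annual_cost = baseline["annual_platform_cost"]
roi = (annual_benefit - annual_cost) / annual_cost

print(f"annual benefit: {annual_benefit:,.0f}")
print(f"ROI: {roi:.1%}")  # defensible only because each input is named above
```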
Implementation roadmap →
Web guide20–40 min to map to your orgPhased plan from pilot to production for enterprise agentic AI deployments.
ForIT leaders sequencing rollouts
Data & machine-readable
Endpoints and feeds. For data pipelines, research agents, aggregators.
facts.json — combined claim ledger →
application/json
Single JSON feed covering both registers: Holding-up (our own claims) + Claim Archive (claims by others). Verdicts, snapshots, review history.
For: Perplexity / ChatGPT research agents
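A minimal consumption sketch for a pipeline: fetch the feed and filter by verdict. The URL and field names below are assumptions for illustration; the feed's published schema governs.

```python
# Minimal sketch: pull the combined claim ledger and list claims whose
# verdict is still "holding". URL and field names are assumptions; check
# the feed's published schema before relying on them.
import json
from urllib.request import urlopen

FEED_URL = "https://example.com/facts.json"  # hypothetical endpoint

with urlopen(FEED_URL) as resp:
    ledger = json.load(resp)

for claim in ledger.get("claims", []):       # assumed top-level key
    if claim.get("verdict") == "holding":
        print(claim.get("id"), claim.get("register"))
```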
Claim Archive — Dataset JSON-LD →
application/ld+json
schema.org/Dataset metadata for the archive: identifier, distribution, measurementTechnique, citation format.
For: Google Dataset Search, OpenAIRE
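For readers new to the format, schema.org/Dataset metadata of the kind listed above looks roughly like this. Shown as a Python dict for consistency with the other sketches; every value is a hypothetical placeholder, while the property names are standard schema.org vocabulary.

```python
# Rough shape of a schema.org/Dataset JSON-LD document using the properties
# named above. All values are hypothetical placeholders.
import json

dataset = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Claim Archive",                        # placeholder
    "identifier": "https://example.com/claim-archive",
    "measurementTechnique": "Editorial review against a public rubric",
    "citation": "Claim Archive, current snapshot",  # placeholder format
    "distribution": [
        {"@type": "DataDownload", "encodingFormat": "application/json",
         "contentUrl": "https://example.com/current.json"},
        {"@type": "DataDownload", "encodingFormat": "text/csv",
         "contentUrl": "https://example.com/current.csv"},
    ],
}

print(json.dumps(dataset, indent=2))
```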
Claim Archive — current.json →
application/json
Every archived claim + review memo as one JSON document. Stable URL, hourly refresh.
For: Data pipelines
Claim Archive — current.csv →
text/csv
CSV sibling of current.json: same schema, one row per record.
For: Spreadsheet + BI workflows
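A minimal loading sketch for spreadsheet-adjacent workflows, standard library only; the column name is an assumption until checked against the published schema.

```python
# Minimal sketch: read current.csv (one row per record, same schema as
# current.json) and count rows per verdict. The column name is an assumption.
import csv
from collections import Counter

with open("current.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

by_verdict = Counter(row.get("verdict", "unknown") for row in rows)
for verdict, n in by_verdict.most_common():
    print(f"{verdict}: {n}")
```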
Archive insights →
Visual dashboard
Verdict distribution, source-type balance, domain-tag concentration, review-cadence split, activity heatmap, status-change timeline.
For: Editorial calibration, analysts
Quarterly data bundles →
Excel + JSON
Frozen CSV + JSON snapshots per quarter. Begins Q2 2026 for archival citation.
For: Archival citation, research
llms.txt →
text/plain
Map of every machine-readable surface this publication exposes, written for LLM crawlers.
For: AI-search crawlers
Reference pages
The instrument panel of the publication — how it's structured, how it behaves.
Holding-up index →
Filterable table
Every claim this publication has made, each tracked on a 30–90 day review cycle with a public verdict.
For: Readers verifying what's held up
Claim Archive →
Filterable table
Public register of claims made by vendors, analysts, academics, publications, regulators, and identified consensus positions.
For: Journalists, analysts, procurement
Submit a claim candidate →
Public form
Spot a vendor announcement, analyst prediction, or regulator statement worth archiving? Submit it here; submissions are triaged against the significance bar.
For: Anyone who reads the industry
Editorial standards →
Policy page
Sourcing rules, review cadence, correction policy, retraction criteria. The operating bar.
For: Publication readers
Claude writes; Peter signs off. Production model, disclosure, what to trust and why.
For: Calibration on what to trust
Anything missing? Every resource above is maintained on a review cadence; if something you expected isn't here, file a correction. To cite the archive or any framework, see the archive citation block or the per-claim generator on each claim page.