Resources

The full Agent Mode AI catalog.

Every tool, framework, lead-magnet Excel, data endpoint, and reference page this publication produces that isn't an article. Everything here is free. Items that gate on email are labelled; the rest is open. For articles, use the article archive.

Start here
New here
Take the 4-min Playbook quiz →
5 questions; routes you to the right starting article for your role.
Skeptical
Read how this is written →
The AI-authorship method, the review cadence, the failure log.
Analyst / data
llms.txt + facts.json →
Machine-readable surfaces. Start at llms.txt for the index.

Practitioner tools

Downloadable templates with the analysis behind them. Each tool ships as Markdown + CSV; each carries an RES-NNN claim tracked on a 60–90 day cycle on the Holding-up ledger.

Compliance

  1. A jurisdictional packet for notifying employee representatives before deploying AI that touches employees: BetrVG §87(1) point 6 in Germany, WOR Article 27 in the Netherlands, CSE consultation in France, plus the EU AI Act Article 26(7) overlay that adds notification to deployer obligations from 2 August 2026. Sized for early engagement, not late-stage retrofit.

    For: HR leads, legal counsel, project managers deploying AI in EU jurisdictions · RES-004 holding · reviewed 2d ago

  2. A pre-deployment DPIA template specific to AI and agentic AI systems. Covers GDPR Article 35 obligations, the Datenschutzkonferenz Muss-Liste triggers, and the EU AI Act Article 26 deployer documentation that arrives 2 August 2026. Sized to be completed in one working session.

    For: Data Protection Officers, compliance leads, project managers · RES-002 holding · reviewed 2d ago

Procurement

  1. A 47-question due-diligence pack for enterprise AI and agentic AI vendors. Covers the seven failure surfaces a CAIQ v4 or SIG questionnaire does not address: model, training data, inference handling, non-human identity, audit logs, kill-switch, and EU AI Act posture.

    For: CISOs, procurement leads, vendor risk managers · RES-001 holding · reviewed 2d ago

  2. A 38-item checklist for reviewing AI vendor Master Service Agreements, Data Processing Addenda, and AI-specific addenda. Covers the seven clause families where 2025–2026 enterprise AI MSAs cluster their failure modes: training-data carve-outs, output ownership, indemnification scope, model-deprecation rights, sub-processor expansion, kill-switch operability, and exit-data portability. Built for procurement and legal counsel reviewing real vendor paper.

    For: Procurement leads, legal counsel, vendor risk managers · RES-005 holding · reviewed 2d ago

Operations

  1. The Agent Incident Runbook — detect, contain, roll back, post-mortem

    v1.0 · 60–90 min to baseline against your stack · Email-gated

    A four-phase runbook for agent-mode AI incidents: detection within 4 hours, containment within 30 seconds of confirmed harm, rollback procedures for the seven action classes agents typically take, and a structured post-mortem template aligned to MTTD-for-Agents. Built for SRE teams who already run a standard incident response process and need the AI overlay.

    For: SRE leads, platform engineering, security operations · RES-003 holding · reviewed 2d ago

Frameworks

Scored methodologies. Public rubrics, public amendment logs, public corrections.

  1. Six-dimension 0–100 scored benchmark for enterprise agent-mode deployments. Published annually starting Q4 2026 as the GAUGE Index.

    For: CIOs, CTOs, heads of platform

  2. MTTD-for-Agents

    Methodology page

    Mean Time To Detect adapted from SRE to enterprise agentic AI. Four tripwires, five-phase detection chain, published 4h / 24h targets.

    For: CISOs, SRE, platform security

  3. Every GAUGE rubric change dated and rationale-logged. Named versions for reproducible citation.

    For: Analysts citing GAUGE scores

  4. Every MTTD threshold, phase, and definition change dated with rationale.

    For: Analysts citing MTTD targets

  5. The annual Index publishes Q4 2026. Nominees, evidence trails, and scoring process are public before publication.

    For: Procurement, analysts, IR teams

Lead-magnet Excels

Working-document spreadsheets for governance groups. Free, email-gated download.

  1. GAUGE self-scoring diagnostic

    Excel, 5 sheets · 30–45 min for a working group · Email-gated

    Full GAUGE rubric as a working Excel. Three calibration examples, multi-agent comparison template, 90-day intervention plan.

    For: Governance committees

  2. MTTD tripwire starter kit

    Excel + YAML · 45–90 min to baseline · Email-gated

    Four tripwires pre-configured as Excel detection chains. Exportable YAML for your SIEM/SOAR.

    For: Security engineering

  3. Enterprise Agentic AI RFP — 60 questions

    Excel, 5 sheets · 2–4 hours to adapt to your RFP · Email-gated

    60 RFP questions mapped to the six GAUGE dimensions with evidence prompts per question. Vendor-agnostic.

    For: Procurement, category leads

  4. CFO business case & ROI calculator

    Excel, 5 sheets · 60–90 min per deployment · Email-gated

    Audit-survivable ROI model with named baselines, documented methodology, and quarterly-review template.

    For: CFOs, finance business partners

Interactive tools

Browser-native tools. No download. Most stay inside the 5-minute budget.

  1. GAUGE 5-minute diagnostic (web)

    Web form · 5 min · Email-gated

    Six dimension sliders. Live 0–100 score + band preview. Email-gated full per-dimension breakdown.

    For: Individual first-pass

  2. Five questions. Recommends the framework, tool, and article that fit your role and stage.

    For: Anyone landing cold

  3. AI readiness checklist

    Web checklist · 15–25 min

    Structured self-assessment across people, process, and platform readiness for enterprise AI adoption.

    For: Mid-market + enterprise

  4. GPT-5 ROI calculator

    Web calculator · 10 min

    Input your baseline; the formula returns a defensible ROI number with named assumptions.

    For: Finance, business cases

  5. Implementation roadmap

    Web guide · 20–40 min to map to your org

    Phased plan from pilot to production for enterprise agentic AI deployments.

    For: IT leaders sequencing rollouts

Data & machine-readable

Endpoints and feeds. For data pipelines, research agents, aggregators.

  1. Single JSON feed covering both registers: Holding-up (our own claims) + Claim Archive (claims by others). Verdicts, snapshots, review history.

    For: Perplexity / ChatGPT research agents

  2. schema.org/Dataset metadata for the archive: identifier, distribution, measurementTechnique, citation format.

    For: Google Dataset Search, OpenAIRE

  3. Every archived claim + review memo as one JSON document. Stable URL, hourly refresh.

    For: Data pipelines

  4. CSV sibling of current.json, same schema, one row per record.

    For: Spreadsheet + BI workflows

  5. Archive insights

    Visual dashboard

    Verdict distribution, source-type balance, domain-tag concentration, review-cadence split, activity heatmap, status-change timeline.

    For: Editorial calibration, analysts

  6. Frozen CSV + JSON snapshots per quarter. Begins Q2 2026 for archival citation.

    For: Archival citation, research

  7. llms.txt

    text/plain

    Map of every machine-readable surface this publication exposes, written for LLM crawlers.

    For: AI-search crawlers
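By convention, an llms.txt file is a Markdown document served at the site root: an H1 title, a blockquote summary, then H2 sections of links. A sketch of what this publication's index might look like, with all paths placeholders except the current.json name mentioned above:

```text
# Agent Mode AI

> Scored frameworks, claim registers, and machine-readable feeds
> for enterprise agent-mode AI.

## Data
- [current.json](/data/current.json): combined Holding-up + Claim Archive feed
- [current.csv](/data/current.csv): CSV sibling, one row per record
```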
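For pipelines consuming the combined feed, a minimal Python sketch of the split-by-register step. The record fields used here (id, register, verdict, reviews) and their values are illustrative assumptions, not the published schema; check current.json for the actual field names before wiring this into a pipeline.

```python
import json

# Hypothetical sample of the combined feed. Field names and values are
# assumptions for illustration only; the real schema lives in current.json.
SAMPLE_FEED = json.loads("""
[
  {"id": "RES-001", "register": "holding-up", "verdict": "holding",
   "reviews": ["2026-01-10", "2026-03-14"]},
  {"id": "CA-017", "register": "claim-archive", "verdict": "refuted",
   "reviews": ["2026-02-02"]}
]
""")

def by_register(records, register):
    """Split the combined feed back into one of its two registers:
    'holding-up' (the publication's own claims) or 'claim-archive'
    (claims by others)."""
    return [r for r in records if r["register"] == register]

# Own claims only, e.g. for auditing the Holding-up ledger.
own_claims = by_register(SAMPLE_FEED, "holding-up")
print([r["id"] for r in own_claims])
```

The CSV sibling carries the same schema one row per record, so the same filter translates directly to a spreadsheet or BI filter on the register column.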

Reference pages

The instrument panel of the publication — how it's structured, how it behaves.

  1. Holding-up index

    Filterable table

    Every claim this publication has made, each tracked on a 30–90 day review cycle with a public verdict.

    For: Readers verifying what's held

  2. Claim Archive

    Filterable table

    Public register of claims made by vendors, analysts, academics, publications, regulators, and identified consensus positions.

    For: Journalists, analysts, procurement

  3. Spot a vendor announcement, analyst prediction, or regulator statement worth archiving? Submit it here; submissions are triaged against the significance bar.

    For: Anyone who reads the industry

  4. Sourcing rules, review cadence, correction policy, retraction criteria. The operating bar.

    For: Publication readers

  5. Claude writes; Peter signs off. Production model, disclosure, what to trust and why.

    For: Calibration on what to trust

Anything missing? Every resource above is maintained on a review cadence; if something you expected isn't here, file a correction. If you want to know how to cite the archive or any framework, see the archive citation block or the per-claim generator on each claim page.

Vigil · 40 reviewed