AWS Bedrock vs Azure OpenAI / Foundry: enterprise agents
In 2026 most enterprise agent deployments run on AWS Bedrock or Azure OpenAI / AI Foundry rather than on direct vendor APIs. The hyperscaler-mediated path simplifies procurement (one MSA covers both the model and the infrastructure), but it constrains model selection and changes the residency, security, and audit posture in ways the direct path does not. This comparison is for IT leaders selecting the agent-platform layer rather than the foundation model itself. The decision often follows the in-house cloud commitment, but platform features, model breadth, and contractual primitives diverge enough that the cloud commitment alone shouldn't decide it.
Who this is for
- Enterprise architects choosing the agent-platform layer for 2026 production deployment
- Procurement leads negotiating committed-spend agreements with AWS or Azure
- CISOs evaluating shared-responsibility model boundaries for hosted AI inference
AWS Bedrock
AWS-managed agent platform offering frontier and open-weight models from Anthropic, Meta, Mistral, Cohere, AI21, Stability, and Amazon. Bedrock Agents for orchestration; Knowledge Bases for RAG; Guardrails for content controls.
Azure OpenAI / AI Foundry
Microsoft-managed agent platform centred on OpenAI models (GPT-4, GPT-5, o-series) plus selected partner models in AI Foundry. Azure AI Agent Service for orchestration; Microsoft Agent Framework as the agent SDK.
Feature matrix
| Dimension | AWS Bedrock | Azure OpenAI / AI Foundry |
|---|---|---|
| Model breadth | Anthropic Claude family, Meta Llama, Mistral, Cohere, AI21, Amazon Nova, Stability AI | OpenAI GPT/o-series (primary); curated partner models (Llama, Mistral, Phi, Cohere, DeepSeek) in AI Foundry |
| EU residency posture | Frankfurt (eu-central-1), Ireland (eu-west-1), Paris, London regions; AWS European Sovereign Cloud rolling out 2025-2026 | Sweden Central, France Central, Switzerland North, Germany West Central; EU Data Boundary covers most processing |
| Agent orchestration primitives | Bedrock Agents (managed); Step Functions for code-first orchestration | Azure AI Agent Service (managed); Microsoft Agent Framework SDK; Logic Apps for workflow |
| RAG / retrieval primitives | Bedrock Knowledge Bases (managed); OpenSearch / Aurora pgvector for self-managed | Azure AI Search (managed); Cosmos DB pgvector + Postgres for self-managed |
| Guardrails / content filters | Bedrock Guardrails (deny topics, PII redaction, prompt-injection filter, denied-words filter) | Azure AI Content Safety (built-in moderation, prompt shields, groundedness detection) |
| Audit substrate | CloudTrail + Bedrock model invocation logs; CloudWatch metrics + alarms | Azure Monitor + Application Insights; Microsoft Purview audit integration |
| Compliance attestations (2026) | SOC 1/2/3, ISO 27001/27017/27018/27701/42001, HIPAA, PCI DSS, FedRAMP High, IRAP, C5 | SOC 1/2/3, ISO 27001/27017/27018/27701/42001, HIPAA, PCI DSS, FedRAMP High, IRAP, C5 |
| Marketplace / commitment-spend integration | AWS Marketplace; Bedrock charges count against EDP / PPA commitments | Azure Marketplace; Azure OpenAI charges count against MACC / EA commitments |
| Hosted Anthropic Claude availability | Yes — Sonnet, Opus, Haiku (Anthropic-on-Bedrock is the most common enterprise distribution path for Claude) | No — Anthropic models not available on Azure OpenAI / Foundry |
| Hosted OpenAI availability | No — OpenAI models not available on Bedrock | Yes — GPT-4, GPT-4o, GPT-5, o-series (Azure OpenAI is the enterprise distribution channel) |
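The availability split in the last two rows is visible at the SDK level. A minimal sketch of the request shapes each platform expects, assuming boto3's `bedrock-runtime` Converse API on the AWS side and the `openai` package's `AzureOpenAI` client on the Azure side; it builds the payloads only (no network calls, no credentials), and the model ID and deployment name are illustrative placeholders:

```python
PROMPT = "Summarise the Q3 incident report."

def bedrock_converse_request(model_id: str, prompt: str) -> dict:
    """Kwargs for boto3's bedrock-runtime client.converse(...)."""
    return {
        "modelId": model_id,  # e.g. an Anthropic Claude model ID on Bedrock
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

def azure_openai_request(deployment: str, prompt: str) -> dict:
    """Kwargs for the AzureOpenAI client's chat.completions.create(...)."""
    return {
        "model": deployment,  # the Azure *deployment* name, not the raw model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
        "temperature": 0.2,
    }

# Illustrative identifiers only — substitute whatever your account exposes.
bedrock = bedrock_converse_request("anthropic.claude-3-5-sonnet-20240620-v1:0", PROMPT)
azure = azure_openai_request("gpt-4o-prod", PROMPT)
```

The structural difference matters for portability tooling: Bedrock's Converse API wraps content in a list of typed blocks, while Azure OpenAI follows the OpenAI chat-completions shape, so a gateway layer has to translate between the two rather than pass messages through.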
What our claim ledger says about each
- AM-061 · Holding · last review 28 Apr 2026 · next +51d · Production agentic-AI costs at scale routinely run multiples of POC projections, and a layered optimisation programme covering model tiering, vendor prompt caching, batch APIs, context-window discipline, and observability budgeting closes most of the gap.
- AM-108 · Holding · last review 29 Apr 2026 · next +52d · Agentic-AI data-residency requirements are not cleanly inherited from existing GDPR cross-border transfer practice. Agent context windows, retrieval indexes, and reasoning traces all create new categories of personal-data processing that have to be located, documented, and (for high-risk Annex III deployments) data-resident inside the EEA before EU AI Act Article 16 enforcement opens on 2 August 2026. The deployment topology has to shift to single-region EEA-resident for high-risk systems; hub-and-spoke remains defensible for general-purpose deployments under documented GDPR Chapter V transfer mechanisms.
- AM-136 · Holding · last review 5 May 2026 · next +28d · Across the 24-month window May 2024 to April 2026, every major foundation-model provider (Anthropic, OpenAI, Google, AWS Bedrock, Azure OpenAI) experienced at least one multi-hour outage that exceeded the SLA-credit threshold defined in their published terms. The procurement-defensible posture is multi-provider routing with documented failover and hard-dollar incident liability above the standard SLA-credit cap. Three architectural patterns dominate 2026 production deployments: gateway abstraction (LiteLLM, OpenRouter, Portkey), provider-side regional failover (partial mitigation), and explicit multi-provider provisioning at the application layer.
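The third pattern AM-136 names — explicit multi-provider provisioning at the application layer — reduces to a priority-ordered failover loop. A minimal sketch; the providers here are plain callables so the logic is self-contained, whereas in production each would wrap a real Bedrock or Azure OpenAI client:

```python
class AllProvidersFailed(Exception):
    pass

def complete_with_failover(prompt, providers):
    """Try each (name, callable) provider in priority order; return the
    first success. Records which providers failed along the way so the
    incident trail supports the SLA-credit / liability conversation."""
    failures = []
    for name, call in providers:
        try:
            return {"provider": name, "text": call(prompt), "failed_over": failures}
        except Exception as exc:
            failures.append((name, repr(exc)))
    raise AllProvidersFailed(f"all providers failed: {failures}")

# Demo with stubs: the primary is down, the secondary answers.
def bedrock_stub(prompt):
    raise TimeoutError("bedrock regional outage")

def azure_stub(prompt):
    return f"ok: {prompt}"

result = complete_with_failover(
    "ping", [("bedrock", bedrock_stub), ("azure-openai", azure_stub)]
)
```

Gateway products like LiteLLM or Portkey package essentially this loop plus retries, budgets, and the cross-provider message translation; rolling your own buys control over the failure record at the cost of maintaining the translation layer.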
When to choose which
Pick AWS Bedrock when the in-house stack is AWS-resident, when the deployment needs Anthropic Claude (Bedrock is the enterprise distribution channel for Claude), or when model breadth across vendors matters. Stronger fit when the architecture wants vendor-substitution flexibility within a single platform — Bedrock's model-vendor diversity is materially wider than Foundry's.
Pick Azure OpenAI / Foundry when the in-house stack is Azure-resident, when the deployment needs OpenAI's frontier models (GPT-5, o-series), or when the workload integrates with Microsoft Graph / Entra ID natively. Stronger fit when foundation-model commitment to OpenAI is already in place and the vendor lock-in is a deliberate procurement decision.
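The fork above reduces to a small decision procedure. A toy sketch — the inputs and the tie-break ordering are our reading of the two paragraphs, not anything either vendor publishes:

```python
def pick_platform(cloud, needs_claude=False, needs_openai_frontier=False,
                  wants_model_breadth=False, needs_graph_entra=False):
    """Toy sketch of the Bedrock-vs-Foundry fork: hard model requirements
    dominate; otherwise follow the in-house cloud commitment."""
    if needs_claude and needs_openai_frontier:
        # Neither platform hosts both vendors' models (see the matrix above).
        return "split estate: Claude on Bedrock, OpenAI on Azure"
    if needs_claude:
        return "AWS Bedrock"
    if needs_openai_frontier or needs_graph_entra:
        return "Azure OpenAI / Foundry"
    if wants_model_breadth:
        return "AWS Bedrock"
    return "AWS Bedrock" if cloud == "aws" else "Azure OpenAI / Foundry"
```

The point the sketch makes explicit: a hard requirement for Claude or for OpenAI's frontier models overrides the cloud commitment, and requiring both forces a split estate.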
Open questions we're tracking
- AM-108 · next review +52d · Agentic-AI data-residency requirements are not cleanly inherited from existing GDPR cross-border transfer practice. Agent context windows, retrieval indexes, and reasoning traces all create new categories of personal-data processing that have to be located, documented, and (for high-risk Annex III deployments) data-resident inside the EEA before EU AI Act Article 16 enforcement opens on 2 August 2026. The deployment topology has to shift to single-region EEA-resident for high-risk systems; hub-and-spoke remains defensible for general-purpose deployments under documented GDPR Chapter V transfer mechanisms.
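The topology rule AM-108 describes is mechanically checkable at deployment time. A sketch, with the EEA region sets drawn from (a subset of) the residency row in the matrix — illustrative, not exhaustive, and note that London and Switzerland North are outside the EEA despite appearing in the vendors' European line-ups:

```python
# High-risk (Annex III) systems: single-region, EEA-resident.
# General-purpose: EEA is clean; anything else needs a documented
# GDPR Chapter V transfer mechanism.
EEA_REGIONS = {
    "aws": {"eu-central-1", "eu-west-1", "eu-west-3"},  # Frankfurt, Ireland, Paris
    "azure": {"swedencentral", "francecentral", "germanywestcentral"},
}

def residency_posture(platform, regions, high_risk):
    regions = set(regions)
    all_eea = regions <= EEA_REGIONS[platform]
    if high_risk:
        return "compliant" if all_eea and len(regions) == 1 else "non-compliant"
    return "compliant" if all_eea else "needs documented Chapter V transfer mechanism"
```

A hub-and-spoke layout (say, Sweden Central plus a US hub) therefore passes for a general-purpose deployment only with the transfer mechanism documented, and never for a high-risk one.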
Related decisions
Adjacent procurement decisions in the same cluster. Use the buyer's-guide structure: pick the deployment shape first, then walk the comparison matrix.
- Anthropic Claude vs OpenAI GPT for enterprise agents
Feature matrix, pricing tiers, and tracked claims for the two most-cited foundation models in 2026 enterprise agent-mode procurement decisions.
- Anthropic Claude vs Google Gemini for enterprise agents
Feature matrix and tracked claims comparing Anthropic Claude vs Google Gemini for 2026 enterprise agent-mode procurement. Pricing dated; sources linked.
- MCP vs A2A: agent-protocol comparison
Model Context Protocol (MCP) vs Agent-to-Agent (A2A) protocol. What each standardises, who maintains them, how they fit in 2026 agent stacks.
Articles citing each
- Anthropic vs OpenAI vs Google vs Microsoft for enterprise agents in 2026
- Data residency for agentic AI: what CIOs must ship before EU AI Act enforcement on 2 August 2026
- Agentic-AI vendor contracts: the six gotchas in 2026 enterprise MSAs that procurement teams routinely miss
- Production agentic AI cost: the layered optimisation playbook for enterprise CFOs
- Foundation-model uptime in 2026: the 24-month outage record across Anthropic, OpenAI, Google, AWS Bedrock, and Azure OpenAI