AWS stumbled this morning. Here’s why multi‑cloud keeps you moving.
October 20, 2025 — Early today, Amazon Web Services experienced a major incident centered in its US‑EAST‑1 (N. Virginia) region. AWS reports the event began around 12:11 a.m. PT and traced back to DNS resolution issues affecting DynamoDB endpoints, with mitigation within a couple of hours and recovery continuing afterward.
As the outage rippled, popular services like Snapchat, Venmo, Ring, Roblox, Fortnite, and even some Amazon properties saw disruptions before recovering.
If your apps or data are anchored to a single cloud, a morning like this can turn into a help‑desk fire drill. A multi‑cloud or cloud‑smart approach helps you ride through these moments with minimal end‑user impact.
What happened (and why it matters)
- Single‑region fragility: US‑EAST‑1 is massive—and when it sneezes, the internet catches a cold. Incidents here have a history of wide blast radius.
- Shared dependencies: DNS issues affecting core service endpoints (like DynamoDB’s) can cascade into workloads that never directly “touch” that service (see the short sketch below).
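To make that cascade concrete, here is a minimal, hypothetical sketch (not from AWS’s incident report) of how a DNS resolution failure for a regional DynamoDB endpoint surfaces in application code, and how a client might fall back to another region’s endpoint. The endpoint hostnames follow AWS’s real naming pattern; the fallback order is an illustrative assumption.

```python
import socket

# Regional DynamoDB endpoints to try, in order of preference.
# us-east-1 first; us-west-2 is an illustrative fallback choice.
ENDPOINTS = [
    "dynamodb.us-east-1.amazonaws.com",
    "dynamodb.us-west-2.amazonaws.com",
]

def pick_reachable_endpoint(endpoints=ENDPOINTS, port=443):
    """Return the first endpoint whose hostname still resolves.

    A DNS outage like this morning's shows up here as socket.gaierror
    before any DynamoDB API call is even made, which is why workloads
    that only depend on DynamoDB indirectly still feel it.
    """
    for host in endpoints:
        try:
            socket.getaddrinfo(host, port)
            return host
        except socket.gaierror:
            continue  # name resolution failed; try the next region
    raise RuntimeError("No DynamoDB endpoint is resolvable right now")

if __name__ == "__main__":
    print("Using endpoint:", pick_reachable_endpoint())
```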
Multi‑cloud: practical resilience, not buzzwords
For mid‑sized orgs, schools, and local government, multi‑cloud doesn’t have to mean “every app in every cloud.” It means thoughtful redundancy where it counts:
- Multi‑region or multi‑provider failover for critical apps
Run active/standby across AWS and Azure (or another provider), or at least across two AWS regions with automated failover. Start with citizen‑facing portals, SIS/LMS access, emergency comms, and payment gateways.
- Portable platforms
Use Kubernetes and containers, keep state externalized, and standardize infrastructure with Terraform/Ansible so you can redeploy fast when a region (or a provider) wobbles. (Today’s DNS hiccup is exactly the kind of scenario this protects against.)
- Resilient data layers
Replicate data asynchronously across clouds/regions; choose databases with cross‑region failover and test RPO/RTO quarterly. If you rely on a managed database tied to one region, design an escape hatch. (See the first sketch after this list.)
- Traffic and identity that float
Use global traffic managers/DNS to shift users automatically; keep identity (MFA/SSO) highly available and not hard‑wired to a single provider’s control plane. (See the second sketch after this list.)
- Run the playbook
Document health checks, automated cutover, and comms templates. Then practice: tabletops and live failovers. Many services today recovered within hours, but only teams with rehearsed playbooks avoided user‑visible downtime.
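Two short sketches to ground the list above. First, for resilient data layers: a minimal check, assuming a DynamoDB global table, that confirms the table actually has an ACTIVE replica outside its home region. The table name and regions are illustrative placeholders; the same idea applies to read replicas or log shipping on other databases, run on a schedule alongside your RPO/RTO tests.

```python
import boto3

# Illustrative placeholders -- substitute your own table and home region.
TABLE_NAME = "citizen-portal-sessions"
HOME_REGION = "us-east-1"

def replica_report(table_name: str = TABLE_NAME, home_region: str = HOME_REGION):
    """Return (region, status) pairs for each replica of a DynamoDB global table."""
    table = boto3.client("dynamodb", region_name=home_region).describe_table(
        TableName=table_name
    )["Table"]
    # Global tables (2019.11.21 version) list their replicas on the table description.
    return [
        (r["RegionName"], r.get("ReplicaStatus", "UNKNOWN"))
        for r in table.get("Replicas", [])
    ]

if __name__ == "__main__":
    report = replica_report()
    for region, status in report:
        print(f"{region}: {status}")
    if not any(s == "ACTIVE" and r != HOME_REGION for r, s in report):
        print(f"WARNING: no ACTIVE replica outside {HOME_REGION} -- your escape hatch is missing")
```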
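Second, for traffic that floats and the cutover step of the playbook: a minimal sketch of shifting a public DNS record to a standby target when the primary fails its health check. It assumes Route 53 as the DNS layer; the hosted zone ID, record name, health endpoint, and primary/standby hostnames are hypothetical. In practice you would more likely lean on managed health checks and failover routing policies (Route 53, Azure Traffic Manager, or Cloudflare), but the moving parts are the same, and scripting them is what keeps a 1 a.m. cutover from being improvised.

```python
import boto3
import requests

# Hypothetical values -- substitute your own zone, record, and targets.
HOSTED_ZONE_ID = "Z0000000EXAMPLE"
RECORD_NAME = "portal.example.gov."
PRIMARY = "portal-us-east-1.example.gov"
STANDBY = "portal-us-west-2.example.gov"

def healthy(host: str) -> bool:
    """Return True if the target answers its health endpoint quickly."""
    try:
        return requests.get(f"https://{host}/healthz", timeout=3).ok
    except requests.RequestException:
        return False

def point_record_at(target: str) -> None:
    """UPSERT the public CNAME so users land on the chosen target."""
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": f"Automated cutover to {target}",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "CNAME",
                    "TTL": 60,  # keep TTLs short so cutovers propagate quickly
                    "ResourceRecords": [{"Value": target}],
                },
            }],
        },
    )

if __name__ == "__main__":
    # Prefer the primary; fail over only when it is unhealthy and the standby is not.
    if healthy(PRIMARY):
        point_record_at(PRIMARY)
    elif healthy(STANDBY):
        point_record_at(STANDBY)
    else:
        raise SystemExit("Both targets unhealthy -- page a human")
```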
The bottom line
Cloud concentration risk is real. Outages will happen—what matters is whether your constituents, students, and staff feel it. A pragmatic multi‑cloud stance limits the blast radius and keeps your mission‑critical services online when one provider has a bad day.