All incidents in one place
This page holds the full atlas record. The homepage is selective. The timeline is for scanning by date. The archive is where every incident stays available in detail.
If you arrive from the timeline, the matching record opens automatically.
When bad maps become public reality
The service may still exist. The route to it is what disappears, leaks, or gets replaced by something false.
AS7007 Route Leak
High Severity Open record
A misconfigured router at MAI Network Services originated a massive set of more-specific routes and polluted routing tables across the internet.
The event became an early proof that one broken routing announcement could destabilize far more than the network that sent it.
Routing failures · BGP leak
- Internet Outage Atlas · Full Merged Research Report
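Why a flood of more-specific announcements is so disruptive comes down to longest-prefix matching: routers forward along the most specific route they know, regardless of who originated it. A minimal Python sketch of that selection rule, using illustrative prefixes and AS numbers rather than anything from the 1997 routing tables:

    import ipaddress

    # Toy routing table: (prefix, origin AS). All values are illustrative.
    routes = [
        (ipaddress.ip_network("203.0.113.0/24"), 64500),   # legitimate aggregate
        (ipaddress.ip_network("203.0.113.0/25"), 7007),    # leaked more-specific
        (ipaddress.ip_network("203.0.113.128/25"), 7007),  # leaked more-specific
    ]

    def best_route(dst: str):
        """Longest-prefix match: the most specific covering prefix wins."""
        addr = ipaddress.ip_address(dst)
        matches = [(net, asn) for net, asn in routes if addr in net]
        return max(matches, key=lambda m: m[0].prefixlen, default=None)

    # Every address in the legitimate /24 now follows a leaked /25 instead.
    print(best_route("203.0.113.10"))  # (IPv4Network('203.0.113.0/25'), 7007)

Scale that preference across tens of thousands of leaked prefixes and the legitimate routes simply stop being chosen.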
Pakistan Telecom Blacks Out YouTube for the World
Critical Open record
Pakistan tried to block YouTube at home.
The route escaped, spread, and briefly blacked out YouTube for everyone else. A local censorship action became a global routing fact.
Routing failures · BGP route hijack
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
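The hijack spread because the upstream accepted and re-announced a prefix its customer was never registered to originate. A minimal sketch of the kind of customer prefix filter that stops this at the boundary, with an invented allow-list rather than real registry data:

    import ipaddress

    # Hypothetical allow-list: address space the customer is registered to announce.
    customer_allowed = [ipaddress.ip_network("198.51.100.0/22")]

    def accept_from_customer(prefix: str) -> bool:
        """Accept an announcement only if it sits inside the customer's registered space."""
        net = ipaddress.ip_network(prefix)
        return any(net.subnet_of(allowed) for allowed in customer_allowed)

    print(accept_from_customer("198.51.100.0/24"))  # True: the customer's own space
    print(accept_from_customer("203.0.113.0/24"))   # False: someone else's prefix, dropped

The record above is what happens when that check is missing at the boundary where it matters.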
Indosat BGP Hijack
High Severity Open record
An Indonesian provider briefly announced routes for large portions of the internet, diverting traffic that had nothing to do with it.
The incident showed how an operational mistake in one network can distort global reachability in minutes.
Routing failures · Route hijack
- Internet Outage Atlas · Full Merged Research Report
AxcelX and AWS Route Leak
High Severity Open record
Routes connected to AWS address space leaked outward and disrupted access to major sites and services.
The fault was not in application code. It was in the routing layer that decides where traffic goes at all.
Routing failures · Route leak
- Internet Outage Atlas · Full Merged Research Report
MainOne and Google Route Leak
High Severity Open record
A route leak involving MainOne and China Telecom redirected traffic for Google and other large services through unexpected paths.
It was a sharp demonstration of how brittle inter-network trust still is.
Routing failures · Route leak
- Internet Outage Atlas · Full Merged Research Report
Verizon Route Leak Disrupts 15 Percent of Global Internet Traffic
Critical Open record
A small provider leaked routes it did not own.
Verizon accepted them and propagated them. Cloudflare, Facebook, Google, and much more of the network got pulled into the mistake.
Routing failures · BGP route leak
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
CenturyLink / Level 3 Backbone Outage
High Severity Open record
A bad Flowspec rule was supposed to block abuse.
Instead it blocked BGP itself across one of the internet's largest backbones. Routers crashed, restarted, received the same bad rule, and crashed again. The network started rejecting the information needed to fix the network.
Routing failures · BGP / ISP
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
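The self-defeating loop is worth spelling out: Flowspec filter rules are distributed over BGP, and BGP itself is just TCP traffic to port 179, so a rule written too broadly can drop the very sessions that would carry its correction. A toy model in Python, with an invented rule format rather than real Flowspec encoding:

    from dataclasses import dataclass
    from typing import Optional, Set

    @dataclass
    class Rule:
        protocol: str                  # "tcp", "udp", or "any"
        dst_ports: Optional[Set[int]]  # None means "match any destination port"
        action: str                    # "drop" or "accept"

    # Hypothetical overly broad rule: drop all TCP, no port restriction.
    bad_rule = Rule(protocol="tcp", dst_ports=None, action="drop")

    def matches(rule: Rule, protocol: str, dst_port: int) -> bool:
        if rule.protocol not in ("any", protocol):
            return False
        return rule.dst_ports is None or dst_port in rule.dst_ports

    # BGP sessions are TCP traffic to port 179, so the rule filters them too...
    print(matches(bad_rule, "tcp", 179))  # True: the control channel is dropped
    # ...and a router that restarts re-learns the same rule and cuts itself off again.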
Facebook Global Blackout
Critical Open record
One backbone maintenance change withdrew Meta's BGP routes.
Facebook, Instagram, WhatsApp, and Messenger vanished at the same time. The harder part came next. The same failure also cut engineers off from some of the internal systems needed to fix it, which turned an outage into a recovery trap.
Routing failures · BGP / Platform
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
Telekom Malaysia Route Leak
High Severity Open record
A modern route leak involving Telekom Malaysia repeated the old lesson in contemporary form: routing mistakes still escape local intent and quickly become international reachability problems.
Routing failures · Route leak
- Internet Outage Atlas · Full Merged Research Report
Cloudflare BYOIP BGP Outage
Critical Open record
A large-scale Cloudflare routing incident tied to BYOIP handling showed how reachability can still disappear at internet scale when address announcement logic goes wrong at a major provider edge.
Routing failures · BGP / address announcement failure
- Internet Outage Atlas · Full Merged Research Report
When the service still exists but users cannot reach or trust it
Some failures do not knock servers offline. They break naming, authentication, or certificate trust, which is enough to make healthy systems feel dead.
Dyn DNS DDoS Attack
Critical Open record
Mirai used hacked cameras, routers, DVRs, and other junk devices to hammer Dyn's DNS infrastructure.
Twitter, Spotify, GitHub, Reddit, and much of the East Coast web started failing together. The point was not only the attack. It was how much of the visible web depended on one naming layer.
Naming, identity, and trust · DNS / DDoS
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
Azure DNS Outage
High Severity Open record
A separate Azure DNS incident in 2020 reinforced that naming failures recur even inside large cloud platforms, and that those failures can matter more than the health of the underlying services they point to.
Naming, identity, and trust · Cloud DNS
- Internet Outage Atlas · Full Merged Research Report
Sectigo AddTrust Root Expiration
High Severity Open record
The expiration of the AddTrust root certificate triggered trust failures on legacy systems and broke connections that still depended on that chain.
A quiet certificate deadline turned into a visible service problem for older clients.
Naming, identity, and trust · PKI expiration
- Internet Outage Atlas · Full Merged Research Report
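Deadlines like this are visible long in advance to anyone who looks. A minimal monitoring sketch using only Python's standard library; it checks the leaf certificate a server presents, whereas the AddTrust failure lived in the chain above the leaf, so treat it as a starting point rather than full coverage. The hostname is a placeholder:

    import socket
    import ssl
    import time

    def days_until_cert_expiry(host: str, port: int = 443) -> float:
        """Connect over TLS and return days until the presented leaf certificate expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
        return (expires_at - time.time()) / 86400

    # Placeholder hostname; point this at your own endpoints and alert well before zero.
    print(f"example.com: {days_until_cert_expiry('example.com'):.1f} days remaining")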
Google Auth Outage Renders All Google Services Inaccessible
Critical Open record
A quota enforcement mistake during an auth migration knocked out Gmail, YouTube, Drive, and everything else tied to the same gate.
The apps were not the first problem. Access was.
Naming, identity, and trust · Authentication failure
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
Google Voice Expired TLS Certificate
High Severity Open record
An expired TLS certificate broke access to Google Voice and showed, again, that trust-chain maintenance is part of availability engineering rather than a side concern reserved for security teams.
Naming, identity, and trust · Certificate expiration
- Internet Outage Atlas · Full Merged Research Report
Azure DNS Outage
High Severity Open record
A DNS-layer problem inside Azure interrupted name resolution and widened into a broader cloud-service disruption.
Parts of the visibility and status path became unreliable too, which made diagnosis harder for customers already in the dark.
Naming, identity, and trust · Cloud DNS
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
Azure AD Key-Rotation Outage
Critical Open record
A signing-key problem became a long authentication outage across Microsoft 365, Teams, Exchange Online, and related services.
Systems were still there, but access to them was blocked by the gate in front.
Naming, identity, and trust · Identity failure
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
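The shape of this failure is generic to any signed-token system: verifiers look up the signing key by its identifier, and when that lookup breaks, every token is rejected at once, however healthy the services behind the gate are. A minimal sketch with invented key names, using HMAC purely to keep the example self-contained rather than Azure AD's actual signing scheme:

    import hashlib
    import hmac

    # Hypothetical verifier-side key set, indexed by key id ("kid").
    key_set = {"2021-03": b"old-shared-secret"}

    def verify(token_body: bytes, kid: str, signature: bytes) -> bool:
        key = key_set.get(kid)
        if key is None:
            return False  # unknown key id: every token signed with it fails
        expected = hmac.new(key, token_body, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    # Tokens minted under a rotated-in key the verifiers never received:
    new_key = b"new-shared-secret"
    body = b'{"sub": "user@example.com"}'
    sig = hmac.new(new_key, body, hashlib.sha256).digest()

    print(verify(body, "2021-09", sig))  # False: the services are up, the gate is closed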
Let's Encrypt DST Root CA X3 Expiration
High Severity Open record
The expiration of DST Root CA X3 caused compatibility failures on older Android devices and legacy clients that still anchored trust there.
Modern infrastructure stayed up while part of the user base lost the ability to connect cleanly.
Naming, identity, and trust · PKI expiration
- Internet Outage Atlas · Full Merged Research Report
UAF DHCP Server Outage
Medium Severity Open record
A DHCP outage at the University of Alaska Fairbanks remains useful because it shows a local-control failure that stayed local, a clean contrast with the much wider shared-layer incidents elsewhere in the atlas.
Naming, identity, and trust · Local network control
- Internet Outage Atlas · Full Merged Research Report
AWS DynamoDB DNS Failure
Critical Open record
DNS resolution for DynamoDB failed in us-east-1.
Disney+, Delta, Reddit, Robinhood, Roblox, and many other services went dark. The data was still there. The names stopped resolving. A naming failure overruled the resilience of the underlying system.
Naming, identity, and trust · Cloud / DNS
- Internet Outage Atlas · Full Merged Research Report
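One common client-side mitigation for this shape of failure, noted here as a general pattern rather than anything from the AWS postmortem, is to keep a last-known-good copy of resolved addresses so a resolution outage degrades into stale answers instead of hard errors. A minimal sketch:

    import socket

    # Last-known-good cache of resolved addresses, keyed by hostname.
    _last_good: dict = {}

    def resolve_with_fallback(host: str, port: int = 443) -> list:
        """Resolve normally; on failure, fall back to the last successful answer."""
        try:
            infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
            addrs = sorted({info[4][0] for info in infos})
            _last_good[host] = addrs
            return addrs
        except socket.gaierror:
            if host in _last_good:
                return _last_good[host]  # stale, and possibly wrong if the service moved
            raise

    print(resolve_with_fallback("example.com"))

The trade-off is plain: stale addresses can point at capacity that has moved or been drained, so this softens a DNS outage rather than solving it.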
When unrelated services fail together
These incidents mattered because one shared provider or shared operational layer sat in front of many different products at once.
Salesforce Multi-hour Outage
High Severity Open record
A DNS-related failure affected Salesforce services and disrupted the large body of business workflows built on top of them.
The outage hit as an enterprise dependency problem, not just an app problem.
Shared platforms and front doors · SaaS DNS
- Internet Outage Atlas · Full Merged Research Report
Fastly Global Content Delivery Outage
Critical Open record
A dormant software bug sat in Fastly's network until a customer pushed a valid configuration change.
Within seconds, most of Fastly's global edge started returning errors. News sites, commerce platforms, and government pages disappeared together because the front door was more shared than it looked.
Shared platforms and front doors · CDN / Edge
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
Akamai Prolexic Outage
High Severity Open record
A platform designed to preserve availability became the source of unavailability instead.
Customers depending on Prolexic lost service because the defensive layer itself failed under load.
Shared platforms and front doors · DDoS mitigation
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
Akamai DNS Outage Silences FedEx, Airlines, and Major Banks
Critical Open record
A configuration update triggered a bug in Akamai Edge DNS and took down a long list of companies that looked unrelated until they failed at the same time.
Shared platforms and front doors · DNS / CDN
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
Microsoft 365 / Azure Global Outage
Critical Open record
A global Microsoft 365 and Azure outage tied to Azure Front Door configuration reinforced the atlas theme that the front door often fails harder than the applications behind it.
Shared platforms and front doors · Azure Front Door configuration
- Internet Outage Atlas · Full Merged Research Report
Cloudflare Bot-management Outage
Critical Open record
An internal Cloudflare bot-management failure propagated widely because the protective layer itself sat in front of customer traffic at massive scale.
The case fits the broader pattern of defensive systems becoming shared points of failure.
Shared platforms and front doors · Protective edge logic failure
- Internet Outage Atlas · Full Merged Research Report
When recovery tools and platform internals start failing too
The hardest cloud outages are not just service failures. They are outages where the systems needed to understand or recover the outage are also under stress.
AWS EC2 Failure Exposes Limits of Availability Zone Isolation
Critical Open record
A network upgrade misrouted EBS traffic and took volumes offline in us-east-1.
Reddit, Quora, Foursquare, and other services built too tightly around one zone lost their cushion fast.
Cloud and control planes · Cloud Infrastructure
- Internet Outage Atlas · Full Merged Research Report
Azure Storage Outage
Critical Open record
Human error during a storage-system deployment led to a broad Azure outage and became one of the clearer early examples of control-plane mistakes causing large customer impact.
The trigger was routine. The spread was not.
Cloud and control planes · Cloud deploy failure
- Internet Outage Atlas · Full Merged Research Report
AWS S3 US-East-1 Outage
Critical Open record
A mistyped debugging command removed more S3 capacity than intended.
Thousands of applications went down with it, including systems people did not realize depended on that region so heavily. It remains a lasting example of how one control-plane mistake can turn a routine local action into a public failure.
Cloud and control planes · Cloud / human error
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
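The lesson most teams took from this record is that destructive tooling should refuse to do too much in one step. A minimal sketch of that guard, with invented names and thresholds rather than anything from Amazon's actual playbooks:

    class CapacityRemovalError(Exception):
        pass

    def plan_removal(current_servers: int, requested: int,
                     max_fraction: float = 0.05, min_remaining: int = 100) -> int:
        """Cap how much capacity a single command may remove; thresholds are illustrative."""
        if requested > current_servers * max_fraction:
            raise CapacityRemovalError(
                f"refusing to remove {requested} of {current_servers} servers in one step "
                f"(limit is {max_fraction:.0%})"
            )
        if current_servers - requested < min_remaining:
            raise CapacityRemovalError("removal would drop capacity below the service floor")
        return requested

    # A fat-fingered argument is stopped instead of executed.
    try:
        plan_removal(current_servers=10_000, requested=4_000)
    except CapacityRemovalError as err:
        print(err)

Amazon's own postmortem described adding safeguards of roughly this kind: slower capacity removal and a floor below which capacity cannot be taken.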
Google Cloud Networking Outage
Critical Open record
A routine change cascaded through Google Cloud's networking systems and led to major traffic loss and degraded access across services.
The incident showed how internal reliability changes can widen into public unavailability.
Cloud and control planes · Cloud networking
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
AWS US-East-1 Control-Plane Outage
Critical Open record
Internal networking and DNS issues in us-east-1 disrupted AWS services, Amazon devices, logistics systems, and third-party applications.
The region concentration problem was visible, but so was the depth of internal dependency inside the same region.
Cloud and control planes · Cloud DNS and control plane
- Internet Outage Atlas · Full Merged Research Report
GCP us-east4 Traffic Loss
High Severity Open record
Traffic loss in Google Cloud's us-east4 region highlighted how regional networking faults can still create large downstream application problems when many services quietly share the same cloud locality.
Cloud and control planes · Regional cloud networking
- Internet Outage Atlas · Full Merged Research Report
Microsoft 365 Outage
Critical Open record
A long Microsoft 365 outage highlighted how deeply office coordination, messaging, documents, and identity have been consolidated into one operational dependency for many organizations.
Cloud and control planes · Enterprise suite outage
- Internet Outage Atlas · Full Merged Research Report
When one release or one internal dependency spreads everywhere
These failures travel through shared software, internal coordination systems, or platform dependencies that turn one change into widespread operational loss.
Skype Supernode Failure
High Severity Open record
A software problem destabilized Skype's supernode layer and broke service for a huge share of users, showing how coordination nodes inside distributed platforms can still become central failure points.
Platform and software cascades · P2P platform failure
- Internet Outage Atlas · Full Merged Research Report
Google Global 5-minute Outage
High Severity Open record
A brief but iconic Google outage took major services offline at the same time and became a durable example of how concentrated platform ecosystems can vanish all at once, even in a failure measured in minutes.
Platform and software cascades · Platform failure
- Internet Outage Atlas · Full Merged Research Report
Amazon.com Retail Outage
High Severity Open record
A high-profile Amazon retail outage showed how visible and immediate the impact becomes when a single commerce platform failure blocks browsing, purchasing, and order flow together.
Platform and software cascades · Commerce platform
- Internet Outage Atlas · Full Merged Research Report
NotPetya Global Outage
Critical Open record
NotPetya spread through a trusted software-update channel and crippled shipping, logistics, hospitals, and enterprise networks around the world.
It remains one of the clearest demonstrations of software supply chains acting like outage multipliers.
Platform and software cascades · Software supply chain
- Internet Outage Atlas · Full Merged Research Report
Zoom Partial Global Outage
High Severity Open record
Zoom experienced a broad service disruption during the period when remote work had made video infrastructure a daily dependency.
The incident showed how a platform that looks optional can become operationally central very quickly.
Platform and software cascades · Video platform
- Internet Outage Atlas · Full Merged Research Report
Slack File-storage Outage
High Severity Open record
A file-storage failure inside Slack disrupted access to uploads and working materials, showing how collaboration platforms break not only when messaging fails but also when their attached operational data stops moving.
Platform and software cascades · Storage exhaustion
- Internet Outage Atlas · Full Merged Research Report
Roblox 73-hour Outage
Critical Open record
Roblox went down for roughly three days after failures involving internal service-discovery and data systems compounded across a highly interconnected platform.
The length of the outage made the recovery-path problem impossible to ignore.
Platform and software cascades · Distributed systems failure
- Internet Outage Atlas · Full Merged Research Report
Slack Outage
High Severity Open record
Slack suffered a cascading failure involving database and cache systems, which disrupted messaging, connections, and workflow continuity for teams that depend on it as operating infrastructure.
Recovery was shaped by how many internal pieces were failing together.
Platform and software cascades · Collaboration platform
- Internet Outage Atlas · Full Merged Research Report
AT&T / T-Mobile / Verizon Roaming Outage
Critical Open record
A shared roaming dependency disrupted multiple major U.S. carriers at once, making the outage notable less for any one brand than for the hidden third-party relationship that linked them together.
Platform and software cascades · Third-party roaming dependency
- Internet Outage Atlas · Full Merged Research Report
Meta (Facebook / Instagram) Outage
Critical Open record
A broad Meta outage affecting Facebook and Instagram showed that even without a long root-cause disclosure, the operational story remains the same: concentrated social platforms fail at the scale of their audience.
Platform and software cascades · Platform ecosystem outage
- Internet Outage Atlas · Full Merged Research Report
CrowdStrike Falcon Global Outage
Critical Open record
A routine CrowdStrike update shipped a bad configuration file and crashed Windows at the kernel level on an estimated 8.5 million devices.
Airlines could not board passengers. Hospitals switched to paper. Banks shut down systems. Recovery was slow because every broken machine needed hands-on work.
Platform and software cascades · Software supply chain
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
GitHub Outage
High Severity Open record
A GitHub outage disrupted repository operations and development workflows at a layer many teams now treat as critical infrastructure rather than an optional collaboration tool.
Platform and software cascades · Developer platform outage
- Internet Outage Atlas · Full Merged Research Report
When geography and public systems stop being abstract
The cloud still depends on buildings, cables, power, carrier paths, and public-safety infrastructure that can fail in the same event.
Hurricane Katrina Telecom Failures
Critical Open record
Katrina destroyed fiber paths, towers, power, and fuel logistics together, collapsing multiple redundant systems at once.
It remains one of the clearest examples of geography overpowering abstract redundancy claims.
Physical and public infrastructure · Telecom infrastructure
- Internet Outage Atlas · Full Merged Research Report
The Planet Houston Outage
High Severity Open record
A power failure at The Planet's Houston datacenter exposed how fragile backup systems can be when they are tested under real pressure.
Thousands of hosted servers went dark and recovery stretched across days.
Physical and public infrastructure · Datacenter power
- Internet Outage Atlas · Full Merged Research Report
Level 3 Fiber Outage
High Severity Open record
A backbone fiber disruption at Level 3 highlighted how physical transport failures can still cascade into broad connectivity problems across downstream networks that rely on the same paths.
Physical and public infrastructure · Backbone fiber failure
- Internet Outage Atlas · Full Merged Research Report
Comcast Fiber Cut Outage
High Severity Open record
A large Comcast outage traced back to physical infrastructure damage and showed how ordinary cable-path failures can still produce wide consumer and enterprise impact.
The cloud did not make the fiber less real.
Physical and public infrastructure · Fiber cut
- Internet Outage Atlas · Full Merged Research Report
CenturyLink 911 Outage
Critical Open record
A network failure disrupted 911 service across multiple states and affected millions of customers.
The incident showed how emergency calling systems could still share failure domains with commercial backbone infrastructure.
Physical and public infrastructure · Public-safety outage
- Internet Outage Atlas · Full Merged Research Report
- The Days the Internet Died
Rogers Canada: 12 Million Without Service, Including 911
Critical Open record
A core network upgrade removed a critical routing filter, flooding the core routers with far more routes than they could handle.
Rogers collapsed under the load, taking mobile service, internet access, and 911 with it for millions of people.
Physical and public infrastructure · Telecom core failure
- Internet Outage Atlas · Full Merged Research Report
Verizon Mobile Outage
Critical Open record
A major Verizon mobile outage underscored how quickly carrier failures still spill into daily public life once voice, data, authentication, and payment flows all assume cellular reachability.
Physical and public infrastructure · Mobile carrier outage
- Internet Outage Atlas · Full Merged Research Report
Verizon Mobile Outage
High Severity Open record
A later Verizon mobile outage, even with thinner public disclosure, remains useful in the atlas because it reinforces how dependent daily communications and service access remain on a small number of carrier systems.
Physical and public infrastructure · Mobile carrier outage
- Internet Outage Atlas · Full Merged Research Report
AWS Middle East Drone-strike Outage
Critical Open record
This incident is preserved because it forces the cloud back into physical reality: regional availability ultimately depends on facilities, geography, power, and security on the ground.
Physical and public infrastructure · Physical attack on cloud infrastructure
- Internet Outage Atlas · Full Merged Research Report