Facebook’s evil empire went down this week – here’s why
Facebook goofed in a big way.
While conspiracy theorists floated all kinds of wacky explanations for why web browsers forgot that Facebook existed, the real reason was far more benign – a misconfigured update to the company’s data center routers.
That’s it: a routine configuration update on the backbone routers that carry traffic between Facebook’s data centers took the whole thing offline. And it wasn’t just the social media sites that were affected.
Employees, contractors, and the very engineers who were supposed to be fixing the issue couldn’t log into their work accounts. They also lost internal issue-tracking tools, internal chat, and remote access to some of the data centers – and even the ability to enter buildings, since smart-badge authentication wasn’t working.
The outage even affected non-Facebook companies like AdGuard, whose DNS servers got overloaded with requests trying to resolve Facebook’s domains. AdGuard scrambled to put together a fix, and an hour later had mitigations in place that reduced the load on its own servers.
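When a big name suddenly stops resolving, clients don’t just give up – they retry, over and over, which is exactly the kind of load spike AdGuard was fighting. Below is a minimal Python sketch of one common way to blunt it, negative caching: briefly remembering that a lookup failed and failing fast from that memory instead of querying upstream again. The 60-second window and all names here are illustrative assumptions, not AdGuard’s published fix.

```python
import socket
import time

NEGATIVE_TTL = 60.0  # assumption: seconds to remember a failed lookup
_failed_at: dict[str, float] = {}  # hostname -> time its last lookup failed

def resolve(name: str) -> list[str]:
    """Resolve a hostname, with a short negative cache for failures."""
    failed = _failed_at.get(name)
    if failed is not None and time.time() - failed < NEGATIVE_TTL:
        # Fail fast from the negative cache instead of hitting upstream again.
        raise socket.gaierror(f"{name}: recently failed, not retrying yet")
    try:
        infos = socket.getaddrinfo(name, None)
    except socket.gaierror:
        _failed_at[name] = time.time()  # cache the failure
        raise
    _failed_at.pop(name, None)  # lookup works again, forget the failure
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    for host in ("facebook.com", "definitely-not-real.invalid"):
        try:
            print(host, "->", resolve(host))
        except socket.gaierror as err:
            print(host, "->", err)
```

Ask for the dead name twice in a row and the second attempt fails instantly from the cache rather than tying up another upstream query – which, multiplied across millions of retrying clients, is the difference between a busy resolver and a melting one.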
Sorry, conspiracy nuts: it wasn’t Facebook trying to dodge the repercussions of that 60 Minutes whistleblower interview. Nor was it payback, or the universe dishing out some instant karma. As with most internet issues, it’s usually (always?) DNS.
- 5 of the scariest takeaways from the Facebook whistleblower interview with 60 Minutes
- Celebrities on Instagram make people feel like shit according to a new study
- Microsoft CEO says the failed TikTok deal was the ‘strangest thing I’ve ever worked on’
- Facebook swears the good outweighs the bad on Instagram when it comes to mental health