What We Learned from Hurricane Sandy about Disaster Preparedness – Part 2 of 3: What Went Wrong

Most responsible corporations have well-thought-out Disaster Recovery/Business Continuity plans. Unfortunately, DR/BC plans rely heavily on transportation and power being available somewhere, not to mention food and fuel.

With Sandy, multiple interdependent failure vectors completely foiled, or at least drastically delayed, plans to return to even the most basic level of business functionality.

I’m not going to rehash the stories of loss of life, property and basic necessities in New Jersey, New York and surrounding areas. The destruction saddens us all. Instead, let’s try to move forward and take a hard look at how we can do things differently next time.

In my previous post, we explored the loss of “the big three”: power, communications and transportation. Let’s take a closer look at how those events made some Disaster Recovery/Business Continuity plans utterly unusable, from a cause-and-effect perspective.

•  Widespread and Long-duration Power Outage

o  With internal corporate phone and data systems down, Customer Service, the most basic function a company needs to stay in business, was unreachable. Customers wishing to place orders may have had no choice but to seek other providers.
o Hosted datacenters were, for the most part, operational throughout the event, but the data systems were inaccessible because the endpoints (corporate offices) were down.

•  Transportation Issues

o  Curfews and blocked or washed-out roads hampered individual travel. Public transportation was essentially nonexistent. Workers were unable to travel to temporary work sites.

o  Repair crews were, and continue to be, delayed in their efforts by impassable roads and hazardous conditions; outages continue.

•  Worker Issues

o  Power and heat were unavailable in many areas. Fuel was in short supply. Food stocked at home was consumed, and there was no way to restock; few expected to be unable to buy food for such an extended period. These, quite rightly, became the primary concerns of affected workers.

So, what have we learned so far? Disaster Recovery is not sufficient. What is sorely needed is a Disaster Preparedness plan. Recovery Plans are, by nature and strict definition, reactive. Without proactive planning, corporations will find themselves at a significant disadvantage, again, when another event of this magnitude occurs. And chances are very good that it will.

In Part 3 of this series, we’ll get to real-world steps that can be taken right now to at least partially mitigate a full-blown disaster before it strikes.

On to Part 3…


Terminal server has exceeded the maximum number of allowed connections

Have you ever gotten this error?

“The terminal server has exceeded the maximum number of allowed connections. The system can not log you on. The system has reached its licensed logon limit. Please try again later.”
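The full post covers the details, but here is a minimal sketch of one common workaround, assuming the usual cause: stale disconnected sessions tying up the server’s two remote-administration connection slots. The script below simply wraps the standard Windows tools qwinsta and rwinsta (the executables behind “query session” and “reset session”); the server name is a placeholder, not something from the article.

```python
# Minimal sketch: list the sessions on a remote machine, then reset a
# stale disconnected one by its ID, freeing a connection slot.
# qwinsta/rwinsta are the standard Windows executables behind
# "query session" and "reset session".
import subprocess

SERVER = "YOUR-SERVER"  # placeholder host name; substitute your own

# Show every session (active and disconnected) on the server.
sessions = subprocess.run(
    ["qwinsta", f"/server:{SERVER}"],
    capture_output=True, text=True, check=True,
)
print(sessions.stdout)

# Reset one session by the ID from the listing above. This logs the
# session off and releases its connection slot, so use it with care.
session_id = input("Session ID to reset (blank to skip): ").strip()
if session_id:
    subprocess.run(["rwinsta", session_id, f"/server:{SERVER}"], check=True)
```

Alternatively, connecting with mstsc /admin (the /console switch on older clients) attaches to the console session, which does not count against the two-connection limit.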

What We Learned from Hurricane Sandy about Disaster Preparedness – Part 1 of 3: The Problem

In this three-part series, I will be reviewing some of the more significant lessons we learned from this particularly devastating event.

In the days leading up to landfall, several factors worked together to disrupt both large and small businesses. Most significant of these was human nature, our natural tendency toward optimistic expectations: ‘The storm may not turn west’, ‘We’ve lived through hurricanes before’ and ‘We have a business continuity plan, so we should be fine’.

Then reality set in. The sheer scope of destruction caused by Hurricane Sandy was unlike anything most people in our area had seen before.

First, Sandy caused much more damage than most people expected, and the damage was both widespread and long-lasting. Published power restoration estimates were vague and hard to interpret, and actual time-to-repair was difficult to predict.

Second, we lost the Big Three: power, communications and transportation. Losing all three concurrently was completely unexpected to most and caused an infrastructure cascade effect; the loss of each hampered the remediation of the other two. Power and communications could not be quickly restored due to transportation issues such as blocked roads and fuel shortages, while road-clearing efforts were hampered by communications failures from downed lines and failing backup batteries in some cell towers.

Business continuity plans, for the most part, were designed around predictable single-system failures, such as a power outage, or were strongly focused on things like server recovery, alternate telco providers and datacenters.

Failure to integrate the possibility of multiple interrelated and mutually interdependent failure points into business continuity plans left many mid-sized businesses unable to operate at even a minimal level.

In my next post, we’ll explore the business impact and what went wrong.

On to Part 2…

This article is by VelocIT, a fully managed service provider offering outstanding IT support and managed services to growing businesses.