Prior to Hurricane Irma's arrival, Rick Vanover of backup and replication software firm Veeam provided recommendations for companies in the storm's path. Vanover told Data Center Knowledge that businesses should follow the 3-2-1 rule: keep at least three copies of all data, on two different media (such as disk and cloud), with at least one copy stored remotely (such as in the cloud). In terms of what qualifies as remote, Vanover said it should be a distance of at least 100 miles.
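The 3-2-1 rule is concrete enough to check automatically. Here is a minimal sketch of such a check in Python; the `BackupCopy` record, the sample inventory, and the distances are illustrative assumptions, not part of any vendor's tooling, though the thresholds (three copies, two media, one copy at least 100 miles away) follow Vanover's guidance above.

```python
# Sketch: checking a backup inventory against the 3-2-1 rule described above.
# BackupCopy and the sample data are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    medium: str               # e.g. "disk", "tape", "cloud"
    miles_from_primary: float

def satisfies_3_2_1(copies, min_remote_miles=100):
    """At least 3 copies, on 2+ media, with 1+ copy >= 100 miles away."""
    media = {c.medium for c in copies}
    remote = [c for c in copies if c.miles_from_primary >= min_remote_miles]
    return len(copies) >= 3 and len(media) >= 2 and len(remote) >= 1

copies = [
    BackupCopy("disk", 0),     # on-site primary copy
    BackupCopy("disk", 0),     # on-site secondary copy
    BackupCopy("cloud", 850),  # remote cloud copy
]
print(satisfies_3_2_1(copies))  # True: all three conditions are met
```

A plan with three on-site disk copies would fail the check: it has the copy count but neither the second medium nor the remote location.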
Hurricane Irma, like Harvey and Maria, had a devastating human toll, from loss of life to destruction of family homes. However, it also did terrible damage to companies, government agencies, and nonprofits. As businesses respond to these massively destructive events, the vital role of cloud computing in disaster recovery is becoming more evident.
Vanover’s advice was helpful as steps prior to a hurricane. What else can we learn now about disaster recovery and cloud’s potential role in facilitating it?
A Look At Disaster Recovery
Disasters can strike any organization, and a company's ability to stabilize and stay solvent will depend in large part on its preparedness, as outlined in its business continuity and disaster recovery (DR) plans. To be truly "ready for anything" is to be prepared for any type of disaster, environmental or human-made, wherever your applications run, especially when those environments are mission-critical.
Despite the clear benefits of stability and the high cost of downtime, businesses often hesitate to invest in a carefully strategized DR plan because it can feel extraneous to the immediate needs of the company. When the Disaster Recovery Preparedness (DRP) Council (a partnership of government, academia, and the IT industry) studied DR preparation, the results were startling. The benchmark survey graded organizations on a typical A to F scale, and nearly three-quarters of respondents, 72%, scored a D or an F. In other words, at that point (2013), most companies were failing at DR.
DR plans must move forward, though; they really are not optional. They are, in large part, an effort to avoid or limit downtime. To understand the importance of downtime, consider how much it costs. A 2013 report from Jason Verge looked at data center downtime. Ponemon Institute researchers assessed 67 data centers, each at least 2,500 square feet, serving a variety of industries. The study found that a minute of downtime costs $7,900 on average, and the figure is trending upward: the average cost of a minute of downtime in 2010 was $5,600, so the 2013 numbers indicated a 41% rise over that period.
Multiply that by the length of a typical incident and you see how expensive it is to be "out of service." The study put the average unplanned outage at 82 minutes, with an average cost per incident of $690,200. When an entire data center went down, the average recovery time was 119 minutes, translating to a total expense of $901,500, nearly a million dollars. A partial data center outage was significantly shorter at 56 minutes on average, which came out to about $350,400.
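The back-of-the-envelope arithmetic is straightforward. Note that the per-incident averages reported by the study differ somewhat from a straight per-minute multiplication (the cost of a minute varies by outage type), but the order of magnitude matches:

```python
# Rough downtime-cost arithmetic using the 2013 Ponemon figure cited above.
COST_PER_MINUTE = 7_900  # USD, 2013 average

def downtime_cost(minutes, cost_per_minute=COST_PER_MINUTE):
    """Estimated cost of an outage of the given length, in USD."""
    return minutes * cost_per_minute

print(downtime_cost(82))   # 647800 -> average unplanned outage
print(downtime_cost(119))  # 940100 -> full data center outage
print(downtime_cost(56))   # 442400 -> partial outage
```

Even the most conservative of these estimates runs well into six figures per incident, which is the real argument for funding a DR plan up front.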
Disasters are horrific in their cost. Are they a real threat, though? The answer is a strong yes, especially in the wake of Hurricanes Irma, Harvey, and Maria. It is not just about hurricanes, either; it is a long-established reality that "disasters happen." In fact, 91% of respondents to the 2013 DRP Council survey said they had experienced unplanned downtime of their data center at least once in the previous two years.
How Did Cloud Protect Businesses During Irma?
Thinking of cloud as a safeguard rather than a vulnerability requires a change in perspective from the cloud skepticism that persists to some degree. A survey of 300 IT leaders and security directors, featured by David Linthicum, found that 57% of respondents believe the cloud to be secure; among the IT decision-makers specifically, a full 78% said they believe cloud is generally secure. Talking about security is a way of talking about how solid the system is: how well backed up it is, and how many protections are in place to keep it operational if you come under attack in any way (even from a hurricane).
To understand how cloud computing served an important protective function for businesses during Hurricane Irma (and how it could be valuable in any disaster), it is helpful to simply look at how the cloud model is well-suited for DR situations. There are several reasons given in favor of cloud computing for DR by DisasterRecovery.org. Examples are that it cuts costs, is highly scalable, and is optimally efficient (since it gets rid of self-owned and self-maintained equipment, as well as taking advantage of economies of scale by using a larger data center).
All of those strengths of cloud could be considered side benefits if you are evaluating it for your DR strategy. The main advantage, though, is that it is simply in a remote geographical location. Using cloud adds redundancy to any on-site computing system by creating another physical location in which data can reside and be processed. That remoteness, in and of itself, is protection from any type of disaster that might hit your physical location.
That way, whether the data center is partially or completely destroyed, data can be recovered and production can resume quickly. DisasterRecovery.org noted the danger of backing up too close to home, since some natural disasters can have a broad impact on sites in the same area. "[I]f there is an off-site production center and if it is geographically in the vicinity," said the site, "it too may be affected." Cloud computing resolves these issues simply by being at a distance.
Cloud has another advantage beyond location, and it also goes beyond efficiency and cost: cloud is engineered with disaster recovery as a central concern. This technology uses a distributed architecture, spreading the resources that run your site and applications across hundreds or thousands of different machines, all working together to form a single cloud server. In this way, cloud systems are fundamentally designed with multiple redundancies, drawing resources from many different servers. That diversity is powerful, particularly when it is spread across data centers in multiple geographical locations.
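Geographic diversity is also something you can verify rather than assume. Below is a minimal sketch that checks whether a set of sites meets the "at least 100 miles apart" guidance from earlier in the article, using the standard haversine great-circle formula; the site coordinates and the `geographically_diverse` helper are illustrative assumptions, not part of any provider's API.

```python
# Sketch: checking that replica sites are far enough apart to count as
# geographically diverse. Coordinates below are illustrative examples.
from math import radians, sin, cos, asin, sqrt

def miles_between(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(h))  # 3,959 miles = Earth's mean radius

def geographically_diverse(sites, min_miles=100):
    """True if every pair of sites is at least min_miles apart."""
    return all(
        miles_between(sites[i], sites[j]) >= min_miles
        for i in range(len(sites))
        for j in range(i + 1, len(sites))
    )

# Example (lat, lon) coordinates:
miami = (25.76, -80.19)
fort_lauderdale = (26.12, -80.14)  # roughly 25 miles from Miami
dallas = (32.78, -96.80)           # well over 1,000 miles from Miami

print(geographically_diverse([miami, fort_lauderdale]))  # False: too close
print(geographically_diverse([miami, dallas]))           # True
```

The first pair fails the check: a backup in Fort Lauderdale would likely sit inside the same hurricane's path as a Miami primary, which is exactly the vicinity problem DisasterRecovery.org warns about.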
Data Lessons From Hurricane Irma
Is there a key lesson from Irma about disaster preparedness and recovery? Yes, and it is twofold: 1) do it; and 2) use the cloud. Cloud is not just at a distance. A credible cloud provider is also an organization whose primary focus is stability.
When you need hurricane-grade protection, trust a cloud built by long-standing infrastructure experts. Atlantic.Net meets the standards of the American Institute of CPAs' Statement on Standards for Attestation Engagements No. 16 (SSAE 16), as indicated by an independent audit. Its cloud is based on world-class hosting infrastructure, with RAID-10 SSD storage for each machine. Create your cloud server.
Image from https://www.iland.com/solutions/disaster-recovery/