Replication of data across data centers is an accepted “best practice” for protecting files from damage, natural disasters and other risks. Storage vendors understand this quite well and over the years have developed proprietary protocols whereby their storage arrays talk to each other and replicate data between them. Buyers have come to expect this and take it for granted that these protocols will function properly, and they do. Note that each vendor has its own protocol, specific to its storage array.
Each vendor’s protocol may produce the desired result, properly replicated data, but as buyers we rarely question vendors about the efficiency of their protocols. Replication requires telecommunication/network links, and these are expensive, especially when replication requires long-distance satellite links. Occasionally vendors are put to the test in a head-to-head Proof of Concept (PoC), and I was fortunate to meet a systems engineer who had recently concluded one.
This PoC involved replicating data from a data center in Europe to one in North America for a large financial institution. The buyer had prepared a set of benchmarks specifying the amount of data to be replicated and the time window within which replication had to complete. Given these parameters, two vendors were put to the test.
In order to meet the objectives, Vendor A required 16 satellite links to replicate the data in the time allowed. Vendor B was able to meet the data volume and time objective with only three (3) [This is not a typo.] satellite links. Imagine the savings, or the expense, of 13 additional satellite links! This reduction in telecommunication (operating) expenses more than outweighs any difference in up-front capital expenses for the storage arrays. What is most impressive is that the buyer took the time, effort and expense to conduct the PoC on a topic rarely studied—protocol efficiency. And for that effort they will enjoy many months of reduced telecommunication expenses.
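The arithmetic behind such a result can be sketched simply: the workload and time window fix the required throughput, and the protocol's efficiency determines how much of each link's raw bandwidth becomes useful replicated data. The figures below (data volume, window, link bandwidth, and the two efficiency values) are hypothetical illustrations chosen to reproduce a 16-vs-3 split; they are not numbers from the actual PoC.

```python
import math

def links_required(data_gb: float, window_hours: float,
                   link_mbps: float, protocol_efficiency: float) -> int:
    """Links needed to replicate data_gb within window_hours.

    protocol_efficiency is the fraction of a link's raw bandwidth
    that the replication protocol turns into useful data transfer
    (the rest is lost to chatter, retransmits, latency stalls, etc.).
    """
    # GB -> megabits, divided by the window in seconds
    required_mbps = (data_gb * 8 * 1000) / (window_hours * 3600)
    effective_mbps = link_mbps * protocol_efficiency
    return math.ceil(required_mbps / effective_mbps)

# Same workload over 45 Mbps satellite links, two hypothetical protocols:
workload_gb, window_h, link_mbps = 1134, 24, 45
print(links_required(workload_gb, window_h, link_mbps, 0.15))  # chatty protocol -> 16
print(links_required(workload_gb, window_h, link_mbps, 0.80))  # efficient protocol -> 3
```

The point of the sketch is that link count scales inversely with protocol efficiency, so an efficiency gap of a few times multiplies directly into recurring telecom cost.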