Put your data management knowledge into practice
Let’s use the concepts from my last post to expand on why each of these data management solutions can play an important role in ensuring the safety and availability of your organization’s data. Now that you know what some of these concepts actually mean, let’s talk about implementation a bit more.
First, let’s start with replication. This is one of the most important data management functions, because a disaster recovery site with the data ready to turn on is probably the most visible form of protection to users and upper management. Too often, organizations don’t use or test their replicated sites enough. Don’t fall victim to this oversight. Replicated sites should be tested regularly, and your organization should have clearly written test procedures so multiple personnel can run them (in case your system administrator happens to be taking a personal day). A side benefit of testing replicated sites is that the same failover skills protect you during hardware maintenance, patching and upgrades. The two main reasons organizations don’t test are fear of failure and WAN bandwidth constraints. Lucky for you, SwishData can help with both.
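To show what a written, repeatable test procedure can look like in practice, here’s a minimal Python sketch that any administrator could run on a schedule: it verifies that a replicated directory tree still matches the primary by comparing file checksums. The paths, function names and SHA-256 choice are illustrative assumptions, not any vendor’s tooling; real replica validation would use your storage platform’s own verification features.

```python
import hashlib
from pathlib import Path

def tree_checksums(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    sums = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            sums[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return sums

def verify_replica(primary: Path, replica: Path) -> list:
    """Return relative paths that are missing or differ on the replica.

    An empty list means the replica matches the primary."""
    p, r = tree_checksums(primary), tree_checksums(replica)
    return sorted(rel for rel in p if r.get(rel) != p[rel])
```

Wired into a nightly job that alerts on a nonempty result, a script like this gives multiple people a push-button way to sanity-check the replica without waiting for the one admin who knows the environment.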
Call the DR
Now, let’s talk about DR copies. Unless you use NetApp Syncsort Integrated Backup (NSB) or source-side deduplication (and you should), backing up directly from the replicated copy saves time, bandwidth and cost. The alternative is to run a standard backup and then either create tapes for offsite storage or replicate that backup to the DR site. Trucking tapes is not the best option, but it has been the standard answer to DR copies for decades; there’s gold in those mountains of tapes! Well, not for your organization, but for the vendors that sell and store them.
Replicating backup copies made at the primary site to a DR site not only seems redundant; it is. Replicate once, then make your DR copies as backups of that replica.
Back, back, back it up
So where does that leave normal backups at the primary site? Is a local backup copy redundant if you have local snapshots and DR copies? Not necessarily. The danger is that if your primary storage array suffered a total outage, you’d have to bring up your DR site and run from there while restoring your primary site over the same WAN you’re now using to serve production data. It can work, and it won’t cause an outage, but it isn’t ideal. Best practices point to keeping both a local backup copy and a DR copy. The catch: now you need four copies of your data, when you could cut that by 25 percent simply by accepting the remote chance that your admins will spend extra time recovering local data over the WAN while running from the DR site. With SwishData’s WAN maid service, it’s doable, but your organization has to decide whether the savings are worth it.
Push the easy button
Snapshots are the most important copy of all. If implemented properly, 80 percent or more of restores will come from the snapshot copy, and snapshots are the lowest-impact way for an organization to protect its data.
Snapshots can significantly change the picture for the local backup copy. The local backup could be eliminated completely, saving a quarter of your organization’s data storage costs. If it isn’t eliminated, the best way to create a local backup is from the snapshot copy, not from the primary copy. That removes the backup window completely. That’s right: no backup window at all, and no more sleepless nights trying to squeeze your backups into an ever-shrinking window in a 24-hour day.
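To make “back up from the snapshot, not the primary” concrete, here’s a generic Python sketch under stated assumptions: it takes a near-instant point-in-time view of a directory using hard links, then writes the backup archive from that frozen view so the live data is never held open for the duration of the backup. Real array snapshots work at the storage layer with copy-on-write (hard links would show through in-place edits), so treat the names and mechanism here as illustration only.

```python
import os
import tarfile
from pathlib import Path

def take_snapshot(live: Path, snap: Path) -> None:
    """Create a point-in-time view of `live` by hard-linking every file.

    Hard-linking is near-instant, so the 'freeze' itself takes
    effectively no window. (Illustrative only: unlike a real
    copy-on-write snapshot, in-place edits to live files would
    still show through a hard link.)"""
    for path in live.rglob("*"):
        target = snap / path.relative_to(live)
        if path.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        else:
            target.parent.mkdir(parents=True, exist_ok=True)
            os.link(path, target)

def backup_from_snapshot(snap: Path, archive: Path) -> None:
    """Write the backup archive from the frozen snapshot view,
    not the live tree, so production I/O isn't competing with
    the backup for the whole run."""
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(snap, arcname=snap.name)
```

The design point is the ordering: the only step touching live data is the instantaneous snapshot, and the slow part (archiving) reads from the snapshot, which is why the traditional backup window disappears.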
Get in touch with me now to see how this post, and my post where I introduced these concepts, are relevant to your data center design. A SwishData architect will be happy to meet you for a whiteboard session to make it all very clear.