VMware environment backup with vSphere Data Protection to cold storage
-
FreeNAS and ZFS are not things that you want. Avoid both of them. Building your own file server / NAS is good, but FreeNAS should never be considered, and ZFS only by storage engineers, and only perhaps 1% of the time; it is a very special use case.
Why FreeNAS and other NAS OS Don't Make Sense
The Cult of ZFS
Cold storage normally means tape or other offline storage type. How are you using the term here?
-
Hi @scottalanmiller, thanks for your reply.
I'm not planning to replicate the backups over the network or internet to another VDP appliance; my daily backup volume is too large to transfer over the internet. Even taking the full day at the full 100 Mbps of bandwidth, the upload would not finish.
As I have a secure store (a physical storage location far from the main office), I'm planning to send and retrieve hard disks physically on a weekly basis. What I have in mind so far is as follows, though I'm not sure it is the best way:
- on the QNAP NAS, have two RAID 5 configurations hosting iSCSI datastores, connected as BackupStore01 and BackupStore02
- each store holds one of two VDP appliances, VDP01 and VDP02
- once I bring back a RAID 5 disk set and attach it to the QNAP, I will power on its VDP appliance and let it take snapshots of the VMs covering up to one week of data; at the same time, I will send the other RAID 5 disk set offsite.
The issue with this setup is that the first two days of the week will be one long backup covering a week of incremental data.
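To put numbers on the bandwidth problem, here is a rough sketch. The 2 TB daily backup size and the 80% link-efficiency derating are assumptions for illustration; the thread only states that the daily volume cannot finish uploading in a day at 100 Mbps.

```python
# Rough transfer-time estimate for offsite replication over a WAN link.
# The backup size and efficiency figures are illustrative assumptions.

def transfer_hours(size_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to move `size_tb` terabytes over a `link_mbps` link,
    derated by `efficiency` for protocol overhead."""
    size_bits = size_tb * 1e12 * 8              # decimal TB -> bits
    effective_bps = link_mbps * 1e6 * efficiency
    return size_bits / effective_bps / 3600

print(round(transfer_hours(2.0, 100), 1))       # roughly 55.6 hours with these assumptions
```

At a perfect 100 Mbps the link tops out at roughly 1 TB per day, so any daily backup set much above that simply cannot finish within 24 hours, which matches the complaint above.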
-
@Mohammed-Fota said:
- on the QNAP NAS, have two RAID 5 configurations hosting iSCSI datastores, connected as BackupStore01 and BackupStore02
- each store holds one of two VDP appliances, VDP01 and VDP02
- once I bring back a RAID 5 disk set and attach it to the QNAP, I will power on its VDP appliance and let it take snapshots of the VMs covering up to one week of data; at the same time, I will send the other RAID 5 disk set offsite.
RAID 5? Are those SSDs?
-
@Mohammed-Fota said:
- on the QNAP NAS, have two RAID 5 configurations hosting iSCSI datastores, connected as BackupStore01 and BackupStore02
Technically, since you're using iSCSI that is not NAS, it's SAN. Why are you using iSCSI instead of CIFS/SMB or NFS?
-
You have two VDP appliances in the same location? How come?
-
I would not be happy about moving a RAID 5 set of drives around every week. Will it work? Sure, probably for a while, but that's a lot of stress on those drives. I'd move up to RAID 10 or at least RAID 6.
Have you considered a tape solution instead? Does VDP support tape?
-
RAID sets are not to be ever moved. Ever. If someone suggests this, you have problems. RAID is not meant to be used that way, ever. While technically on paper you might make it sound almost rational, in real life this is totally unacceptable. This would be a "once in an array lifetime" event because the chassis died. You would never do this intentionally. This is a misunderstanding of the goals and physical implementations of RAID systems.
-
@Dashrender said:
Technically, since you're using iSCSI that is not NAS, it's SAN. Why are you using iSCSI instead of CIFS/SMB or NFS?
I think I mentioned the wrong product name; it's a QNAP TurboNAS, https://www.qnap.com/i/en/product/model.php?II=203. I will attach it with iSCSI instead, as it has slightly higher performance than CIFS/NFS with vSphere.
-
@Dashrender said:
RAID 5? are those on SSDs?
It is going to be standard 4 TB 3.5" SATA drives, which should be fast enough to receive the daily backup and are not that expensive.
@Dashrender said:
You have two VDP appliances in the same location? How come?
As I'm trying to create a duplicate of the backup disks, each week one set will be offsite while the other is attached and receiving the backup. If I use one VDP appliance, I can attach disks only at its initial setup; I can attach existing disks, but I cannot detach them. The only way is to create two VDP appliances, each stored on a separate RAID set.
@Dashrender said:
I would not be happy about moving a RAID 5 set of drives around every week. Will it work? Sure probably for a while, but man that's a lot of stress on those drives? I'd move up to a RAID 10 or at least RAID 6.
Have you considered a tape solution instead? Does VDP support tape?
I think RAID 6 would be good for two-disk fault tolerance. No, VDP does not support tape drives, http://blogs.vmware.com/vsphere/2014/02/vdp-backup-to-tape.html, and even if it did, 8+ TB of tape storage is a very big expense.
@scottalanmiller said:
RAID sets are not to be ever moved. Ever. If someone suggests this, you have problems. RAID is not meant to be used that way, ever. While technically on paper you might make it sound almost rational, in real life this is totally unacceptable. This would be a "once in an array lifetime" event because the chassis died. You would never do this intentionally. This is a misunderstanding of the goals and physical implementations of RAID systems.
I think it makes sense to recalculate the overall time and money cost, switch to offsite replication over the internet instead, and push harder on my management to agree to what is required.
-
Tapes are designed to be moved and carried around; drive enclosures aren't. It's one thing for a single-drive USB unit to be in a technician's bag, backed up in case of failure, but a whole multi-drive enclosure? You're talking about using it way outside its intended use.
The potential for damage is high. You create 8+ TB of data every week? Someone creating and caring about that amount of data, I would expect to have the larger budget required to do this in a more appropriate way.
-
I highly recommend you talk to the TAM taking care of your account to suggest a proper offsite backup solution. Using VDP, you could try architecting an offsite solution along those lines.
Just a thought, as VDP has its limitations.
-
@Mohammed-Fota said:
@Dashrender said:
Technically, since you're using iSCSI that is not NAS, it's SAN. Why are you using iSCSI instead of CIFS/SMB or NFS?
I think I mentioned the wrong product name; it's a QNAP TurboNAS, https://www.qnap.com/i/en/product/model.php?II=203. I will attach it with iSCSI instead, as it has slightly higher performance than CIFS/NFS with vSphere.
Unless you are really, really bottlenecked there, I generally recommend against iSCSI in a situation like this; NFS is normally best. If this is purely for backups, iSCSI is certainly a more acceptable option than in most situations, but I would still use NFS when possible.
-
@Mohammed-Fota said:
@Dashrender said:
Technically, since you're using iSCSI that is not NAS, it's SAN. Why are you using iSCSI instead of CIFS/SMB or NFS?
I think I mentioned the wrong product name; it's a QNAP TurboNAS, https://www.qnap.com/i/en/product/model.php?II=203. I will attach it with iSCSI instead, as it has slightly higher performance than CIFS/NFS with vSphere.
When iSCSI is enabled, it is still acting as a SAN, not a NAS, regardless of the product's brand name.
-
@Mohammed-Fota said:
@Dashrender said:
RAID 5? are those on SSDs?
It is going to be standard 4 TB 3.5" SATA drives, which should be fast enough to receive the daily backup and are not that expensive.
RAID 5 isn't okay to use then. Use RAID 10 if there are four drives, and RAID 6 can be considered if you have five or more drives. RAID 5 should never be considered unless it is a pure SSD array.
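The risk behind that rule can be sketched numerically. This is a back-of-the-envelope model assuming the commonly published 1-in-10^14-bit unrecoverable read error (URE) spec for consumer SATA drives; real-world error rates vary.

```python
# Why RAID 5 on large SATA drives is risky: during a rebuild every
# surviving drive must be read end to end, and a single unrecoverable
# read error (URE) can fail the rebuild. The URE rate is an assumed
# datasheet figure, not a measured one.

def rebuild_failure_prob(drive_tb: float, surviving_drives: int,
                         ure_rate: float = 1e-14) -> float:
    """Probability of hitting at least one URE while reading every
    surviving drive in full during a RAID 5 rebuild."""
    bits_read = drive_tb * 1e12 * 8 * surviving_drives
    return 1 - (1 - ure_rate) ** bits_read

# A 4-drive RAID 5 of 4 TB SATA disks: 3 survivors to read on rebuild.
print(f"{rebuild_failure_prob(4, 3):.0%}")   # well over a 50% chance under these assumptions
```

The same calculation with enterprise-class drives (often specified at 1 in 10^15) gives a far smaller probability, which is part of why the drive class matters as much as the RAID level.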
-
@Mohammed-Fota said:
I think RAID 6 would be good for two-disk fault tolerance. No, VDP does not support tape drives, http://blogs.vmware.com/vsphere/2014/02/vdp-backup-to-tape.html, and even if it did, 8+ TB of tape storage is a very big expense.
Tape is expensive but once you go to cold storage, it generally gets very cheap, very quickly.
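The cold-storage economics can be sketched with a simple amortisation model; the drive and cartridge prices below are illustrative assumptions, not real quotes, and the point is only that the fixed drive cost spreads thin as the cartridge count grows.

```python
# Cost-per-TB of a cold-storage medium with a one-off hardware cost
# (e.g. a tape drive) plus cheap per-cartridge media. All prices are
# hypothetical placeholders for illustration.

def cost_per_tb(fixed_cost: float, media_cost: float,
                media_capacity_tb: float, cartridges: int) -> float:
    """Total cost per usable TB: hardware amortised across all media."""
    total = fixed_cost + media_cost * cartridges
    return total / (media_capacity_tb * cartridges)

# e.g. a drive at $2500 with $30 cartridges holding 2.5 TB each:
for n in (4, 20, 50):
    print(n, round(cost_per_tb(2500, 30, 2.5, n), 2))   # falls sharply as n grows
```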
-
@Mohammed-Fota said:
I think RAID 6 would be good for two-disk fault tolerance
The number of disk failures is never a consideration with RAID; that's a myth, and it is important to avoid thinking that way. RAID 6 and RAID 10 are your options here: RAID 10 is the faster and more reliable, RAID 6 gives you more capacity.
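The capacity side of that trade-off can be sketched as follows; the drive counts and the 4 TB size are assumptions matching the drives discussed in the thread.

```python
# Usable capacity of the two acceptable layouts on nearline SATA.

def usable_tb(level: str, drives: int, drive_tb: float) -> float:
    """Usable capacity in TB of a single RAID 6 or RAID 10 array."""
    if level == "raid10":
        if drives < 4 or drives % 2:
            raise ValueError("RAID 10 needs an even count of >= 4 drives")
        return drives // 2 * drive_tb           # half the drives hold mirrors
    if level == "raid6":
        if drives < 5:                          # per the 5-drive guidance above
            raise ValueError("RAID 6 considered at 5+ drives here")
        return (drives - 2) * drive_tb          # two drives' worth of parity
    raise ValueError("unsupported level")

# Four 4 TB drives in RAID 10 yield 8 TB usable;
# six 4 TB drives in RAID 6 yield 16 TB usable.
print(usable_tb("raid10", 4, 4), usable_tb("raid6", 6, 4))
```

RAID 6 clearly wins on capacity per drive once the array grows, while RAID 10 keeps the rebuild fast and simple, which is the trade-off described above.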
-
@Mohammed-Fota said:
I think it makes sense to recalculate the overall time and money cost, switch to offsite replication over the internet instead, and push harder on my management to agree to what is required.
That is one option, and well worth considering. You can also look at other media like single large drives (10 TB+) or tape, but moving RAID sets is not an option and should not be considered under any circumstances. Simply don't consider it as an option, just as RAID 5 should never be brought up as an option. The RAID sets will die frequently, damage the equipment, and carry a very high risk of data loss. The cost will skyrocket compared to the projections; if you think tape is expensive, this will be far worse.
-
Thank you all for the valuable details; these points were never considered before. I will work on the backup concept, including replication to another site. Kind regards.