RAID 6 in my backup VM host on spinning rust?
-
@dashrender Based on what I've seen re: IOPS usage of my current VMs, I believe it will.
I guess my other concern with RAID 6 in this case is whether the array is getting too big. The individual disks themselves at 600GB I don't think are a problem, but 12 in a single array?
-
@beta Is it really that close of a margin that you think you need to shift to RAID6 for capacity reasons?
I would think the performance drop might be the larger concern, but since this isn't your primary system, I would also consider using RAID 6 to have additional room to grow (if required).
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
Hear me out...I have a Dell server that I use as a Veeam replication target. This host is used as a backup in case my primary server dies - I just turn on the replicas and run from it until primary host is repaired.
This is not a backup. This is a replica. These are completely different things.
Do you actually have a backup?
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
Would this be crazy to do? Or should I just stick to OBR10? Thanks!
Yes, don't do it. This is not a backup; it is a replica. That means when something fails and you need to use it, it needs to be performant.
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
Hear me out...I have a Dell server that I use as a Veeam replication target. This host is used as a backup in case my primary server dies - I just turn on the replicas and run from it until primary host is repaired.
This backup host currently has OBR10 comprised of 10 600GB 10K SAS drives. I'm running up against storage capacity limitations and have ordered 2 additional 600GB disks to add to the array, but I was thinking while I am in the process of rebuilding this array, maybe I should change it from OBR10 to RAID 6? My concern is that while I am pretty sure the OBR10 will give me enough space to last until I schedule a complete replacement of the server, the margin will be very slim whereas the RAID 6 I'm sure will give me plenty of extra breathing room until the server is replaced.
Would this be crazy to do? Or should I just stick to OBR10? Thanks!
You know, you only have 10x600/2 = 3TB of storage.
You could replace the entire array with two 3.84TB SATA/SAS SSDs. Run them in RAID 1 and you'd have better performance and higher reliability.
Buying 2.5" hard drives today is a mistake.
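If you want to sanity-check the capacity math yourself, here's a minimal Python sketch (the drive counts and sizes are the ones from this thread; it assumes the standard usable-space formulas of n/2 drives for RAID 10 and n-2 drives for RAID 6):

```python
# Usable-capacity sanity check for the configurations discussed in this thread.
# Assumes standard layouts: RAID 10 keeps n/2 drives usable, RAID 6 keeps n-2.

def raid10_usable_tb(drives: int, size_gb: int) -> float:
    return drives * size_gb / 2 / 1000

def raid6_usable_tb(drives: int, size_gb: int) -> float:
    return (drives - 2) * size_gb / 1000

print(raid10_usable_tb(10, 600))  # current array: 3.0 TB
print(raid10_usable_tb(12, 600))  # after adding 2 drives: 3.6 TB
print(raid6_usable_tb(12, 600))   # same 12 drives as RAID 6: 6.0 TB
print(2 * 3840 / 2 / 1000)        # two 3.84TB SSDs in RAID 1: 3.84 TB
```

Note the gap the OP is weighing: going from 10 to 12 drives in RAID 10 only buys 600GB of usable space, while RAID 6 on the same 12 drives nearly doubles it.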
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
Hear me out...I have a Dell server that I use as a Veeam replication target. This host is used as a backup in case my primary server dies - I just turn on the replicas and run from it until primary host is repaired.
This backup host currently has OBR10 comprised of 10 600GB 10K SAS drives. I'm running up against storage capacity limitations and have ordered 2 additional 600GB disks to add to the array, but I was thinking while I am in the process of rebuilding this array, maybe I should change it from OBR10 to RAID 6? My concern is that while I am pretty sure the OBR10 will give me enough space to last until I schedule a complete replacement of the server, the margin will be very slim whereas the RAID 6 I'm sure will give me plenty of extra breathing room until the server is replaced.
Would this be crazy to do? Or should I just stick to OBR10? Thanks!
Prior to implementing all-flash on SATA SSDs we'd run with eight to sixteen 10K SAS spindles in RAID 6.
Those arrays were running anywhere from four to ten virtual machines. There would be a DC, Exchange, Remote Desktop Services (usually in farm mode), SQL, and a series of LoB apps.
The largest RAID 6 rust array was sixteen spindles.
350 IOPS x 8 = 2,800 or 5,600 for 16 spindles. Ugh, can you believe it?
Mean throughput for 2.5" SAS drives was about 150MB/second per drive for older, less dense platters and about 250MB/second for newer high areal density drives.
Since this is a replica server, the expectation is that it won't be as performant as the main server, so no real worries there.
Just so long as there aren't 100+ ms response times, that is. That would become very painful very fast.
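If you want to plug your own numbers into that spindle math, here's a rough sketch (the 350 IOPS/spindle figure is the one quoted above; the RAID 6 write penalty of 6 is the usual rule of thumb, not something measured on these particular boxes):

```python
# Rough spindle math for the RAID 6 rust arrays described above.
# 350 IOPS per 10K SAS spindle is the figure from this post; the RAID 6
# write penalty of 6 (each random write costs ~6 back-end IOs) is the
# standard rule of thumb, not a measured value.

def raw_read_iops(spindles: int, per_spindle: int = 350) -> int:
    return spindles * per_spindle

def raid6_random_write_iops(spindles: int, per_spindle: int = 350,
                            penalty: int = 6) -> float:
    return spindles * per_spindle / penalty

print(raw_read_iops(8))                    # 2800
print(raw_read_iops(16))                   # 5600
print(round(raid6_random_write_iops(16)))  # ~933 -- why random writes sting on parity RAID
```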
What version of Veeam?
A SOBR (Scale-Out Backup Repository) set up with a cloud layer on Backblaze B2 and immutability is a huge step ahead in protecting an org from an outright blotto event or malware.
-
@pete-s How much are two 3.84TB enterprise SSDs going to cost me again?
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
@pete-s How much are two 3.84TB enterprise SSDs going to cost me again?
Many people likely wouldn't spend on enterprise SSDs unless they needed verified compatibility.
Since we don't know what the hardware is, you could likely use generic datacenter SSDs from Samsung etc. and get a ballpark price.
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
@pete-s How much are two 3.84TB enterprise SSDs going to cost me again?
Looks like $725.00 for a decent drive to me. https://www.serversupply.com/SSD/SATA-6GBPS/3.8TB/SAMSUNG/MZ7LH3T8HMLT_315446.htm
-
@travisdh1 said in RAID 6 in my backup VM host on spinning rust?:
@beta said in RAID 6 in my backup VM host on spinning rust?:
@pete-s How much are two 3.84TB enterprise SSDs going to cost me again?
Looks like $725.00 for a decent drive to me. https://www.serversupply.com/SSD/SATA-6GBPS/3.8TB/SAMSUNG/MZ7LH3T8HMLT_315446.htm
You could even go below $500 each, which would put you under $1,000 for the entire array.
There are several enterprise drives in that price segment.
For instance: https://www.newegg.com/micron-5300-max-3-84tb/p/1Z4-00CB-000G4
-
@dustinb3403 Well I looked up Dell drives and the 3.84TB SATA read-intensive drives are going for ~$1800 apiece (before any discounting).
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
@dustinb3403 Well I looked up Dell drives and the 3.84TB SATA read-intensive drives are going for ~$1800 apiece (before any discounting).
Sounds about right. You pay 2x to 3x as much buying SSDs from Dell compared to the same drive from the manufacturer.
But you can get Dell drives from retailers as well for lower prices. Another option is to buy Dell refurbished drives with warranty.
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
@dustinb3403 Well I looked up Dell drives and the 3.84TB SATA read-intensive drives are going for ~$1800 apiece (before any discounting).
Tier 1 hardware is a particular beast.
Yeah, we can scour the fleabays of the world for caddies, but then comes the firmware fun times and/or question marks.
Our saved search e-mail monitoring shows Tier 1 secondary channel sales as being very expensive even relative to buying direct from the Tier 1 vendor.
At this point, adding a couple of known-good SAS spindles and calling it a day is probably the safest bet. It avoids rebuilding that SOBR, adding a new SOBR and migrating the backups, or setting up an entirely new array/SOBR and starting backups fresh, which would leave things vulnerable for a while.
Nah, in my mind KISS applies here.
Add the drives, expand the array, and call it a day.
Oh, and make sure to set up a B2 bucket for some immutability.
-
@phlipelder said in RAID 6 in my backup VM host on spinning rust?:
Nah, in my mind KISS applies here.
Add the drives, expand the array, and call it a day.
Might not be so simple. Not every PERC controller/version can grow a RAID 10 array.
I believe the entire array needs to be restriped when doing that.
-
@pete-s said in RAID 6 in my backup VM host on spinning rust?:
@phlipelder said in RAID 6 in my backup VM host on spinning rust?:
Nah, in my mind KISS applies here.
Add the drives, expand the array, and call it a day.
Might not be so simple. Not every PERC controller/version can grow a RAID 10 array.
I believe the entire array needs to be restriped when doing that.
Yeah, brain skipped after a speed bump. ;0)
Verify the current PERC can indeed expand that array and do so while keeping things as they are.
You could copy out the contents of the SOBR, blow away the array, add the drives, set up the RAID 6 array, format, copy the data back, and finally get Veeam set up and the backups imported, but that kills KISS big time.
Since "replacing the server" was mentioned, keep the time cost ($150/hour minimum here) of any changes in mind relative to the budget for the new rig in the not too distant future.
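To put a rough number on that time cost, a back-of-the-envelope sketch (the ~3TB of data comes from earlier in this thread; the sustained copy rate is purely an assumption, so substitute whatever your network and disks actually deliver):

```python
# Back-of-the-envelope estimate for the copy-out / rebuild / copy-back route.
# The ~3TB figure is from this thread; the copy rate is an assumption.

data_tb = 3.0
copy_rate_mb_s = 200      # assumed sustained rate in MB/s -- adjust to taste
hourly_rate = 150         # $/hour, per the post above

one_way_hours = data_tb * 1_000_000 / copy_rate_mb_s / 3600
total_hours = 2 * one_way_hours  # copy out, then copy back
print(f"~{total_hours:.1f} hours of copying")
print(f"~${total_hours * hourly_rate:.0f} if someone is on the clock the whole time")
```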
-
@phlipelder said in RAID 6 in my backup VM host on spinning rust?:
@pete-s said in RAID 6 in my backup VM host on spinning rust?:
@phlipelder said in RAID 6 in my backup VM host on spinning rust?:
Nah, in my mind KISS applies here.
Add the drives, expand the array, and call it a day.
Might not be so simple. Not every PERC controller/version can grow a RAID 10 array.
I believe the entire array needs to be restriped when doing that.
Yeah, brain skipped after a speed bump. ;0)
Verify the current PERC can indeed expand that array and do so while keeping things as they are.
You could copy out the contents of the SOBR, blow away the array, add the drives, set up the RAID 6 array, format, copy the data back, and finally get Veeam set up and the backups imported, but that kills KISS big time.
Since "replacing the server" was mentioned, keep the time cost ($150/hour minimum here) of any changes in mind relative to the budget for the new rig in the not too distant future.
That's why it's faster to just put in two SSDs (we know the server has two bays free) and set up a new array.
You have all the time in the world since both the new and old arrays are up and running.
As a stopgap measure you could potentially do it with two smaller SSDs and keep both arrays in use.
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
@dashrender Based on what I've seen re: IOPS usage of my current VMs, I believe it will.
I guess my other concern with RAID 6 in this case is whether the array is getting too big. The individual disks themselves at 600GB I don't think are a problem, but 12 in a single array?
It's primarily array size, not drive size, that matters. Of course, drive size influences array size, but the two are not the same thing. Drive size does matter, but far less, and it affects a different aspect of the system: array size drives the odds of a failure, while drive size drives the time to rebuild after a failure. Both matter, but the first matters far more.
That said, 12 x 600GB is 7.2TB raw, which is quite large. Not so large as to be out of the question, but large enough to cause concern.
As a backup target, you have far more flexibility than with other types of storage. If you go with RAID 6, be prepared that when you lose a drive, you may need to run out, buy an 8TB USB drive, and copy everything off to it before attempting to replace the failed drive. If that's okay with you, then RAID 6 is fine, assuming the performance remains adequate for your transfer windows.
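For a rough feel of how long that exposure window lasts, a minimal sketch (the rebuild rate here is an assumption; real controllers throttle rebuilds under load, so actual times are often considerably longer):

```python
# Rough rebuild-time estimate for one failed 600GB spindle.
# The rebuild rate is assumed; controllers throttle rebuilds under load,
# so real-world times are often considerably longer.

drive_gb = 600
rebuild_rate_mb_s = 100   # assumed effective rebuild rate

hours = drive_gb * 1000 / rebuild_rate_mb_s / 3600
print(f"~{hours:.1f} hours minimum to rebuild one 600GB drive")
# RAID 6 tolerates a second failure during that window; the copy-off-to-USB
# step above is belt-and-suspenders on top of that.
```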
-
@pete-s said in RAID 6 in my backup VM host on spinning rust?:
@beta said in RAID 6 in my backup VM host on spinning rust?:
Hear me out...I have a Dell server that I use as a Veeam replication target. This host is used as a backup in case my primary server dies - I just turn on the replicas and run from it until primary host is repaired.
This backup host currently has OBR10 comprised of 10 600GB 10K SAS drives. I'm running up against storage capacity limitations and have ordered 2 additional 600GB disks to add to the array, but I was thinking while I am in the process of rebuilding this array, maybe I should change it from OBR10 to RAID 6? My concern is that while I am pretty sure the OBR10 will give me enough space to last until I schedule a complete replacement of the server, the margin will be very slim whereas the RAID 6 I'm sure will give me plenty of extra breathing room until the server is replaced.
Would this be crazy to do? Or should I just stick to OBR10? Thanks!
You know, you only have 10x600/2 = 3TB of storage.
You could replace the entire array with two 3.84TB SATA/SAS SSDs. Run them in RAID 1 and you'd have better performance and higher reliability.
Buying 2.5" hard drives today is a mistake.
This, definitely. Investing in an ancient system with legacy-style drives makes very little sense today.
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
@dustinb3403 Well I looked up Dell drives and the 3.84TB SATA read-intensive drives are going for ~$1800 apiece (before any discounting).
Move off of the Dell device. You can get into a Synology or something else with two bays for far less money than the price difference of the drives!
-
@beta said in RAID 6 in my backup VM host on spinning rust?:
@pete-s How much are two 3.84TB enterprise SSDs going to cost me again?
No need for enterprise drives here; they're not even recommended. Backup use cases don't play to enterprise drives' strengths, and good consumer drives will give you much better value.