Safe to have a 48TB Windows volume?
-
@jim9500 Honestly, that seems like a lot of wasted drives. I don't recall a monitored RAID10 ever failing, and no amount of spare drives in a RAID array can take the place of monitoring the thing.
The triple mirror means that you will have increased read speed. If you don't need the increased read speed, then that's just a waste of drives.
1 hot spare for the entire array? I'm guessing you just ran out of drive bays to make another mirror happen.
Yes, @scottalanmiller does talk about that a lot, but just because he talks about it a lot doesn't mean he recommends it for everyone or in all situations. Blind swap is what you gain with hardware RAID, and it's important because it makes it easy to know which drive to replace.
I've used volumes up to 48TB raw/24TB usable. File systems were never an issue, but ZFS is a lot more than just a filesystem.
ZFS is one of the last disk management and filesystem combinations I'd recommend to most people. LVM + XFS/ext4/3/2 is what most people using Linux should be using.
-
NTFS has improved a lot over the years. This is definitely a big volume for NTFS to handle. ZFS is better designed for volumes of this size.
You are correct, with your triple mirrored (and hot spare!) setup, it's your filesystem, not your array, that you have to worry about. You have definitely managed to shift the risk from the RAID to the FS.
This isn't insanely big, but having Windows manage storage always gives me a little moment of pause. Storage is not their strong suit and it has weakened, rather than improved, in recent years. ReFS has had issues, the recent releases have had their own issues even with NTFS, and their software RAID has had big-time issues (you aren't using that here, so it's not applicable either). This is just generally an area where Microsoft struggles; they don't tend to see it as critical, so they seem to mostly poo-poo reliability concerns to focus on other areas.
If I were doing storage this large, I would almost certainly be using XFS on hardware RAID, given your setup. XFS is faster than NTFS and pretty much bulletproof.
-
@travisdh1 said in Safe to have a 48TB Windows volume?:
1 hot spare for the entire array? I'm guessing you just ran out of drive bays to make another mirror happen.
Different use case. More mirrors is for more capacity (or speed). The single spare is to stop someone from having to drive into the datacenter to replace a drive in the middle of the night when one fails. With 33 drives in the array, there will be failures far more often than in a normal two to four drive server.
Imagine a data center with sixteen servers in it. You'd expect drive failures from time to time and it would be annoying. This one server has that many drives in it all in one system, so the overhead of replacing drives is real.
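The scaling here is just multiplication: expected failures grow linearly with drive count. A minimal sketch, using an assumed illustrative 2% annual failure rate (AFR) rather than a figure for any particular drive model:

```python
# Rough expectation of drive replacements per year for an array.
# The 2% AFR is an assumed illustrative number, not measured data.
def expected_failures_per_year(drive_count, afr=0.02):
    """Expected number of drive failures per year across the array."""
    return drive_count * afr

print(f"33-drive array: ~{expected_failures_per_year(33):.2f} failures/year")
print(f"4-drive server: ~{expected_failures_per_year(4):.2f} failures/year")
```

At these assumed rates, the 33-drive box alone sees failures about as often as eight typical 4-drive servers combined, which is why the "don't drive in at 3 AM" hot spare earns its slot.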
-
@jim9500 said in Safe to have a 48TB Windows volume?:
It seems like I remember Scott Miller talking about combining enterprise hardware + SAS/SATA Controller + Linux for storage requirements vs proprietary hardware raid controller.
Yes, you can certainly do that. MD RAID will do what you want. But you have the enterprise hardware RAID already, so likely you want to keep using it. You can go to Linux and XFS without changing your RAID in any way.
-
Doesn't ntfs have a limit of 16TB per volume?
-
At a previous company, I ran the department I'm doing IT for now. I was told "we couldn't afford a backup" 'cuz I kept yelling at them about it (it was RAID 6 across 25 1TB SATA drives in a fly-by-night-company SuperMicro-type box + 1 spare).
At one point IT did an array expansion, adding drives while unfamiliar with the array card. It corrupted our data: hundreds of thousands of files randomly reassigned to different folders, tens of thousands of corrupted files, etc. (HP supports live expansion, but this array controller did not.) We were down for months, the fallout from not finding everything followed us for years, and it almost destroyed us.
I am fine being down for a full week - or two weeks if I have to restore (haven't in 5 years). But offsite backup = insurance policy. I don't trust array controllers or a single server setup any more than I do a hard drive.
-
@DustinB3403 said in Safe to have a 48TB Windows volume?:
Doesn't ntfs have a limit of 16TB per volume?
It depends on the cluster size. I just created a 30TB volume the other day.
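The cluster-size dependence comes from NTFS addressing a volume as at most 2^32 − 1 clusters, so the maximum volume size is just that count times the cluster size chosen at format time. A quick sketch of the arithmetic:

```python
# NTFS caps a volume at (2**32 - 1) addressable clusters, so the
# maximum volume size scales with the cluster size set at format time.
MAX_CLUSTERS = 2**32 - 1

def max_ntfs_volume_tb(cluster_bytes):
    """Maximum NTFS volume size in TB for a given cluster size."""
    return MAX_CLUSTERS * cluster_bytes / 2**40

for kb in (4, 64, 2048):
    tb = max_ntfs_volume_tb(kb * 1024)
    print(f"{kb:>5} KB clusters -> ~{tb:,.0f} TB max volume")
```

The default 4KB cluster gives the familiar ~16TB ceiling; 64KB clusters give ~256TB; and the 2MB clusters available on recent Windows releases give ~8PB, matching the limits quoted later in this thread.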
-
That's not really a fair example, though. While it is good not to trust things, an induced failure (that's how expansion works) is not a good way to judge reliability. That's like driving into a sign, then not trusting the autonomous steering: there may be good reasons not to trust the robot driver, but you can't distrust it based on having driven into the sign yourself.
-
@jim9500 backing up a full 48TB to B2 would be something like $247/month
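That figure is easy to reproduce as back-of-the-envelope math. Both rates below are the B2 published rates assumed at the time of this thread ($0.005/GB/month to store, $0.01/GB to download); check current pricing before relying on them:

```python
# Rough Backblaze B2 cost for a 48TB backup set.
# Rates are assumed period pricing, not guaranteed current values.
storage_rate = 0.005      # USD per GB per month (assumed)
download_rate = 0.01      # USD per GB, one-time retrieval (assumed)
size_gb = 48 * 1024       # 48TB expressed in binary GB

monthly_storage = size_gb * storage_rate
full_retrieval = size_gb * download_rate
print(f"Storage:   ~${monthly_storage:,.2f}/month")
print(f"Retrieval: ~${full_retrieval:,.2f} to pull the full 48TB back down")
```

The storage line lands right around the quoted ~$247/month; the retrieval line is the "fees if you want it back" caveat made concrete.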
-
@DustinB3403 said in Safe to have a 48TB Windows volume?:
Doesn't ntfs have a limit of 16TB per volume?
NTFS volume limit is 256TB in older systems.
NTFS has an 8PB volume limit in modern ones.
-
@DustinB3403 said in Safe to have a 48TB Windows volume?:
@jim9500 backing up a full 48tb to b2 would be something like $247/month
But there are fees if you want to retrieve it. Just be aware that those can be pretty large.
-
@travisdh1 said in Safe to have a 48TB Windows volume?:
The triple mirror means that you will have increased read speed. If you don't need the increased read speed, then that's just a waste of drives.
It does (sort of) decrease my risk, as I would need all 3 drives in any one mirror set to fail. I understand this looks like overkill. It also helps with read speed. Prior to this array I was using 36 600GB 15K SCSI drives. My goal was similar speed + a safer setup + a bigger volume. The difference in cost between RAID 10 and RAID 10 ADM using 3TB drives is only about $2,000.
@scottalanmiller said in Safe to have a 48TB Windows volume?:
But it is darn close when using triple mirroring!
FWIW - you're the reason I migrated to RAID 10 off of my RAID 6 / 36-drive setup. There was lots of yelling at me on Spiceworks a few years ago about how RAID 6 isn't safe for huge arrays.
-
@jim9500 it decreases the risk by a lot. Your risk might already be so low that you don't care, but triple mirroring certainly lowers it a lot further.
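A toy model shows why the third mirror matters so much: the array is lost only if every drive in some one mirror set fails, so each extra mirror multiplies the per-set loss probability by another factor of p. The 5% per-drive failure probability over some exposure window is an assumed illustrative number, and independence of failures is also an assumption:

```python
# Sketch: probability of losing an array of mirror sets, assuming
# independent per-drive failure probability p over some window.
# p=0.05 is illustrative only, not a measured drive statistic.
def array_loss_probability(sets, drives_per_set, p=0.05):
    set_loss = p ** drives_per_set        # all drives in one set fail
    return 1 - (1 - set_loss) ** sets     # any one set failing loses the array

two_way = array_loss_probability(sets=16, drives_per_set=2)    # ~32-drive RAID 10
three_way = array_loss_probability(sets=11, drives_per_set=3)  # ~33-drive RAID 10 ADM
print(f"Two-way mirrors:   {two_way:.4%}")
print(f"Three-way mirrors: {three_way:.4%}")
```

Under these assumptions the triple-mirror array's loss probability comes out well over an order of magnitude lower than the comparable two-way RAID 10, despite holding a similar drive count.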
-
@scottalanmiller said in Safe to have a 48TB Windows volume?:
You can go to Linux and XFS without changing your RAID in any way.
Ah perfect. So I wouldn't need to move to software raid to move away from NTFS. I'm not convinced I need to yet. But if after more research I find out I do - Is it likely I'm going to run into issues using something like SAMBA + XFS as a windows shop network share?
-
@jim9500 said in Safe to have a 48TB Windows volume?:
@scottalanmiller said in Safe to have a 48TB Windows volume?:
You can go to Linux and XFS without changing your RAID in any way.
Ah perfect. So I wouldn't need to move to software raid to move away from NTFS. I'm not convinced I need to yet. But if after more research I find out I do - Is it likely I'm going to run into issues using something like SAMBA + XFS as a windows shop network share?
Definitely not. You can use hardware RAID anytime. There are no cases where you can't use hardware RAID.
Well there are, but those are cases where you use hardware that doesn't provide hardware RAID.
-
XFS will present no issues to you as a Windows shop. Samba can be a pain to manage, but keep in mind that most NAS products use Samba to talk to Windows, so it is pretty solid when set up correctly.
-
@scottalanmiller said in Safe to have a 48TB Windows volume?:
There are no cases where you can't use hardware RAID.
Yea - for some reason I was thinking I would need to use ZFS. I'd prefer to stick to the enterprise hardware as it's caused 0 issues for me.
-
@jim9500 said in Safe to have a 48TB Windows volume?:
@scottalanmiller said in Safe to have a 48TB Windows volume?:
There are no cases where you can't use hardware RAID.
Yea - for some reason I was thinking I would need to use ZFS. I'd prefer to stick to the enterprise hardware as it's caused 0 issues for me.
Even if you used ZFS, you can use hardware RAID. By definition, all file systems must work the same on hardware RAID as they do on bare metal drives. If they didn't, it means that the hardware RAID isn't working.
-
@jim9500 said in Safe to have a 48TB Windows volume?:
It seems like I remember Scott Miller talking about combining enterprise hardware + SAS/SATA Controller + Linux for storage requirements vs proprietary hardware raid controller.
@Donahue - Yes. I have a similar setup as an offsite backup several miles away for disaster recovery, hardware failure, etc. I know RAID != backups.
What's the air-gap to protect against an encryption event if any?