Safe to have a 48TB Windows volume?



  • I'm currently using 34 HP 3TB drives in an array configured for RAID 10 ADM (every drive set = 3 redundant drives) + 1 spare. The drives sit in 3 D3600 DAS boxes connected to a P822 controller in an HP DL360 G8 running Windows Server 2012, with a single 30TB share. This volume holds about 30 million files that are accessed daily for a period of 1-3 months by 25 people before the data is archived to offline storage. The server is a standalone bare-metal installation.

    It's important to be as cheap, reliable & easy to manage as possible in case I get hit by a bus. The only way I've found to do this is to ride 1-2 generation-old enterprise equipment (HP is what has worked well for me). Everything I've purchased is used (except for drives).

    I am happy with my Windows + HP situation, & 48TB would be fine for the next 2-3 years - unless there's a big risk I'm just not factoring in. When I was asking for advice on Spiceworks a few years ago about moving off an inherited 18TB RAID 6 setup, several people mentioned a possible concern about large NTFS volumes. Have any of you used 48TB Windows volumes? Any resources on risk analysis vs ZFS?



  • Is there a backup for all this data?



  • It seems like I remember Scott Miller talking about combining enterprise hardware + a SAS/SATA controller + Linux for storage requirements vs a proprietary hardware RAID controller.

    @Donahue - Yes. I have a similar setup as an offsite backup several miles away for disaster recovery / hardware failure, etc. I know RAID != backups.



  • @jim9500 said in Safe to have a 48TB Windows volume?:

    I'm currently using 34 HP 3TB drives in an array configured for RAID 10 ADM (every drive set = 3 redundant drives) + 1 spare.

    Wow, that's a lot of protection. So your geometry is 3 x 11 + 1?
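    A quick sanity check of that geometry, assuming 11 triple-mirror sets of 3TB drives plus one hot spare (which matches the 34-drive count in the original post):

```python
# Geometry sketch: RAID 10 ADM = triple mirrors, plus one hot spare.
# The 11-set layout is inferred from the drive count, not confirmed by the OP.
drive_size_tb = 3
mirror_width = 3              # drives per mirror set (ADM = triple mirror)
mirror_sets = 11
hot_spares = 1

total_drives = mirror_sets * mirror_width + hot_spares
usable_tb = mirror_sets * drive_size_tb   # each set contributes one drive's capacity

print(total_drives)   # 34 drives in total
print(usable_tb)      # 33 TB usable, consistent with a single ~30TB share
```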



  • @jim9500 said in Safe to have a 48TB Windows volume?:

    @Donahue - Yes. I have a similar setup as an offsite backup several miles away for disaster recovery / hardware failure, etc. I know RAID != backups.

    But it is darn close when using triple mirroring!

    Haha, not really, because it doesn't protect against someone deleting a file or something like that. But the reliability (durability) of the array has to be something like ten nines!
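    As a rough back-of-envelope of why triple mirrors are so durable (every number here is an illustrative assumption, not a measurement):

```python
# Durability sketch for one triple-mirror set (RAID 10 ADM).
# AFR and rebuild window are assumptions for illustration only.
afr = 0.05                  # assumed 5% annual failure rate per drive
rebuild_days = 7            # assumed window to replace/rebuild after a failure
p_fail_during_rebuild = afr * (rebuild_days / 365)

# Losing a set requires BOTH surviving copies to fail within the rebuild
# window (ignoring correlated failures like a bad batch or backplane).
p_set_loss = p_fail_during_rebuild ** 2
print(f"{p_set_loss:.2e}")  # on the order of 1e-6 per failure incident
```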



  • @jim9500 Honestly, that seems like a lot of wasted drives. I don't recall a monitored RAID10 ever failing, and no amount of spare drives in a RAID array can take the place of monitoring the thing.

    The triple mirror means that you will have increased read speed. If you don't need the increased read speed, then that's just a waste of drives.

    1 hot spare for the entire array? I'm guessing you just ran out of drive bays to make another mirror happen.

    Yes, @scottalanmiller does talk about that a lot, but just because he talks about it a lot doesn't mean he recommends it for everyone or every situation. Blind swap is what you gain with hardware RAID, and it's important for making it easy to know which drive to replace.

    I've used volumes up to 48TB raw/24TB usable. File systems were never an issue, but ZFS is a lot more than just a filesystem.

    ZFS is one of the last disk management and filesystem combinations I'd recommend to most people. LVM + XFS/ext4/3/2 is what most people using Linux should be using.



  • NTFS has improved a lot over the years, but this is definitely a big volume for NTFS to handle. ZFS is better designed for volumes of this size.

    You are correct, with your triple mirrored (and hot spare!) setup, it's your filesystem, not your array, that you have to worry about. You have definitely managed to shift the risk from the RAID to the FS.

    This isn't insanely big, but having Windows manage storage always gives me a little moment of pause. Storage is not their strong suit and has weakened, rather than improved, in recent years. ReFS has had issues, the recent releases have had their own issues even with NTFS, and their software RAID has had big-time issues (you aren't using that here, so it's not applicable either). This is just generally an area that Microsoft struggles with and doesn't tend to see as critical, so they seem to mostly poo-poo reliability concerns to focus on other areas.

    If I were doing storage this large, I would almost certainly be using XFS on hardware RAID, given your setup. XFS is faster than NTFS and pretty much bulletproof.



  • @travisdh1 said in Safe to have a 48TB Windows volume?:

    1 hot spare for the entire array? I'm guessing you just ran out of drive bays to make another mirror happen.

    Different use case. More mirrors is for more capacity (or speed). The single spare is to stop someone from having to drive to the datacenter in the middle of the night to replace a drive when one fails. With 33 drives in the array, there will be failures far more often than in a normal two-to-four-drive server.

    Imagine a data center with sixteen servers in it. You'd expect drive failures from time to time and it would be annoying. This one server has that many drives in it all in one system, so the overhead of replacing drives is real.
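    To put a rough number on that (the failure rate here is an assumption for illustration, not a measured figure for these drives):

```python
# With many drives in one box, routine failures become routine events.
# The AFR is an illustrative assumption; real rates vary by model and age.
drives = 33
afr = 0.03          # assumed 3% annual failure rate per drive
expected_failures_per_year = drives * afr
print(expected_failures_per_year)   # ~1 drive replacement per year
```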



  • @jim9500 said in Safe to have a 48TB Windows volume?:

    It seems like I remember Scott Miller talking about combining enterprise hardware + a SAS/SATA controller + Linux for storage requirements vs a proprietary hardware RAID controller.

    Yes, you can certainly do that. MD RAID will do what you want. But you have the enterprise hardware RAID already, so likely you want to keep using it. You can go to Linux and XFS without changing your RAID in any way.



  • Doesn't NTFS have a limit of 16TB per volume?



  • At a previous company, I was over the dept I'm doing IT for now. I was told "we couldn't afford a backup" 'cuz I kept yelling at them about it (it was RAID 6 across 25 1TB SATA drives + 1 spare in a fly-by-night-company SuperMicro-type box).

    At one point IT did an array expansion, adding drives while unfamiliar with the array card. It corrupted our data: hundreds of thousands of files randomly reassigned to different folders, tens of thousands of corrupted files, etc. (HP supports live expansion, but this array controller did not.) We were down for months, and the fallout from not finding everything followed us for years. It almost destroyed us.

    I am fine being down for a full week - or two weeks if I have to restore (I haven't had to in 5 years). But an offsite backup = an insurance policy. I don't trust array controllers or a single-server setup any more than I do a hard drive.



  • @DustinB3403 said in Safe to have a 48TB Windows volume?:

    Doesn't NTFS have a limit of 16TB per volume?

    It depends on the cluster size. I just created a 30TB volume the other day.
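    The cluster-size dependence comes from NTFS traditionally addressing at most 2^32 clusters per volume, so the maximum volume size scales with the cluster size:

```python
# NTFS traditionally addresses at most 2**32 clusters per volume,
# so max volume size = cluster size * 2**32.
MAX_CLUSTERS = 2 ** 32

for cluster_kb in (4, 64):   # 4KB is the default; 64KB is the classic maximum
    max_tib = cluster_kb * 1024 * MAX_CLUSTERS / 2 ** 40
    print(f"{cluster_kb}KB clusters -> {max_tib:.0f} TiB max volume")
# 4KB clusters  -> 16 TiB  (the 16TB figure people remember)
# 64KB clusters -> 256 TiB
```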



  • That's not really a fair example, though. While it is good not to trust things, an induced failure (that's how expansion works) is not a good way to judge reliability. That's like driving into a sign, but then not trusting the autonomous steering. There is good reason to not trust the robot driver, but you can't distrust it based on driving into the sign yourself 🙂



  • @jim9500 Backing up a full 48TB to B2 would be something like $247/month.
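    That's roughly consistent with B2's published storage rate at the time (about $0.005/GB-month; treat the rate as an assumption, since pricing changes):

```python
# Rough monthly storage cost for 48TB on B2 at an assumed $0.005/GB-month.
tb_stored = 48
rate_per_gb_month = 0.005   # assumed historical B2 rate; check current pricing
monthly_cost = tb_stored * 1000 * rate_per_gb_month
print(monthly_cost)   # 240.0 -- same ballpark as the $247/month quoted
```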



  • @DustinB3403 said in Safe to have a 48TB Windows volume?:

    Doesn't NTFS have a limit of 16TB per volume?

    The NTFS volume limit is 256TB on older systems (using the maximum 64KB cluster size).

    NTFS has an 8PB volume limit on modern ones (Windows 10 1709 / Server 2019 and later, using 2MB clusters).



  • @DustinB3403 said in Safe to have a 48TB Windows volume?:

    @jim9500 Backing up a full 48TB to B2 would be something like $247/month.

    But there are fees if you want to retrieve it. Just be aware that those can be pretty large.



  • @travisdh1 said in Safe to have a 48TB Windows volume?:

    The triple mirror means that you will have increased read speed. If you don't need the increased read speed, then that's just a waste of drives.

    It does (sort of) decrease my risk, as I would need all 3 drives in a set of 3 to fail. I understand this looks like overkill. It also helps with read speed. Prior to this array I was using 36 600GB 15K SCSI drives. My goal was similar speed + a safer setup + a bigger volume. The difference in cost between RAID 10 & RAID 10 ADM using 3TB drives is only about $2,000.

    @scottalanmiller said in Safe to have a 48TB Windows volume?:

    But it is darn close when using triple mirroring!

    FWIW - you're the reason I migrated to RAID 10 off of my RAID 6 / 36-drive setup. Lots of yelling at me on Spiceworks a few years ago about how RAID 6 isn't safe for huge arrays 😛
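    A sketch of where that ~$2,000 gap comes from, assuming 11 mirror sets and a hypothetical price for a used 3TB drive (the per-drive cost is my assumption, not a quote):

```python
# RAID 10 vs RAID 10 ADM cost gap for the same usable capacity.
# The per-drive price is an illustrative assumption.
usable_sets = 11
price_per_drive = 180           # assumed cost of a used 3TB drive, USD

raid10_drives = usable_sets * 2     # two-way mirrors
adm_drives = usable_sets * 3        # three-way mirrors (ADM)
extra_cost = (adm_drives - raid10_drives) * price_per_drive
print(extra_cost)   # 1980 -- roughly the ~$2,000 figure quoted above
```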



  • @jim9500 It decreases the risk by a lot. Your risk might already be so low that you don't care, but triple mirroring lowers it a lot further. 🙂




  • @scottalanmiller said in Safe to have a 48TB Windows volume?:

    You can go to Linux and XFS without changing your RAID in any way.

    Ah, perfect. So I wouldn't need to move to software RAID to move away from NTFS. I'm not convinced I need to yet, but if after more research I find out I do - am I likely to run into issues using something like Samba + XFS as a network share for a Windows shop?



  • @jim9500 said in Safe to have a 48TB Windows volume?:

    @scottalanmiller said in Safe to have a 48TB Windows volume?:

    You can go to Linux and XFS without changing your RAID in any way.

    Ah, perfect. So I wouldn't need to move to software RAID to move away from NTFS. I'm not convinced I need to yet, but if after more research I find out I do - am I likely to run into issues using something like Samba + XFS as a network share for a Windows shop?

    Definitely not. You can use hardware RAID anytime. There are no cases where you can't use hardware RAID.

    Well, there are, but those are cases where the hardware doesn't provide hardware RAID.



  • XFS will present no issues to you as a Windows shop. Samba can be a pain to manage, but keep in mind that most NAS products use Samba to talk to Windows, so it is pretty solid when set up correctly.
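    For reference, a minimal smb.conf share definition of the sort a Windows shop would point at an XFS-backed volume (the share name, path, and group are hypothetical placeholders):

```ini
; Minimal, hypothetical Samba share for an XFS-backed volume.
; Names and paths below are examples, not from the original post.
[bigshare]
   path = /srv/bigshare
   read only = no
   browseable = yes
   valid users = @fileusers
```

    Run `testparm` after editing to confirm the config parses cleanly.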



  • @scottalanmiller said in Safe to have a 48TB Windows volume?:

    There are no cases where you can't use hardware RAID.

    Yeah - for some reason I was thinking I would need to use ZFS. I'd prefer to stick with the enterprise hardware, as it's caused 0 issues for me.



  • @jim9500 said in Safe to have a 48TB Windows volume?:

    @scottalanmiller said in Safe to have a 48TB Windows volume?:

    There are no cases where you can't use hardware RAID.

    Yeah - for some reason I was thinking I would need to use ZFS. I'd prefer to stick with the enterprise hardware, as it's caused 0 issues for me.

    Even if you used ZFS, you could use hardware RAID. By definition, all file systems must work the same on hardware RAID as they do on bare-metal drives. If they didn't, it would mean the hardware RAID isn't working.



  • @jim9500 said in Safe to have a 48TB Windows volume?:

    It seems like I remember Scott Miller talking about combining enterprise hardware + a SAS/SATA controller + Linux for storage requirements vs a proprietary hardware RAID controller.

    @Donahue - Yes. I have a similar setup as an offsite backup several miles away for disaster recovery / hardware failure, etc. I know RAID != backups.

    What's the air-gap to protect against an encryption event, if any?



  • @scottalanmiller said in Safe to have a 48TB Windows volume?:

    @DustinB3403 said in Safe to have a 48TB Windows volume?:

    @jim9500 Backing up a full 48TB to B2 would be something like $247/month.

    But there are fees if you want to retrieve it. Just be aware that those can be pretty large.

    Unless your storage provider doesn't charge for downloads... (Wasabi is one -- https://wasabi.com/pricing/)



  • That's a lot of disks for such a small array.

    I'd just put 6 x 12TB drives in RAID 6 on something that has at least 16 x 3.5" drive bays.
    That way you have enough space to make a new array and transfer the data when it's time to upgrade the storage.

    I'd very much prefer Linux over Windows for file server use, and software RAID over hardware. It's easier to have the data survive several generations of hardware, as you can mount the old drives directly in a new server without problems. It becomes hardware- and Linux-distro/version-agnostic.

    For our own use we like Supermicro hardware because it's modular as well. Supermicro sells their stuff as components as well as complete servers, which makes it very flexible. Standard-sized server motherboards, for instance, mean you can replace a motherboard without having to source exactly what you had. And you don't have to use branded memory or branded disks.



  • @PhlipElder said in Safe to have a 48TB Windows volume?:

    What's the air-gap to protect against an encryption event, if any?

    My backup server has access to the rest of the network - but it pulls the backups to itself, vs having backups pushed to it. The rest of the network can't directly write to it. My backups happen weekly - so my hope is that I would recognize what was happening on my live network before it got backed up.

    I have been contemplating doubling my backup storage space to make sure I have enough space to store older file revisions in a ransomware situation.



  • @PhlipElder said in Safe to have a 48TB Windows volume?:

    @jim9500 said in Safe to have a 48TB Windows volume?:

    It seems like I remember Scott Miller talking about combining enterprise hardware + a SAS/SATA controller + Linux for storage requirements vs a proprietary hardware RAID controller.

    @Donahue - Yes. I have a similar setup as an offsite backup several miles away for disaster recovery / hardware failure, etc. I know RAID != backups.

    What's the air-gap to protect against an encryption event, if any?

    LOL. I like that term. "Encryption Event"

    It implies, quite correctly, that many of those problems are not exactly malware. Many are just bad system design.



  • @dafyre said in Safe to have a 48TB Windows volume?:

    @scottalanmiller said in Safe to have a 48TB Windows volume?:

    @DustinB3403 said in Safe to have a 48TB Windows volume?:

    @jim9500 Backing up a full 48TB to B2 would be something like $247/month.

    But there are fees if you want to retrieve it. Just be aware that those can be pretty large.

    Unless your storage provider doesn't charge for downloads... (Wasabi is one -- https://wasabi.com/pricing/)

    Right, I was talking about B2.

