ZFS Planning with Heterogeneous Gear



  • @scottalanmiller
    (First thing: I ask all questions as a student, by no means an expert.) I've been thinking about a hypothetical storage server that I might try to set up for personal use. I really don't want to use FreeNAS, as I like the option for more granular control. I also do want to use ZFS because it sounds quite resilient and interesting when set up correctly - which is obviously the key to it working well. Something I've seen a lot when discussing ZFS is both the hardware RAID issue and the mixing of drive sizes / vdev drive counts / vdev types. It seems almost all of the reasons people give not to mix those things are performance-related and based on HDDs.

    Here's my hypothetical scenario. Say we have a zpool with two vdevs, both of which are mirrors. Vdev1 has one 2 TB SSD and a logical 2 TB disk created from an mdadm RAID 0 of two 1 TB SSDs. Vdev2 has one logical 2 TB volume created from two 1 TB SSDs and one logical 2 TB volume created from four 500 GB SSDs. Would the logical disks, whether from hardware RAID or md RAID, cause major issues with ZFS in regards to error checking/healing/scrubbing? And what about resilvering - would this be difficult due to the logical disks? I know there would be some performance overhead overall, but I'm assuming SSDs will make it tolerable where HDDs wouldn't. I also don't see the logical RAID 0 arrays being a major point of failure, because of how much more resilient SSDs are compared to HDDs. I'd over-provision all the drives by around 15% to allow for slightly better performance and wear-leveling.
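    For concreteness, the layout described above could be sketched roughly like this. This is only an illustration of the proposal, not a recommendation, and every device path (/dev/sdb through /dev/sdj, /dev/md0 through /dev/md2) is a hypothetical placeholder:

    ```shell
    # Vdev1's second leg: an mdadm RAID 0 of two 1 TB SSDs -> one 2 TB logical disk
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdd

    # Vdev2's legs: a RAID 0 of two 1 TB SSDs, and a RAID 0 of four 500 GB SSDs
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sde /dev/sdf
    mdadm --create /dev/md2 --level=0 --raid-devices=4 /dev/sdg /dev/sdh /dev/sdi /dev/sdj

    # Pool of two mirror vdevs: (2 TB SSD + md0) and (md1 + md2)
    zpool create tank mirror /dev/sdb /dev/md0 mirror /dev/md1 /dev/md2
    ```

    Note that ZFS only ever sees three 2 TB "disks" here; it has no idea that two of them are md stripes.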

    I don't know if this would be okay in a production environment, but based on my understanding, for home-use it should work fine.



  • @colejame115 said in ZFS Planning with Heterogeneous Gear:

    Something I've seen a lot when discussing ZFS is both the hardware RAID issue and the mixing of drive sizes / vdev drive counts / vdev types. It seems almost all of the reasons people give not to mix those things are performance-related and based on HDDs.

    Performance, complexity, capacity, and reliability. But performance is normally the big one, although you can accidentally lose a ton of capacity, too. I've seen people lose 80% of their capacity because they mixed sizes.
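    To illustrate how a mixed-size layout can silently eat capacity, here is a hypothetical worked example (the disk sizes and the 4-disk raidz1 layout are made up for illustration): ZFS sizes every member of a vdev to the smallest disk, so oversized members contribute nothing.

    ```shell
    # Hypothetical: 2000 + 1000 + 500 + 500 GB disks in a single 4-disk raidz1.
    # Every member is treated as the smallest disk (500 GB).
    smallest=500
    ndisks=4
    parity=1
    usable=$(( smallest * (ndisks - parity) ))   # data capacity after parity
    raw=$(( 2000 + 1000 + 500 + 500 ))           # total raw disk purchased
    lost=$(( (raw - usable) * 100 / raw ))       # share of raw capacity lost
    echo "usable=${usable}GB lost=${lost}%"
    ```

    In this made-up case, 4 TB of raw disk yields only 1.5 TB usable - over 60% gone before any data is written.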

    This video is a good starting point. RAID is RAID, so all of that applies equally to ZFS.

    Youtube Video



  • @colejame115 said in ZFS Planning with Heterogeneous Gear:

    I also do want to use ZFS because it sounds quite resilient and interesting when set up correctly

    ZFS is good. It's resilient, but not dramatically so. ZFS offers no significant advantages over other RAID implementations. It's still just RAID. Essentially every "feature" that people talk about with ZFS is simply a standard RAID feature that all enterprise RAID has long had. That's not to say that ZFS is bad in any way, just that basically everything good about it is just a stock feature and nothing to do with ZFS.



  • @colejame115 said in ZFS Planning with Heterogeneous Gear:

    Vdev1 has one 2 TB SSD and a logical 2 TB disk created from an mdadm RAID 0 of two 1 TB SSDs.

    So this is crazy complex just for the first leg. Three drives: a standalone physical disk and a RAID 0 pair. Will this work? Sure. And it will get you essentially all of your capacity. Performance will be weird, though; no matter how you do it, mixing drives takes you down to the lowest common speed of the set.

    Vdev2 has one logical 2 TB volume created from two 1 TB SSDs and one logical 2 TB volume created from four 500 GB SSDs.

    So two RAID 0s, of mixed sizes, together in a RAID 1. Yeah, it'll work. But weird.

    First, though... why use MD RAID to make the underlying software RAID, but put ZFS on top? Why use two different software RAID systems in a single implementation? That's more to fail, more code to have in RAM, less transparency, and more stuff that you need to know, monitor, and manage.



  • @colejame115 said in ZFS Planning with Heterogeneous Gear:

    Would the logical disks, whether from hardware RAID or md RAID, cause major issues with ZFS in regards to error checking/healing/scrubbing?

    Yes. Regardless of whether it is hardware RAID, software RAID, a logical volume manager, or any other abstraction mechanism, if there is something creating "fake" storage between ZFS and the hardware, then ZFS cannot do any of that scrubbing or monitoring at the device level. It doesn't know what is going on down there.

    However, hardware RAID, MD RAID, or whatever you use all do their own monitoring and scrubbing. That's not a feature unique to ZFS; it's a standard feature of all storage systems.
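    The point about each layer scrubbing only what it can see looks like this in practice (pool and device names here are hypothetical placeholders):

    ```shell
    # ZFS scrubs the pool, but only down to the devices it was handed -
    # an md device underneath is a black box to it:
    zpool scrub tank
    zpool status tank

    # MD RAID verifies its own arrays independently, with no ZFS involvement:
    echo check > /sys/block/md0/md/sync_action
    cat /proc/mdstat
    ```

    With the layered design you'd need to run and watch both, which is exactly the extra management burden being described.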



  • @colejame115 said in ZFS Planning with Heterogeneous Gear:

    And what about resilvering, would this be difficult due to the logical disks?

    No, ZFS wouldn't know the difference. It just resilvers to its underlying components. In turn, if an underlying RAID failed that has redundancy, it would rebuild without ZFS knowing the difference. Each is completely independent of the other.
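    As a sketch of that independence (all device names hypothetical): each layer repairs itself using only its own members.

    ```shell
    # If the standalone 2 TB SSD in vdev1 dies, ZFS resilvers its mirror
    # onto a replacement disk; the md devices are untouched:
    zpool replace tank /dev/sdb /dev/sdk

    # If a member of a *redundant* md array fails, md rebuilds it without
    # ZFS ever noticing. (Note: the RAID 0s proposed here have no redundancy,
    # so a member failure kills the whole md device and ZFS sees a dead disk.)
    mdadm /dev/md1 --add /dev/sdl
    ```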



  • @colejame115 said in ZFS Planning with Heterogeneous Gear:

    I know overall that there would be some performance overhead, but I'm assuming ssds will make it tolerable where with hdds it wouldn't be.

    It'll use extra CPU for all of the extra RAID, but won't be terrible. It's a bizarre setup that won't get you anywhere near the value you'd hope for from all of those drives, and will cause some drives to work way harder than others.

    So... will it work? Yes.

    Will it be faster than spinning drives? Certainly.

    Does it make sense to do at home because you have lots of odd drives laying about unused? Maybe.

    Is it weird? Yup 🙂



  • @colejame115 said in ZFS Planning with Heterogeneous Gear:

    I don't know if this would be okay in a production environment

    Yeah, in no way would you ever do this in production. The amount of documentation and training needed to have anyone maintain this would be unreal.



  • @scottalanmiller
    Yeah, I was pretty sure this would be a weird setup. The primary reason for the abstraction of the logical disks is to do exactly what you say: use old SSDs that have no other use. Is there a better way to do this? Instead of abstracting with md RAID, what would you suggest?



  • @colejame115 said in ZFS Planning with Heterogeneous Gear:

    @scottalanmiller
    Yeah, I was pretty sure this would be a weird setup. The primary reason for the abstraction of the logical disks is to do exactly what you say: use old SSDs that have no other use. Is there a better way to do this? Instead of abstracting with md RAID, what would you suggest?

    Thems the problem with having a ton of smallish storage lying around - what to do with it?



  • @colejame115 said in ZFS Planning with Heterogeneous Gear:

    @scottalanmiller
    Yeah, I was pretty sure this would be a weird setup. The primary reason for the abstraction of the logical disks is to do exactly what you say: use old SSDs that have no other use. Is there a better way to do this? Instead of abstracting with md RAID, what would you suggest?

    Not really, the "better" way is to buy new disks. For free, the overall structure you are proposing kinda works.

    To make it WAY better, do the whole thing, top to bottom, in either MD RAID or ZFS, but not both. Either one will do what is needed just fine. MD RAID lets you use any file system that you want; ZFS dictates that the file system has to be ZFS. Other than that, it's all just RAID 0 and RAID 1 layered, so both are essentially identical. But don't mix - just use one.
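    A ZFS-only version of the same drives could look something like this sketch (device names are hypothetical). Instead of hiding RAID 0s under the mirrors, give ZFS same-size mirror pairs and let the pool stripe across them - that's ZFS's equivalent of RAID 10. The lone 2 TB SSD has no same-size partner, so it's simply left out here:

    ```shell
    # Two 1 TB mirror pairs plus two 500 GB mirror pairs; the pool stripes
    # across all four vdevs, roughly 3 TB usable, one tool to manage.
    zpool create tank \
      mirror /dev/sdc /dev/sde \
      mirror /dev/sdd /dev/sdf \
      mirror /dev/sdg /dev/sdh \
      mirror /dev/sdi /dev/sdj
    ```

    Every scrub, resilver, and status check is then a single zpool command, with no second RAID layer to monitor.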



  • @Dashrender said in ZFS Planning with Heterogeneous Gear:

    @colejame115 said in ZFS Planning with Heterogeneous Gear:

    @scottalanmiller
    Yeah, I was pretty sure this would be a weird setup. The primary reason for the abstraction of the logical disks is to do exactly what you say: use old SSDs that have no other use. Is there a better way to do this? Instead of abstracting with md RAID, what would you suggest?

    Thems the problem having a ton of smallish sized storage lying around - what to do with it?

    Company hardware? Simply shred/destroy it. There is no reason to keep it.

    Personal gear? I'd sell it for a few $ to get rid of it.



  • @JaredBusch said in ZFS Planning with Heterogeneous Gear:

    @Dashrender said in ZFS Planning with Heterogeneous Gear:

    @colejame115 said in ZFS Planning with Heterogeneous Gear:

    @scottalanmiller
    Yeah, I was pretty sure this would be a weird setup. The primary reason for the abstraction of the logical disks is to do exactly what you say: use old SSDs that have no other use. Is there a better way to do this? Instead of abstracting with md RAID, what would you suggest?

    Thems the problem having a ton of smallish sized storage lying around - what to do with it?

    Company hardware? Simply shred/destroy it. There is no reason to keep it.

    Personal gear? I'd sell it for a few $ to get rid of it.

    It is company hardware - no need to pay to shred it, though. These came out of brand new machines (it was cheaper to put my own SSD in instead of buying one from the factory), so there is no old data to worry about. I do have old used drives shredded when needed.

    I probably can sell these, like $10/ea... be gone!



  • @Dashrender said in ZFS Planning with Heterogeneous Gear:

    I probably can sell these, like $10/ea... be gone!

    But it is not likely worth the company's time to deal with you setting up an account someplace to sell them, taking payments, etc.

    Pay to shred them and be done.



  • @scottalanmiller
    The reason I was mixing md RAID and ZFS was that I didn't think ZFS allowed other ZFS devices to be used under a vdev. To accomplish this, would one need multiple zpools?



  • @colejame115 said in ZFS Planning with Heterogeneous Gear:

    @scottalanmiller
    The reason I was mixing md RAID and ZFS was that I didn't think ZFS allowed other ZFS devices to be used under a vdev. To accomplish this, would one need multiple zpools?

    MD RAID can definitely do layers upon layers. But ZFS does this too - it's how ZFS does RAID 10, for example, or RAID 60. You don't need multiple zpools; the pool itself stripes across its vdevs.

    https://www.cyberciti.biz/faq/how-to-create-raid-10-striped-mirror-vdev-zpool-on-ubuntu-linux/
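    The layering described in that link boils down to this (pool and device names are illustrative): the pool is the stripe, and each mirror vdev is one leg of it.

    ```shell
    # A mirrored pair - one vdev. The pool striping across mirror vdevs
    # is ZFS's RAID 10, with no second RAID tool involved:
    zpool create tank mirror /dev/sda /dev/sdb

    # Growing the stripe later is just adding another mirror vdev:
    zpool add tank mirror /dev/sdc /dev/sdd
    ```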

