
    colejame115

    @colejame115


    Best posts made by colejame115

    • ZFS Planning with Heterogeneous Gear

      @scottalanmiller
      (First thing, I ask all questions as a student, by no means an expert.) I've been thinking of a hypothetical storage server that I might try to setup for personal use. I really don't want to use FreeNAS as I like the option for more granular control. I also do want to use ZFS cause it sounds quite resilient and interesting, when setup up correctly - obviously the key to it working well. Something I've seen a lot of when discussing ZFS includes both the hardware RAID issue and mixing of drive sizes/vdev drive numbers / vdev types. It seems almost all of the reasons people say not to mix those things is performance related based on HDDs.

      Here's my hypothetical scenario. Say we have a zpool with two vdevs, both mirrors. Vdev1 mirrors one 2TB SSD against a logical 2TB disk built from an mdadm RAID 0 of two 1TB SSDs. Vdev2 mirrors one logical 2TB volume built from two 1TB SSDs against another logical 2TB volume built from four 500GB SSDs. Would the logical disks, whether from hardware RAID or md RAID, cause major issues with ZFS's error checking, self-healing, or scrubbing? And what about resilvering: would it be harder because of the logical disks? I know there would be some performance overhead, but I'm assuming SSDs make it tolerable where HDDs wouldn't. I also don't see the RAID 0 arrays as a major point of failure, given how much more resilient SSDs are than HDDs. I'd over-provision all the drives by about 15% for slightly better performance and wear-leveling.

      I don't know if this would be okay in production, but based on my understanding it should work fine for home use.
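      To make the layout concrete, here's a rough sketch of the commands that would build it. The device names (/dev/sdX, /dev/mdN) and the pool name are placeholders; this is purely illustrative of the hypothetical above, not something I've tested:

      ```shell
      # Vdev1's second leg: a logical 2TB disk from an md RAID 0 of two 1TB SSDs
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

      # Vdev2's two legs: 2TB from two 1TB SSDs, and 2TB from four 500GB SSDs
      mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd /dev/sde
      mdadm --create /dev/md2 --level=0 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi

      # The pool: two mirror vdevs, (sda + md0) and (md1 + md2)
      zpool create tank mirror /dev/sda /dev/md0 mirror /dev/md1 /dev/md2

      # ZFS only sees the four "disks", not the SSDs beneath the md layer
      zpool status tank
      ```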

      posted in IT Discussion

    Latest posts made by colejame115

    • RE: ZFS Planning with Heterogeneous Gear

      @scottalanmiller
      The reason I was mixing md RAID and ZFS is that I didn't think ZFS allowed other ZFS devices to be used under a vdev. To accomplish that, would one need multiple zpools?
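      For context on the structure I mean: as I understand it, a single zpool already holds multiple vdevs built directly from block devices, so the question is whether a vdev's members can themselves be ZFS-backed. A sketch of the standard one-pool, many-vdev layout (device and pool names are placeholders):

      ```shell
      # One pool, two mirror vdevs, no md layer in between
      zpool create tank \
          mirror /dev/sda /dev/sdb \
          mirror /dev/sdc /dev/sdd

      # More vdevs can be added to the same pool later
      zpool add tank mirror /dev/sde /dev/sdf
      ```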

      posted in IT Discussion
    • RE: ZFS Planning with Heterogeneous Gear

      @scottalanmiller
      Yeah, I was pretty sure this would be a weird setup. The primary reason for abstracting the logical disks is to do exactly what you say: reuse old SSDs that have no other use. Is there a better way to do this? Instead of abstracting with md RAID, what would you suggest?

      posted in IT Discussion