ZFS is Perfectly Safe on Hardware RAID



  • This is sadly one of those articles that is needed not because there is something to be learned, but to dispel an unfounded and destructive myth that has entered the collective consciousness of the IT industry, at least in its isolated SMB sector. At some point, people began saying that the FreeNAS team had claimed that ZFS was unsafe, or ill advised, to run on hardware RAID. But it is not true that FreeNAS / iX said this; nor is the claim itself true at all.

    Let's break down all of the problems with this and then look at the source of this misinformation.

    1. If ZFS were unsafe on hardware RAID, that would mean that ZFS was unsafe, period. The claim is nonsensical, as it is always made by someone desperately promoting ZFS as a filesystem while simultaneously claiming that ZFS is unreliable and could never be counted on. Clearly someone is confused.
    2. Storage abstraction works in such a way that this should never be a concern. Hardware RAID, software RAID, LVM, or whatever else presents a "drive appearance" to the layers above it. This abstraction and interface system is universal and total. Any working filesystem will work on any of these by definition.
    3. ZFS is used on top of hardware or software RAID in most cases; this is its standard deployment outside of massive Sparc architecture minicomputers, because anything that runs ZFS (FreeBSD, Ubuntu or Sparc AMD64) would be expected to run as a VM and not on bare metal. ZFS' primary use and role is therefore on top of other RAID.
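
    To make the abstraction point concrete: ZFS does not know or care whether the block device it is handed is a raw disk, a hardware RAID virtual disk, or an LVM volume. A minimal sketch, with hypothetical device names, assuming a controller that presents its array as a single block device:

```shell
# The hardware RAID controller presents its array as one block
# device (here assumed to be /dev/sdb) -- ZFS just sees "a drive".
zpool create tank /dev/sdb

# Filesystem features work exactly as they would on a raw disk:
zfs create -o compression=lz4 tank/data
zfs snapshot tank/data@first

# The RAID layer below is invisible to ZFS, as abstraction demands.
zpool status tank
```

    The commands are identical whether /dev/sdb is a raw disk or a RAID controller's virtual disk; only what sits below the block device changes.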

    So where does this myth come from? From what we can tell, it comes from one true but marketing-style statement from the FreeNAS folks: worded correctly, but in such a way that people made false assumptions, added their own implications, and took something true and made it very, very untrue. This was then combined with the "telephone game" effect of people in an insular community repeating the misinformation second or third hand until it became lore, eventually believed even though obvious information and common sense tell us it cannot possibly be true.

    Here is one of the key references used: http://www.freenas.org/blog/freenas-worst-practices/

    [Screenshot of the FreeNAS "Worst Practices" page, captured 2016-12-31]

    In its latest version, the older statements have shifted from true to quite misleading, and are clearly an open attempt at marketing. But let's break this down to be sure we understand why this is a vendor trying to make a sale and not engineers giving you valid information.

    1. This is about FreeNAS, not about ZFS. The information here makes some assumptions that make sense, given that this is a FreeNAS resource, but carrying this implication to other ZFS scenarios makes no sense.
    2. That FreeNAS is "designed to use its own volume manager" is totally fine, but FreeNAS is just FreeBSD with a web GUI, and FreeBSD was also designed to be used on hardware RAID and to do whatever you need it to do. It is equally designed for both; this is really just marketing fluff.
    3. That ZFS won't be able to "balance reads and writes" and such is, again, marketing. Of course it can't, because we are asking the RAID hardware to do that. This isn't a warning; it's just restating the original decision. We could compress all of it to "If you choose hardware RAID over ZFS RAID, you'll be using hardware RAID instead of ZFS RAID." It's a redundant point, worded in such a way as to make it sound as if we obviously want one thing and not the other, while never actually saying that.
    4. Every time we hear that hardware RAID "might" or "could" do something, this is also marketing. Sure, you might buy bad hardware RAID that does a bad job, so they are using the "you might get it wrong if you don't do what we say" threat to make things sound scary when they are not. We might as well say "if you don't take a taxi to the store, you might walk off a cliff instead of going to the store"... okay, but let's just assume that I know how to walk and will actually walk to the store as the alternative.
    5. RAID cards mask SMART. Right, of course they do; so does ZFS. This is point #3 again: the RAID card handles the SMART monitoring, handles the alerting, and so on. There is nothing bad here; this is exactly what we presumably wanted in the first place. All that is being said here is the good things about the hardware RAID, carefully worded to make them sound bad. Marketing at its finest.
    6. The pass through or RAID 0 mode warnings are over the top and actively misleading. They assume that you will use ZFS RAID even though you chose a hardware RAID card instead, and they use that insane assumption to claim that hardware RAID is therefore bad. This isn't logical and is outright incorrect. They are right that using passthrough or RAID 0 mode on hardware RAID is a bad idea; but that's not what we were discussing doing, so this is a warning for someone else about something else.
    7. Their summary is that using something other than ZFS "can" lead to problems. Of course it can. Just like "not taking the taxi" "can" lead to you walking off of a cliff. They are not able to produce any actual problems with the alternatives; instead they are just stating that in the pool of "all possible alternatives" some are bad. Very cheesy marketing BS, well past the point of insulting to anyone who works in IT. We should all be offended by the way this portion of the document is handled.
    8. The warnings about data loss from hardware RAID are based on the risks of incorrectly configured hardware RAID. The same warning applies to incorrectly configured ZFS. So this has no purpose other than to be misleading.
    9. The assumptions are often made that if you are using ZFS for one feature, you must want it for all features. This is ridiculous. ZFS is a combination of three discrete products rolled into one: a filesystem, a logical volume manager and a software RAID system. The desire to use ZFS for one or two of these components does not suggest any need or desire to use it for the other one or two. That is a false and illogical assumption.

    Clearly, this document has an agenda to "sell" ZFS, and a community has sprung up around FreeNAS and ZFS that has carried this banner, drunk the koolaid, and repeated this mantra recklessly and incorrectly - often far less correctly than the document itself, which mostly veils the marketing and, with leading words, explains why ZFS on hardware RAID is fine.

    Summary: There is absolutely zero cause for concern when using ZFS on hardware RAID. It is a filesystem like any other, it works exactly as expected. Everything here is marketing.

    Resources:

    http://www.smbitjournal.com/2014/05/the-cult-of-zfs/
    http://www.smbitjournal.com/2016/06/what-is-drive-appearance/
    https://mangolassi.it/topic/76/open-storage-solutions-at-spiceworld-2010
    https://mangolassi.it/topic/11276/scott-alan-miller-storage-101
    https://mangolassi.it/topic/12043/why-the-smb-still-needs-hardware-raid
    http://www.smbitjournal.com/2015/07/the-jurassic-park-effect/



  • To be clear, ZFS is an amazing filesystem and has a lot to offer. In the rare circumstance where you actually want or need ZFS, actually want it on bare metal rather than in a VM, and don't need the features (like blind swap) of hardware RAID, ZFS could be an excellent choice of filesystem. But it competes with the likes of HammerFS and BtrFS as well; it is not alone in the high end, non-clustered filesystem space tackling the "merger" of layers. ZFS is the granddaddy of these filesystems and is the most mature.

    But ZFS should not be the most common choice. Some questions to ask before deploying ZFS for RAID:

    • Why is a system running ZFS going to bare metal?
    • What is the goal of ZFS in this instance and does it meet it as well or better than other filesystems and/or hardware RAID?
    • Why am I deploying RAID to bare metal rather than RAIN?


  • To really, really make the point, I'm going to take the FreeNAS "recommendations" page and reverse it, as if it were written from the perspective of hardware RAID. This is a good tactic any time you are trying to determine what is good engineering advice versus what is just marketing. Marketing often uses leading statements that include an unstated assumption and allow you to convince yourself that the assumption must be true when the marketer said nothing of the sort. This effect is so strong that many people consider it lying, but cannot identify the lie itself.

    Here is the same best practices article, flipped:

    Using Software RAID with your Hardware RAID Card

    When setting up a RAID array, it has been common knowledge that software RAID is preferable to hardware RAID. This is something of a misconception, as all RAID is software RAID under the hood. If you are using software RAID, it lacks its own operating system, processor and cache. This can sometimes be fine, but it costs you certain key benefits, such as blind swap, and requires you to use your main CPU and memory for storage tasks that could otherwise be offloaded to a dedicated device.

    Our Hardware RAID Card is designed to communicate directly with your disks using its own RAID and volume manager layers. Hardware RAID includes a sophisticated yet efficient strategy for providing various levels of data redundancy, including the mirroring of disks and hardware equivalents of "ZFS" RAID levels like RAID 5 and higher, with the ability to lose up to 128 disks in an array! If the given set of disks is not provided to the hardware RAID card, the card will be unable to utilize its cache or processor and will not be able to balance read and write operations. Software RAID, like ZFS, typically rebuilds disks in a linear manner from beginning to end without any regard for their actual contents.

    The lack of the "one big disk" from hardware RAID will severely limit the RAID card's advantages, and the lack of battery or flash backing of cache is how risks get introduced. Hardware RAID works carefully to guarantee that every write it receives from the operating system or hypervisor is protected and will get to the disk, even if there is power loss. It does this by having its own power protection and keeping data live in the cache even if the system loses power. By eliminating the volatile storage layers found in software RAID solutions, hardware RAID protects against data loss from missing or corrupt writes during a power loss.

    Finally, hardware RAID handles the SMART disk health status information that each disk provides. Very simply, hardware RAID handles all disk health, monitoring and alerting functions, so that installation, configuration and trust of third party software running "somewhere up the stack" is not necessary. This ensures that alerts can reach administrators even if utilities like smartctl do not exist, are misconfigured or have failed. Hardware RAID continues to work even when the operating system or hypervisor does not, or when they lack the necessary tools for the job. Without access to this information, the user is left unaware of classic warning signs of impending disk failure, like reallocated sector count or unusually high temperature. Even the time it takes to run smartctl can be indicative of an impending problem.

    While some hardware RAID cards may have a “pass-through” or “JBOD” mode that simply presents each disk to ZFS, this is wasteful and requires the same work to be done at two different layers without benefit. Leveraging hardware RAID as designed gives you the peace of mind that you have blind swap, fully hot swap supported hardware, direct enterprise SMART monitoring, compatibility at all storage layers, resource offloading and easy management without needing experts who must be trusted to get all manner of RAID, LVM, monitoring and other functions right.

    Long story short, using software RAID can lead to anything from corrupted writes to fatal errors that require you to invest in costly data recovery services.



  • As silly as that information reads, read it carefully. It is all true and all says exactly the opposite of what FreeNAS said to promote ZFS. If the "facts" that make ZFS sound good also make it sound bad, we just found a useless marketing document from a vendor with an agenda to push. None of this is to say that ZFS is bad, that hardware RAID should always be used, or anything of the sort. The points being made are simply these:

    • It is easy to mislead people with simple marketing that allows them to draw their own conclusions without actually needing to lie.
    • Vendors have an agenda and you should never go to them for general IT advice.
    • ZFS on hardware RAID is 100% okay and good.
    • ZFS without hardware RAID is also 100% okay and good.
    • Repeating statements like those from the marketing document without actually understanding what they say strips away context; what might have been true in the marketing document's wording easily becomes untrue when repeated in a different context or with a change of wording.


  • So one final, important point about how this came to be.

    The reason that such a horribly misleading and agenda-filled marketing document was able to get such a run and get so much repetition and support might seem hard to understand until we investigate the Jurassic Park effect of the NAS OS world.

    This document comes from FreeNAS, a NAS OS vendor. They make a good product, but it is still a NAS OS, fundamentally a bad-idea category of product. The customers of FreeNAS are, basically by definition, not experienced storage or systems engineers; if they were, they would not be expected to use a product of this nature, as systems like FreeBSD, Solaris, Ubuntu and Suse do all of the same things but do them better. The use of FreeNAS increases risk and overhead. The sole selling point of FreeNAS over FreeBSD specifically is that it is "faster to get up and running for someone that doesn't know FreeBSD." It's that simple.

    So the community around FreeNAS is, essentially by definition, one of non-storage experts and non-systems experts who all bounce ideas off of each other, with none of them expert in those areas. This results in horrific information sharing, as there is little to no oversight, and misinformation that would be quickly corrected in a community of storage and systems people rapidly becomes myth and lore in a community for whom storage is a "magic black box." FreeNAS and ZFS have become more cargo cult in nature than systems for IT professionals.

    Cargo cults are the cults of the south Pacific islands where a religion grew up around military or cargo vessels dumping or losing cargo; since anything over the immediate horizon was "heaven" and outside of their scope, the source of the boxes of food or clothing was worshiped as a deity, even though it was just humans on ships losing boxes.

    This is a risk in any community, of course, but most communities, like ML or SpiceWorks for example, draw professionals from all levels and encourage peer review. There is an expectation for voices of reason to step in. But in a community that specifically eliminates all experts from practical participation, and especially one that is vendor focused and so carries a very specific promotional agenda in addition to support, there is a totally different effect and there is no reason to expect good data to arise from it.



  • So is ZFS designed to work on JBOD, and is it safer than running it on top of a hardware RAID platform? What kind of reliability is expected from each?



  • @Grey said in ZFS is Perfectly Safe on Hardware RAID:

    So is ZFS designed to work on JBOD and is it safer than running it on top of a hardware raid platform?

    ZFS as a filesystem isn't designed to run one place or another. It's just a filesystem. Works like any other.

    ZFS as a RAID platform is meant to replace hardware RAID with software RAID. Just like any other software RAID.

    So if you want to use ZFS as your RAID system, you don't get hardware RAID. But if you want to use hardware RAID, then you use hardware RAID. This decision is independent of whether ZFS will or will not be used. ZFS is three different products mashed into one, which is how the "marketing" makes things confusing.
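
    The independence of the two decisions shows up directly in how a pool is built. A hedged sketch with hypothetical device names: hand ZFS the raw disks if you want it to be the RAID layer, or hand it the controller's single virtual disk if hardware RAID is doing that job; the filesystem and volume manager features are the same either way:

```shell
# Option A: ZFS as the RAID layer -- give it the raw disks.
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Option B: hardware RAID as the RAID layer -- give ZFS the
# controller's one virtual disk instead (device name assumed).
zpool create tank /dev/sdb

# Either way, the filesystem and volume manager products of ZFS
# (datasets, snapshots, compression) are identical:
zfs create -o compression=lz4 tank/vmstore
```

    Choosing Option B simply means the RAID product of the ZFS bundle goes unused, exactly as choosing hardware RAID implies.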



  • @Grey said in ZFS is Perfectly Safe on Hardware RAID:

    What kind of reliability is expected from each?

    Totally depends. Not all RAID controllers are equal, and not all setups of ZFS are equal. Here are some differences:

    • ZFS software RAID supports RAID 7; essentially nothing else does. So that is a unique feature of using it for RAID. But it only matters if you want RAID 7, and very few people do, so normally this doesn't matter.
    • If you are using non-business class hardware RAID, you have a write hole protection advantage with ZFS. But if you are doing that, you didn't care about reliability anyway, so this advantage doesn't matter. No business class hardware RAID is subject to the write hole, so this advantage is purely academic for ZFS.
    • Any business class hardware RAID has battery-backed or non-volatile cache options that ZFS cannot match, as this requires hardware. So hardware RAID has safety advantages; and if ZFS is run without cache, hardware RAID has cache advantages instead.
    • ZFS has advanced checksumming that is rare or impossible to find in hardware RAID, which is useful if data is becoming corrupt on disk - not something we are normally concerned about, but a nice feature.
    • All business class hardware RAID does blind swap, a pretty massive protection feature in the SMB market.
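
    Of that list, the checksumming item is the one genuinely distinctive ZFS feature, so it is worth seeing what it actually means. The toy Python sketch below illustrates the principle only - it is not ZFS's real implementation, which stores fletcher4 or SHA-256 checksums in parent block pointers - but the idea is the same: keep a checksum apart from the data and verify it on every read, so silent on-disk corruption is detected instead of returned as good data.

```python
import hashlib

class ChecksummedStore:
    """Toy block store that detects silent corruption on read,
    illustrating ZFS-style end-to-end checksums (not real ZFS code)."""

    def __init__(self):
        self.blocks = {}     # block id -> data
        self.checksums = {}  # checksums kept *separately* from the data,
                             # as ZFS keeps them in parent block pointers

    def write(self, block_id, data):
        self.blocks[block_id] = data
        self.checksums[block_id] = hashlib.sha256(data).hexdigest()

    def read(self, block_id):
        data = self.blocks[block_id]
        if hashlib.sha256(data).hexdigest() != self.checksums[block_id]:
            raise IOError(f"checksum mismatch on block {block_id}: bit rot detected")
        return data

store = ChecksummedStore()
store.write(0, b"important data")
assert store.read(0) == b"important data"   # clean read passes

# Simulate bit rot: the disk flips a byte without reporting an error.
store.blocks[0] = b"importent data"
try:
    store.read(0)
except IOError as e:
    print(e)  # the corruption is caught instead of silently returned
```

    A RAID controller verifying parity can tell that *something* in a stripe is inconsistent, but a per-block checksum held above the device layer can tell *which* copy is bad, which is why this feature is hard for hardware RAID to replicate.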

    Remember that there is nothing stopping your hardware RAID from implementing ZFS itself as its RAID option. So it is literally impossible for software ZFS RAID to offer something that hardware RAID "can't" do. All we ever compare are specific implementations: ZFS software RAID set up as X compared to hardware controller Z set up as Y, for example.

    A cool academic exercise would be to build a ZFS based hardware RAID card. That would be pretty cool. Then you could add non-volatile RAM to ZFS itself.



  • That list makes hardware RAID sound safer than ZFS, which is probably not quite true. But it is the case that the average implementation of hardware RAID is quite a bit safer than the average implementation of ZFS software RAID. Hardware RAID "handles everything for you," protecting you from most bad decisions. ZFS leaves all the nitty gritty details up to you, which makes it super, duper easy to mess something up and leave yourself vulnerable. This is exacerbated by the Cult of ZFS problem and the loads of misinformation swirling about its use. So the average person using ZFS is not even remotely prepared for what is needed to use it safely.

    Some problems that we see people have when using ZFS without fully understanding storage:

    • Believing that ZFS doesn't use RAID (this is extremely common).
    • Believing that RAIDZ is magic, rather than a brand name, and that normal RAID concerns do not apply. So we often see people implement RAID 5 in reckless, insane situations using "it's RAIDZ" as an excuse, as if RAIDZ weren't just RAID 5 - literally just a brand name for RAID 5.
    • Treating features common to all RAID systems as "unique" and believing that ZFS has feature after feature of protection that makes protecting against storage failure unnecessary.
    • Not understanding hot swap and blind swap differences, and creating systems that they do not know how to address should a drive fail.
    • Believing that ZFS, being magic, is not at risk from power loss, and failing to protect caches from power issues - something they are not normally used to dealing with, as hardware RAID does this for them.
    • Not understanding the CPU and memory needs of ZFS, especially with features like dedupe and RAIDZ3.
    • Ignoring common RAID knowledge and thinking that using ZFS means not using mirroring technologies.
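
    The RAIDZ point is easy to demonstrate with arithmetic: single-parity RAID, whatever the brand name, stores one XOR parity block per stripe and can therefore rebuild exactly one lost disk, never two. A toy Python sketch of the parity math (illustrative only; real RAIDZ adds variable stripe width, but the single-parity limit is identical):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together -- the parity
    operation behind both RAID 5 and RAIDZ1."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A stripe across three data disks plus one parity disk.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Lose any ONE disk: XOR of the survivors plus parity rebuilds it.
lost = 1
survivors = [d for i, d in enumerate(data) if i != lost]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data[lost]

# Lose TWO disks and single parity cannot recover either one --
# the same limit applies whether the label says RAID 5 or RAIDZ1.
```

    The danger the author describes is exactly people trusting the RAIDZ label while running this same one-disk-loss math on arrays where a second failure during rebuild is likely.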