When do we do away with Hardware RAID?
-
Hardware RAID has its own processor and memory, and all RAID processing happens in dedicated hardware that does not rely on the core CPU and system memory. It's a completely hardware abstracted, encapsulated and isolated storage subsystem that is interfaced to the computer as a single, discrete block device.
Software RAID is a software based storage abstraction that utilizes the core CPU and system memory for RAID functions and abstracts and encapsulates the block devices after they have been attached to the OS.
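To make that concrete, here is a rough sketch with Linux MD RAID (device names like /dev/sdb and /dev/sdc are just hypothetical examples): the disks are attached to the OS first, then the software layer assembles them into a new block device on top.
```sh
# Minimal sketch, assuming a Linux host with mdadm installed and two spare
# disks already visible to the OS (hypothetical device names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# The individual members and the new /dev/md0 array are all ordinary block
# devices as far as the OS is concerned
lsblk
cat /proc/mdstat
```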
-
This is a huge question and depends on a huge range of needs. Hardware RAID is the slower of the two and less flexible, but it is far easier to use and manage, especially in an SMB or in an enterprise using a datacenter staffed by bench pros. Hardware RAID offloads CPU and memory from the main system (up to about 10% of a core and up to about 2GB of RAM currently), provides blind swapping, which is pretty critical for SMB and datacenter usage, and has power loss protection, typically via non-volatile caching.
Software RAID requires you to handle all of the power protection externally to the chassis and pretty much precludes blind swapping, requiring a knowledgeable system admin to be on hand for even the most trivial drive replacement. Software RAID leverages unused core CPU and memory for more processing power than a RAID card can realistically provide. Software RAID is more flexible and has more options (RAID 7 is unique to software RAID, for example.)
Software RAID is obviously cheaper in capex but requires more effort and experience. Hardware RAID costs more and is marginally slower but makes hardware support vastly easier.
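As one small example of that extra effort: software RAID has no controller firmware watching the array for you, so even basic failure alerting has to be set up by hand. A rough sketch with mdadm (the config path and email address are placeholders; locations vary by distro):
```sh
# Have mdadm email someone when a member drive fails or an array degrades
echo "MAILADDR storage-alerts@example.com" >> /etc/mdadm/mdadm.conf

# Run mdadm's bundled monitor as a background daemon watching all arrays
mdadm --monitor --scan --daemonise
```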
-
So let's just imagine an infrastructure that has a fault tolerant power system to run its systems.
Everything from the Servers to the motion activated hand dryers.
This business is looking to do a full system refresh from the ground up. You @scottalanmiller have been asked to design the system to reduce capex as much as possible for this project.
How / what would you recommend? What technical hurdles would the IT Team have to overcome in the event of a failed array or single disk failures in a software RAID deployment?
I'm just trying to think up possible use cases of an all Software RAID scenario. Hardware RAID has been tried and tested, it works, it's "simple to support" in comparison to Software RAID.
Things of critical importance would be:
- Array performance
- Array recoverability
- Capex for the project
-
@DustinB3403 said:
I'm just trying to think up possible use cases of an all Software RAID scenario. Hardware RAID has been tried and tested, it works, it's "simple to support" in comparison to Software RAID.
Not as tried and tested as software RAID. Software RAID is older (everything happens there first) and has been used in the big enterprise far more, as all high end systems are, and always have been, software RAID (Sparc, Integrity / Itanium, SuperDome, VMS, AIX, HP-UX, IRIX, zOS, System i, etc.)
Hardware RAID for the longest time was for the tiny end of the SMB only. It is only quite recently that hardware RAID has been considered adequate for mid-range processing loads, and that is almost exclusively from AMD64 servers pushing their way into bigger and bigger roles, not from hardware RAID earning its way up.
-
@DustinB3403 said:
How / what would you recommend? What technical hurdles would the IT Team have to overcome in the event of a failed array?
They would need to have systems administration resources available to do the drive swaps from the logical side rather than having only bench staff in the data center. You could never do the swap using just "remote hands"; it requires active coordination with the person(s) controlling the RAID system, who will deactivate the failed drive and activate the new one.
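For context, roughly what that coordination looks like with Linux MD RAID (array and device names here are hypothetical):
```sh
# 1. The admin marks the dead member as failed and removes it from the array
mdadm --manage /dev/md0 --fail /dev/sdb
mdadm --manage /dev/md0 --remove /dev/sdb

# 2. Only now can remote hands pull the physical drive and seat the replacement

# 3. The admin adds the new disk back and the array starts rebuilding
mdadm --manage /dev/md0 --add /dev/sdb
cat /proc/mdstat   # watch the resync progress
```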
-
So you are saying that software raid does not support ~~hot~~ blind swap?
-
@dafyre I don't think it can.
There is no way that the system could prepare for a disk being ejected live without hardware to manage it.
-
@DustinB3403 said:
@dafyre I don't think it can.
There is no way that the system could prepare for a disk being ejected live without hardware to manage it.
The idea would be that the hardware supports hot swap. I seem to remember @scottalanmiller mentioning recently that pretty much any business class server should support hot swap at a minimum.
-
Here is the definition.
http://mangolassi.it/topic/6816/hot-swap-vs-blind-swap/2
Now which applies to MDADM RAID? If Cold swapping is the only way of swapping drives, then I guess it immediately excludes it from any Enterprise or even business solution.
-
@dafyre said:
So you are saying that software raid does not support ~~hot~~ blind swap?
Some do, but it is super rare. I've never seen it outside of a NAS.
-
@dafyre said:
@DustinB3403 said:
@dafyre I don't think it can.
There is no way that the system could prepare for a disk being ejected live without hardware to manage it.
The idea would be that the hardware supports hot swap. I seem to remember @scottalanmiller mentioning recently that pretty much any business class server should support hot swap at a minimum.
Hot swap any business class RAID will handle. But blind swap, not so much.
-
@DustinB3403 said:
Here is the definition.
http://mangolassi.it/topic/6816/hot-swap-vs-blind-swap/2
Now which applies to MDADM RAID? If Cold swapping is the only way of swapping drives, then I guess it immediately excludes it from any Enterprise or even business solution.
MD RAID (mdadm is the management utility for MD RAID) supports hot swap, of course, and some vendors like ReadyNAS and Synology add their own extensions to add blind swapping. No one would even discuss it if it wasn't hot swap.
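Which is easy to see on a plain Linux box: after hot-plugging a replacement disk the kernel picks it up right away, but the array just sits degraded until an admin steps in. That manual step is exactly what the NAS vendors' blind swap extensions automate. Device names below are hypothetical:
```sh
# Hot swap: the kernel detects the newly inserted disk immediately
dmesg | tail              # replacement shows up, e.g. as /dev/sdb
lsblk

# But stock MD RAID stays degraded until someone runs the mdadm --add step
# shown earlier - that is the part blind-swap firmware does for you
mdadm --detail /dev/md0
```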