When is an SSD a MUST HAVE for a server? Thoughts? Discussion :D
-
@scottalanmiller said:
@MattSpeller said:
@ardeyn said:
There is also the difference of using SSD for caching or for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost effective alternative for going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
Can a software cache work with a hardware RAID? or do they have to be paired? (hardware with hardware, software with software?)
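The rough sizing rule from the post above (3TB of storage needing around 300GB of SSD cache, i.e. about 10%) can be sketched as a quick calculation. This is purely illustrative; the 10% ratio is the thread's rule of thumb, not a vendor specification:

```python
def ssd_cache_size_gb(storage_tb: float, cache_ratio: float = 0.10) -> float:
    """Rough SSD cache sizing: ~10% of backing storage (rule of thumb)."""
    storage_gb = storage_tb * 1000  # decimal TB, as drive vendors count
    return storage_gb * cache_ratio

# 3 TB of storage -> roughly 300 GB of SSD cache
print(ssd_cache_size_gb(3))  # 300.0
```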
-
@Dashrender said:
Can a software cache work with a hardware RAID? or do they have to be paired? (hardware with hardware, software with software?)
To software the hardware RAID is just a drive, so it has no means of knowing that it is anything special.
That's the miracle of the block device interface system.
-
@scottalanmiller said:
To software the hardware RAID is just a drive, so it has no means of knowing that it is anything special.
That's the miracle of the block device interface system.
Please tell me that you're saying that - if you're using a RAID card, then the card must support the use of SSD cache - otherwise I have no clue what you're trying to say.
-
@Dashrender said:
Please tell me that you're saying that - if you're using a RAID card, then the card must support the use of SSD cache - otherwise I have no clue what you're trying to say.
Nope, sorry
I'm saying that if you use a software caching system - let's use ZFS as an example - you can attach the hardware RAID and ZFS will just see it as a single SATA or SAS drive; it has no idea that there is RAID underneath. ZFS will then let you cache in memory and/or on an SSD to accelerate that RAID array, because to ZFS it's just a normal hard drive.
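Scott's point can be illustrated with a toy model (purely illustrative Python, not ZFS code): the caching layer only sees reads and writes against block addresses, so whether the backing store is a single disk or a whole hardware RAID array is invisible to it.

```python
class BlockDevice:
    """Toy backing store: could be one disk or a hardware RAID virtual disk;
    the caller can't tell the difference through this interface."""
    def __init__(self):
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba, b"\x00")

    def write(self, lba, data):
        self.blocks[lba] = data


class CachedDevice:
    """Minimal write-through read cache layered on any BlockDevice."""
    def __init__(self, backing, capacity=2):
        self.backing = backing
        self.capacity = capacity
        self.cache = {}  # lba -> data, with toy FIFO eviction

    def read(self, lba):
        if lba not in self.cache:
            self._insert(lba, self.backing.read(lba))
        return self.cache[lba]

    def write(self, lba, data):
        self.backing.write(lba, data)  # write-through: no dirty data held back
        self._insert(lba, data)

    def _insert(self, lba, data):
        if lba not in self.cache and len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # evict oldest entry
        self.cache[lba] = data


raid = BlockDevice()      # stands in for a hardware RAID array
dev = CachedDevice(raid)  # the cache neither knows nor cares what's below
dev.write(0, b"hello")
print(dev.read(0))  # b'hello'
```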
-
@scottalanmiller said:
Nope, sorry
I'm saying that if you use a software caching system - let's use ZFS as an example - you can attach the hardware RAID and ZFS will just see it as a single SATA or SAS drive; it has no idea that there is RAID underneath. ZFS will then let you cache in memory and/or on an SSD to accelerate that RAID array, because to ZFS it's just a normal hard drive.
OK that makes sense.
Do Hyper-V and ESXi support this? I'm guessing that XS and KVM do, since they can use ZFS for their VM storage file system (I'm assuming).
-
They all do to some degree, but all very differently.
-
Any opinions on VSANs that have SSD caching? I mean, they give you a lot of other stuff, but what would you get in terms of performance?
-
@ardeyn said:
Any opinions on VSANs that have SSD caching? I mean, they give you a lot of other stuff, but what would you get in terms of performance?
Good question for @original_anvil, as he does this. But it gives you a ton, the same as you would get, more or less, with any caching system. Getting high-performance cache close to where it is used (the closer the better) gives the bigger performance leap. VSAN has the same bottlenecks from the disks that any other storage technology does. If your VSAN is pure SSD, then an SSD cache would do very little (essentially nothing), but if your VSAN is spinning disks, then an SSD cache would have the normal acceleration advantages.
If you were willing to have your SSD cache acknowledge write commits without the data being flushed to the VSAN and replicated to other nodes, you could get insane performance improvements, of course, but that would come with extreme risk that would pretty much defeat the VSAN's purpose. But from a read perspective, the speed-ups are identical to any other caching system.
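The risk described above can be shown with a toy sketch (illustrative Python only, not any vendor's implementation): a write-back cache acknowledges writes before they reach the replicated backing store, so anything still dirty is lost if the node dies before a flush.

```python
class WriteBackCache:
    """Toy write-back cache: acks writes before they hit the backing store.
    Fast, but dirty data is lost if the node dies before flush() runs."""
    def __init__(self, backing):
        self.backing = backing  # dict standing in for the replicated VSAN store
        self.dirty = {}

    def write(self, key, value):
        self.dirty[key] = value  # acknowledged immediately; not yet durable

    def flush(self):
        self.backing.update(self.dirty)
        self.dirty.clear()


vsan = {}
cache = WriteBackCache(vsan)
cache.write("blk0", "data")
# A crash at this point means "blk0" never reached the VSAN; a write-through
# cache would have committed to the backing store before acknowledging.
print("blk0" in vsan)  # False until flush() runs
cache.flush()
print("blk0" in vsan)  # True
```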
-
@scottalanmiller Thanks for bringing me in!
@ardeyn So, yeah, as Scott said, StarWind Virtual SAN (aka StarWind VSAN) allows using SSDs as one of the tiers of the cache, Level 2 to be exact. So the combination of RAM as the L1 cache and flash as the L2 cache gives a really good performance boost. The exact numbers actually depend on the workload set, so I just don't want to mislead you here. BTW, the data within the cache synchronizes across all the nodes, so we are free to claim that we do fault tolerance at the cache level. Anyway, here is a bit more information about server-side caching:
https://www.starwindsoftware.com/caching-page
Let me know if there is anything else I can help you with.
-
@original_anvil That seems pretty interesting. Does look like a good alternative to all flash, if on a tight budget.
-
SSD cache is almost always a great alternative to all flash. All flash, unless it is extremely cheap, generally does not deliver that much value (special-case databases notwithstanding). SSD caching is extremely effective and generally very cheap in comparison to all flash. So something like 90% of the performance gain for something like 30% of the cost increase. A good tradeoff nearly all of the time.
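The tradeoff above can be put into a quick number. The 90% and 30% figures are the thread's rough estimates, not benchmarks:

```python
def value_ratio(perf_gain_frac: float, cost_increase_frac: float) -> float:
    """Performance gained per unit of added cost (bigger is better)."""
    return perf_gain_frac / cost_increase_frac

# SSD caching: ~90% of the all-flash performance gain at ~30% of the cost,
# i.e. roughly 3x the performance per dollar of the all-flash upgrade path.
print(round(value_ratio(0.9, 0.3), 1))  # 3.0
```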
-
@scottalanmiller Happen to know if you can enable CacheCade on a Dell R720XD after you have an array created on the main drives?
-
@wrx7m said:
@scottalanmiller Happen to know if you can enable CacheCade on a Dell R720XD after you have an array created on the main drives?
After? No, not sure about that. The xByte team would know that one.
-
Thanks for the mention @scottalanmiller , @wrx7m - Just talked to the engineers. Short answer is yes. If you put the SSDs into the rear backplane, the system will automatically ask if you want them to be CacheCade disks when you configure them. If you add them into other slots, you can change them into a CacheCade array when you are editing the controller settings. You press F2 to select the type in the settings.
-
That's awesome @Lyndsie_xByte thanks for following up so quickly.
-
@Lyndsie_xByte Thank you! That was very quick! I didn't even know you could cachecade on the front.
-
@Lyndsie_xByte said:
Thanks for the mention @scottalanmiller , @wrx7m - Just talked to the engineers. Short answer is yes. If you put the SSDs into the rear backplane, the system will automatically ask if you want them to be CacheCade disks when you configure them. If you add them into other slots, you can change them into a CacheCade array when you are editing the controller settings. You press F2 to select the type in the settings.
I should be able to do a hot add, right? Then configure from iDRAC? I would like to do it without taking down the server, if possible.
-
Yes, you should be able to. iDRAC should work, or a software utility.
-
We use Edge SSDs that we got from xByte and are very happy with them. Sure, SSDs cost more, but man they are fast. And like Scott said, you can do RAID 5 with SSDs, so you get more storage than RAID 10 but still blow the doors off any spinning drives.
-
@vandis33 said:
We use Edge SSDs that we got from xByte and are very happy with them. Sure, SSDs cost more, but man they are fast. And like Scott said, you can do RAID 5 with SSDs, so you get more storage than RAID 10 but still blow the doors off any spinning drives.
I wish something like this existed for HP.