When is an SSD a MUST HAVE for a server? Thoughts? Discussion :D



  • [I feel like this has been asked before, so Sorry! in advance]

    This is kind of a vague question. When do you admins think an SSD is a MUST HAVE for a server? I know IOPS would be the deciding factor, but I want to know where the line between Good Enough and Overkill is.

    [Back story] This question popped into my head while I was taking a shower 😛 . To me, I would give EVERY workstation an SSD for no obvious reason... just because. SSDs improve boot speed and other loading times + they make users complain less. Win-win for me and the users.
    But what about servers? Our file server is running on SATA III drives at the moment and we do not see any poor performance, except network bandwidth. Our IOPS are quite low, less than 100. I wanted to go with enterprise-grade SSDs for our Dell PowerEdge boxes but we just couldn't afford it. We'll need 3-4TB total per server (2 servers total) $$$.

    What are your thoughts on this? Are SSDs the way of the future for servers? Are manufacturer-certified enterprise SSDs the only option? I've seen some people use enthusiast SSDs like the Samsung Pro or Kingston enterprise drives.
    ps. If you have any ideas on how to deal with the network bandwidth, leave a comment below. I have a 48-port Cisco Gigabit switch. It's a couple of years old but still kicking. Any recommendations for a replacement switch are welcome as well.

    THANKS for reading! 😃



  • I used the EDGE SSDs from xByte in my latest server. That would help with your cost issue.

    My thinking was, yeah, it's overkill, but it wasn't much more, and it'll make the system speedier longer.

    Of course, had I been buying 1000 of these instead of 1, I would have thought differently, probably.



  • @BRRABill said:

    I used the EDGE SSDs from xByte in my latest server. That would help with your cost issue.

    My thinking was, yeah, it's overkill, but it wasn't much more, and it'll make the system speedier longer.

    Of course, had I been buying 1000 of these instead of 1, I would have thought differently, probably.

    I already got a quote from xByte. To do RAID 10 with 2TB I'll need 8 drives of 480GB... totaling ~$3K per server... or almost $6K total 😞
    The Kingston SSDs are half the price... it's just that they may not work with the Dell RAID controller.



  • @LAH3385 said:

    I already got a quote from xByte. To do RAID 10 with 2TB I'll need 8 drives of 480GB... totaling ~$3K per server... or almost $6K total 😞
    The Kingston SSDs are half the price... it's just that they may not work with the Dell RAID controller.

    We tried the Kingston drives, with the help of Kingston. They did not work for us. I mean, they WORKED, but they flashed amber.

    They'll work with you if you want to demo drives. We ended up keeping the test drives for use in other machines.



  • @LAH3385 said:

    @BRRABill said:

    I used the EDGE SSDs from xByte in my latest server. That would help with your cost issue.

    My thinking was, yeah, it's overkill, but it wasn't much more, and it'll make the system speedier longer.

    Of course, had I been buying 1000 of these instead of 1, I would have thought differently, probably.

    I already got a quote from xByte. To do RAID 10 with 2TB I'll need 8 drives of 480GB... totaling ~$3K per server... or almost $6K total 😞
    The Kingston SSDs are half the price... it's just that they may not work with the Dell RAID controller.

    With SSDs you would normally do RAID 5. Cheaper, and generally still more reliable than RAID 10 on Winchesters (spinners).
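
    For illustration, a rough back-of-the-envelope sketch in Python. The 480GB size comes from the quote above; the $375-per-drive price is just an assumption picked to roughly line up with the ~$3K/8-drive figure, not a real quote:

    ```python
    # Back-of-the-envelope: drives needed and rough cost, RAID 10 vs RAID 5.
    # Drive price is a made-up placeholder; 480GB matches the quote above.
    DRIVE_GB, DRIVE_PRICE = 480, 375

    def usable_gb(level, n):
        if level == "RAID 10":
            return (n // 2) * DRIVE_GB      # mirrored pairs: half the raw capacity
        return (n - 1) * DRIVE_GB           # RAID 5: one drive's worth lost to parity

    def drives_needed(level, target_gb):
        n, step = (4, 2) if level == "RAID 10" else (3, 1)
        while usable_gb(level, n) < target_gb:
            n += step
        return n

    for level in ("RAID 10", "RAID 5"):
        n = drives_needed(level, 1900)      # OP's ~2TB target (8 x 480GB RAID 10 = 1920GB)
        print(f"{level}: {n} drives, {usable_gb(level, n)} GB usable, ~${n * DRIVE_PRICE}")

    # RAID 10: 8 drives, 1920 GB usable, ~$3000
    # RAID 5:  5 drives, 1920 GB usable, ~$1875
    ```

    Same usable capacity, three fewer drives; that's where the "cheaper" comes from.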



  • It's all about performance and cost. SSDs cost more per GB and less per IOPS. It all depends on what you want from your server. In a desktop the speed difference is huge, the price difference is barely noticeable, and the reduction in maintenance alone pays for it.

    In servers we often have to deal with massive amounts of storage, and SSDs are often unaffordable. But at the same time, servers often have to do things very quickly for many users, making speed important. It all depends on how the server is used. There is no handy answer.
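
    To make that trade-off concrete, a tiny sketch with made-up list prices and ballpark IOPS figures (real numbers vary a lot by vendor and model):

    ```python
    # Illustrative $/GB vs $/IOPS. Prices and IOPS are assumptions, not quotes.
    drives = {
        "10K SAS HDD, 1.2TB":    {"price": 250, "gb": 1200, "iops": 150},
        "Enterprise SSD, 480GB": {"price": 375, "gb": 480,  "iops": 25000},
    }
    for name, d in drives.items():
        print(f"{name}: ${d['price']/d['gb']:.2f}/GB, ${d['price']/d['iops']:.4f}/IOPS")

    # HDD: ~$0.21/GB but ~$1.67 per IOPS
    # SSD: ~$0.78/GB but ~$0.015 per IOPS -- orders of magnitude cheaper per IOPS
    ```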



  • Here is a quick guide, however:

    • File Servers: Currently almost always Winchesters because capacity is what matters.
    • App Servers: Winchesters normally because everything gets loaded into memory and disk speed doesn't matter.
    • Database Servers: Almost always SSDs because IOPS matter and little else.
    • Terminal Servers and VDI: Almost always SSD because speed matters and capacity does not and dedupe is very effective.


  • @scottalanmiller said:

    Here is a quick guide, however:

    • File Servers: Currently almost always Winchesters because capacity is what matters.
    • App Servers: Winchesters normally because everything gets loaded into memory and disk speed doesn't matter.
    • Database Servers: Almost always SSDs because IOPS matter and little else.
    • Terminal Servers and VDI: Almost always SSD because speed matters and capacity does not and dedupe is very effective.

    I forgot to mention: the server is actually a hypervisor (Hyper-V) with a VM acting as a file server. Not sure if that makes any difference. I'm guessing it falls under VDI.



  • Technically the answer is NEVER. It's never a must. If it were....



  • @LAH3385 said:

    @scottalanmiller said:

    Here is a quick guide, however:

    • File Servers: Currently almost always Winchesters because capacity is what matters.
    • App Servers: Winchesters normally because everything gets loaded into memory and disk speed doesn't matter.
    • Database Servers: Almost always SSDs because IOPS matter and little else.
    • Terminal Servers and VDI: Almost always SSD because speed matters and capacity does not and dedupe is very effective.

    I forgot to mention: the server is actually a hypervisor (Hyper-V) with a VM acting as a file server. Not sure if that makes any difference. I'm guessing it falls under VDI.

    How would that fall under VDI? You said it was a file server, so it's a file server.



  • @scottalanmiller said:

    @LAH3385 said:

    @scottalanmiller said:

    Here is a quick guide, however:

    • File Servers: Currently almost always Winchesters because capacity is what matters.
    • App Servers: Winchesters normally because everything gets loaded into memory and disk speed doesn't matter.
    • Database Servers: Almost always SSDs because IOPS matter and little else.
    • Terminal Servers and VDI: Almost always SSD because speed matters and capacity does not and dedupe is very effective.

    I forgot to mention: the server is actually a hypervisor (Hyper-V) with a VM acting as a file server. Not sure if that makes any difference. I'm guessing it falls under VDI.

    How would that fall under VDI? You said it was a file server, so it's a file server.

    Yeah. My bad. Just read more about VDI and it doesn't apply to us.



  • A few things to add up, so to speak:

    • Cost of SSDs
    • Current IOPS held back by spinning rust
    • Future IOPS requirements
    • Supporting hardware (RAID controller upgrade? 3.5" to 2.5" adapters?)

    Then subtract the cost of a whizzing rust array. If cost <= benefit, purchase.
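
    Something like the sketch below, where every number is a placeholder to be replaced with your own quotes:

    ```python
    # Placeholder numbers -- substitute your own quotes.
    ssd_option = {
        "SSDs":                  3000,  # e.g. the xByte quote per server
        "RAID controller":          0,  # upgrade cost, if the current one won't do
        "3.5-to-2.5 adapters":      0,  # only if the bays need them
    }
    spinner_option = {
        "equivalent HDD array":  1200,  # whatever a spinning array of the same size costs
    }
    premium = sum(ssd_option.values()) - sum(spinner_option.values())
    print(f"SSD premium: ${premium}")
    # If the extra IOPS headroom is worth at least that much to you, buy the SSDs.
    ```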



  • Typically a single SSD will provide more IOPS than an entire 8-drive array of spinning rust will. At that point it's about bus bandwidth and price.



  • @Dashrender said:

    Typically a single SSD will provide more IOPS than an entire 8-drive array of spinning rust will. At that point it's about bus bandwidth and price.

    And by typical, he means "any we've ever heard of."



  • The fastest 8 drive RAID 0 array on SAS 15K is only around 2,000 IOPS. Slowest SSD is normally around 25,000 IOPS.
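
    Rough math behind those figures, for anyone curious (the per-drive numbers are typical published ballparks, not measurements):

    ```python
    # Ballpark read IOPS: RAID 0 reads scale roughly with spindle count.
    PER_DRIVE_IOPS = {"15K SAS": 250, "10K SAS": 140, "7.2K SATA": 80, "SATA SSD": 25000}

    def raid0_read_iops(drive, count):
        return PER_DRIVE_IOPS[drive] * count

    print(raid0_read_iops("15K SAS", 8))   # 2000 -- even with an optimistic 250/drive
    print(raid0_read_iops("SATA SSD", 1))  # 25000 -- a single low-end SSD
    ```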



  • @scottalanmiller said:

    The fastest 8 drive RAID 0 array on SAS 15K is only around 2,000 IOPS. Slowest SSD is normally around 25,000 IOPS.

    My IOPS on the EDGE SSDs from the other day were
    Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
    Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]
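
    (For anyone checking the math, the MB/s and IOPS columns are the same measurement expressed two ways: throughput = IOPS × block size.)

    ```python
    # CrystalDiskMark's 4KiB random results: MB/s = IOPS x 4096 bytes / 1e6.
    block = 4 * 1024
    print(94546.4 * block / 1e6)   # ~387.26 MB/s, matches the read line
    print(23395.8 * block / 1e6)   # ~95.83 MB/s, matches the write line
    ```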



  • @BRRABill said:

    @scottalanmiller said:

    The fastest 8 drive RAID 0 array on SAS 15K is only around 2,000 IOPS. Slowest SSD is normally around 25,000 IOPS.

    My IOPS on the EDGE SSDs from the other day were
    Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
    Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]

    So, stupidly faster than what you were used to?



  • @DustinB3403 said:

    So, stupidly faster than what you were used to?

    Oh yeah.

    My numbers from the regular drives in there were all over the place, but probably pretty normal.
    I posted them in this thread if anyone is interested:
    http://www.mangolassi.it/topic/7458/swapping-drive-to-another-raid-controller/2
    I posted different drives and also different PERC cards.
    The results don't make 100% sense to me.

    I've never tested the 10-year-old servers I am currently using. That would be interesting.



  • @BRRABill said:

    @scottalanmiller said:

    The fastest 8 drive RAID 0 array on SAS 15K is only around 2,000 IOPS. Slowest SSD is normally around 25,000 IOPS.

    My IOPS on the EDGE SSDs from the other day were
    Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
    Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]

    Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?

    I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.



  • @MattSpeller said:

    Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?

    I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.

    No.

    I posted those numbers with the hope that someone would chime in with that kind of info, but no one ever did, really. I think it got lost because of the topic header.

    Later today I will repost under a separate topic, I think.



  • @BRRABill said:

    @MattSpeller said:

    Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?

    I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.

    No.

    I posted those numbers with the hope that someone would chime in with that kind of info, but no one ever did, really. I think it got lost because of the topic header.

    Later today I will repost under a separate topic, I think.

    Please do. I'll share some results from a rust array for comparison if that's helpful.



  • There is also the difference between using SSDs for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
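
    A rule-of-thumb sketch; the 10% figure is a common starting point rather than a hard rule, and the right size really depends on how big your hot working set is:

    ```python
    # Rule-of-thumb SSD cache sizing: ~10% of the backing store,
    # assuming the hot working set fits in that slice.
    def cache_size_gb(total_gb, hot_fraction=0.10):
        return total_gb * hot_fraction

    print(cache_size_gb(3000))   # 300.0 -- the 3TB example above
    print(cache_size_gb(4000))   # 400.0 -- roughly the OP's 4TB-per-server case
    ```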



  • @ardeyn said:

    There is also the difference between using SSDs for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.

    Excellent point, but very dependent on whether you've got a controller that supports it.



  • @MattSpeller said:

    @ardeyn said:

    There is also the difference between using SSDs for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.

    Excellent point, but very dependent on whether you've got a controller that supports it.

    Or software. Lots of people doing it in software too.



  • @scottalanmiller said:

    @MattSpeller said:

    @ardeyn said:

    There is also the difference between using SSDs for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.

    Excellent point, but very dependent on whether you've got a controller that supports it.

    Or software. Lots of people doing it in software too.

    I thought of that a millisecond after I hit submit heheh

    At what point would you say it's worth it to dump RAID controllers and move to software? Might be a topic for another thread or a dedicated rant.



  • Definitely a topic for another thread, but mostly it comes down to the use case. Way better to have it on the controller for a lot of reasons, but more flexible in software. But if you don't have software that supports it, you are screwed.



  • @MattSpeller said:

    @scottalanmiller said:

    @MattSpeller said:

    @ardeyn said:

    There is also the difference between using SSDs for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.

    Excellent point, but very dependent on whether you've got a controller that supports it.

    Or software. Lots of people doing it in software too.

    I thought of that a millisecond after I hit submit heheh

    At what point would you say it's worth it to dump RAID controllers and move to software? Might be a topic for another thread or a dedicated rant.

    I think the point at which you consider dumping hardware RAID controllers is the point at which you can run your business from backup power, without interruption.

    I'd say if you have a power system so robust that your norm is "software RAID", then you shouldn't even be wasting money on a hardware RAID controller.



  • @scottalanmiller

    @MattSpeller

    If you are opening a new thread, can you link me to it? I would love to get involved.



  • @scottalanmiller said:

    @MattSpeller said:

    @ardeyn said:

    There is also the difference between using SSDs for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.

    Excellent point, but very dependent on whether you've got a controller that supports it.

    Or software. Lots of people doing it in software too.

    Can a software cache work with a hardware RAID? Or do they have to be paired (hardware with hardware, software with software)?



  • @Dashrender said:

    @scottalanmiller said:

    @MattSpeller said:

    @ardeyn said:

    There is also the difference between using SSDs for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.

    Excellent point, but very dependent on whether you've got a controller that supports it.

    Or software. Lots of people doing it in software too.

    Can a software cache work with a hardware RAID? Or do they have to be paired (hardware with hardware, software with software)?

    To software, the hardware RAID is just a drive, so it has no way of knowing that it is anything special.

    That's the miracle of the block device interface.
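
    A minimal sketch of what that means in practice, assuming a Linux host where the controller's virtual disk shows up as /dev/sda (the device name is just an example; this needs root):

    ```python
    # To anything running on the OS, a hardware RAID virtual disk, a single SSD,
    # and an mdadm array are all just block devices -- indistinguishable from here.
    import os

    dev = "/dev/sda"                       # example name for the RAID virtual disk
    fd = os.open(dev, os.O_RDONLY)
    try:
        sector = os.read(fd, 512)          # first 512-byte sector, like any other disk
        print(f"{dev}: read {len(sector)} bytes -- a software cache layer sees the same thing")
    finally:
        os.close(fd)
    ```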

