    When is SSD a MUST HAVE for server? thoughts? Discussion :D

scottalanmiller

It's all about performance and cost. SSDs cost more per GB and less per IOPS. It all depends on what you want from your server. In a desktop, the speed difference is huge, you barely see a difference in price, and the change in maintenance pays for it alone.

In servers we often have to deal with massive amounts of storage, and SSDs are often unaffordable. But at the same time, servers often have to do things very quickly for many users, making speed important. It all depends on how the server is used. There is no handy answer.

scottalanmiller

        Here is a quick guide, however:

        • File Servers: Currently almost always Winchesters because capacity is what matters.
        • App Servers: Winchesters normally because everything gets loaded into memory and disk speed doesn't matter.
        • Database Servers: Almost always SSDs because IOPS matter and little else.
        • Terminal Servers and VDI: Almost always SSD because speed matters and capacity does not and dedupe is very effective.
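
To make the guide above concrete, here is a minimal Python sketch of the same decision table. The workload labels and the mapping are just a rough encoding of the list, not hard rules.

```python
# A rough encoding of the quick guide above; illustrative only.
def pick_storage(workload: str) -> str:
    """Map a server workload to a drive type per the quick guide."""
    guide = {
        "file":     "winchester",  # capacity per dollar wins
        "app":      "winchester",  # hot data lives in RAM anyway
        "database": "ssd",         # IOPS matter and little else
        "vdi":      "ssd",         # speed matters, dedupe shrinks capacity needs
    }
    return guide.get(workload, "depends on the workload")

print(pick_storage("database"))  # -> ssd
```
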
LAH3385 @scottalanmiller

          @scottalanmiller said:

          Here is a quick guide, however:

          • File Servers: Currently almost always Winchesters because capacity is what matters.
          • App Servers: Winchesters normally because everything gets loaded into memory and disk speed doesn't matter.
          • Database Servers: Almost always SSDs because IOPS matter and little else.
          • Terminal Servers and VDI: Almost always SSD because speed matters and capacity does not and dedupe is very effective.

I forgot to mention. The server is actually a hypervisor with a VM (Hyper-V) acting as a file server. Not sure if that makes any difference. I'm guessing it falls under VDI.

hubtechagain

Technically the answer is NEVER. It's never a must. If it were....

scottalanmiller @LAH3385

              @LAH3385 said:

              @scottalanmiller said:

              Here is a quick guide, however:

              • File Servers: Currently almost always Winchesters because capacity is what matters.
              • App Servers: Winchesters normally because everything gets loaded into memory and disk speed doesn't matter.
              • Database Servers: Almost always SSDs because IOPS matter and little else.
              • Terminal Servers and VDI: Almost always SSD because speed matters and capacity does not and dedupe is very effective.

I forgot to mention. The server is actually a hypervisor with a VM (Hyper-V) acting as a file server. Not sure if that makes any difference. I'm guessing it falls under VDI.

How would that fall under VDI? You said it was a file server; it would be a file server.

LAH3385 @scottalanmiller

                @scottalanmiller said:

                @LAH3385 said:

                @scottalanmiller said:

                Here is a quick guide, however:

                • File Servers: Currently almost always Winchesters because capacity is what matters.
                • App Servers: Winchesters normally because everything gets loaded into memory and disk speed doesn't matter.
                • Database Servers: Almost always SSDs because IOPS matter and little else.
                • Terminal Servers and VDI: Almost always SSD because speed matters and capacity does not and dedupe is very effective.

I forgot to mention. The server is actually a hypervisor with a VM (Hyper-V) acting as a file server. Not sure if that makes any difference. I'm guessing it falls under VDI.

How would that fall under VDI? You said it was a file server; it would be a file server.

Yeah, my bad. Just read more about VDI and it doesn't apply to us.

MattSpeller

• Cost of SSD
• Current IOPS held back by spinning rust
• Future IOPS requirements
• Supporting hardware (RAID controller upgrade? 3.5" to 2.5" adapters?)

                  Add all that up, so to speak. Then subtract the cost of a whizzing rust array. If cost <= benefit, purchase.
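
For anyone who wants to literally add that up, here is a back-of-the-envelope sketch. Every price below is a made-up placeholder to show the shape of the calculation, not a real quote.

```python
# Back-of-the-envelope version of the checklist above.
# All prices are hypothetical placeholders; plug in real quotes.
ssd_cost        = 8 * 400   # eight SSDs at an assumed $400 each
adapters_cost   = 8 * 15    # 3.5" to 2.5" adapters
controller_cost = 500       # RAID controller upgrade, if needed
rust_cost       = 8 * 150   # the spinning-rust array you'd buy instead

extra_cost = (ssd_cost + adapters_cost + controller_cost) - rust_cost

# "Benefit" is whatever the extra IOPS are worth to you per year.
benefit_per_year = 2000     # assumed value of not waiting on disk
print(f"extra cost: ${extra_cost}, payback: {extra_cost / benefit_per_year:.1f} years")
```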

Dashrender

Typically a single SSD will provide more IOPS than an entire 8-drive array of spinning rust will. At that point it's about bus bandwidth and price.

scottalanmiller @Dashrender

                      @Dashrender said:

Typically a single SSD will provide more IOPS than an entire 8-drive array of spinning rust will. At that point it's about bus bandwidth and price.

                      And by typical, he means "any we've ever heard of."

scottalanmiller

The fastest 8-drive RAID 0 array on 15K SAS is only around 2,000 IOPS. The slowest SSD is normally around 25,000 IOPS.
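
The 2,000 figure falls out of simple multiplication: RAID 0 read IOPS scale roughly linearly with spindle count, and ~250 IOPS per 15K SAS drive is a common rule of thumb (an assumption here, not a spec sheet value).

```python
# Why an 8-drive 15K SAS RAID 0 tops out around 2,000 IOPS.
drives = 8
iops_per_15k_sas = 250       # rule-of-thumb figure, not a measured spec
array_iops = drives * iops_per_15k_sas
print(array_iops)            # 2000

slow_ssd_iops = 25_000
print(slow_ssd_iops / array_iops)  # 12.5x, even for a "slow" SSD
```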

BRRABill @scottalanmiller

                          @scottalanmiller said:

The fastest 8-drive RAID 0 array on 15K SAS is only around 2,000 IOPS. The slowest SSD is normally around 25,000 IOPS.

                          My IOPS on the EDGE SSDs from the other day were
                          Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
                          Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]
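
Those two lines are internally consistent: throughput equals IOPS times block size, with the MB/s figure in decimal megabytes and a 4 KiB (4096-byte) block, which is how CrystalDiskMark-style tools report.

```python
# Sanity check: IOPS = (MB/s * 10**6 bytes) / 4096-byte block.
def iops_from_throughput(mb_per_s: float, block_bytes: int = 4096) -> float:
    return mb_per_s * 1_000_000 / block_bytes

print(iops_from_throughput(387.262))  # ~94546 read IOPS, matching above
print(iops_from_throughput(95.829))   # ~23396 write IOPS, matching above
```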

DustinB3403 @BRRABill

                            @BRRABill said:

                            @scottalanmiller said:

The fastest 8-drive RAID 0 array on 15K SAS is only around 2,000 IOPS. The slowest SSD is normally around 25,000 IOPS.

                            My IOPS on the EDGE SSDs from the other day were
                            Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
                            Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]

                            So, stupidly faster than what you were used to?

BRRABill @DustinB3403

                              @DustinB3403 said:

                              So, stupidly faster than what you were used to?

                              Oh yeah.

My numbers from the regular drives in there were all over the place, but probably pretty normal.
I posted them in this thread if anyone is interested:
http://www.mangolassi.it/topic/7458/swapping-drive-to-another-raid-controller/2
I posted results for different drives and also different PERC cards.
The results don't make 100% sense to me.

I've never tested the 10-year-old servers I am currently using. That would be interesting.

MattSpeller @BRRABill

                                @BRRABill said:

                                @scottalanmiller said:

The fastest 8-drive RAID 0 array on 15K SAS is only around 2,000 IOPS. The slowest SSD is normally around 25,000 IOPS.

                                My IOPS on the EDGE SSDs from the other day were
                                Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
                                Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]

                                Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?

                                I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.
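
One way to explore that interaction is to sweep the benchmark block size against a fixed array stripe size. A minimal sketch, assuming the fio benchmark tool is installed (Linux, libaio engine) and that /tmp/testfile is a disposable scratch target; the stripe size itself still has to be set in the RAID controller.

```python
# Sweep workload block sizes with fio and compare the reported IOPS.
# Assumes fio is installed and /tmp/testfile is a disposable target.
import subprocess

for bs in ("4k", "16k", "64k", "256k"):
    subprocess.run([
        "fio", "--name=bs-sweep", "--filename=/tmp/testfile",
        "--size=1G", "--rw=randread", f"--bs={bs}",
        "--ioengine=libaio", "--iodepth=32", "--direct=1",
        "--runtime=30", "--time_based", "--group_reporting",
    ], check=True)
```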

BRRABill @MattSpeller

                                  @MattSpeller said:

                                  Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?

                                  I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.

                                  No.

I posted those numbers with the hope someone would chime in with that kind of info, but no one ever did, really. I think it got lost because of the topic header.

                                  Later today I will repost under a separate topic, I think.

MattSpeller @BRRABill

                                    @BRRABill said:

                                    @MattSpeller said:

                                    Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?

                                    I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.

                                    No.

I posted those numbers with the hope someone would chime in with that kind of info, but no one ever did, really. I think it got lost because of the topic header.

                                    Later today I will repost under a separate topic, I think.

Please do; I'll share some results from a rust array for comparison if that's helpful.

ardeyn

There is also the difference between using SSD for caching and using it for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all-flash.
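
That 300GB-for-3TB figure is the ~10% rule of thumb expressed as arithmetic; the right ratio really depends on how big the hot working set is.

```python
# The ~10% cache sizing rule of thumb from the post above.
def cache_size_gb(total_storage_gb: float, hot_fraction: float = 0.10) -> float:
    return total_storage_gb * hot_fraction

print(cache_size_gb(3000))  # 3TB of storage -> ~300GB of SSD cache
```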

MattSpeller @ardeyn

                                        @ardeyn said:

There is also the difference between using SSD for caching and using it for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all-flash.

Excellent point, but very dependent on whether you've got a controller that supports it.

scottalanmiller @MattSpeller

                                          @MattSpeller said:

                                          @ardeyn said:

There is also the difference between using SSD for caching and using it for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all-flash.

Excellent point, but very dependent on whether you've got a controller that supports it.

Or software. Lots of people are doing it in software too.
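
On the software side, lvmcache and bcache on Linux and tiered Storage Spaces on Windows are common examples. The reason a small SSD tier works at all is skewed access: here is a toy LRU simulation (the 90/10 access skew is an assumption) showing a cache sized at 10% of the data absorbing most reads.

```python
# Toy illustration: with a hot working set, an LRU cache at ~10% of
# the data absorbs the bulk of reads. The 90/10 skew is assumed.
import random
from collections import OrderedDict

random.seed(1)
blocks, cache_size = 10_000, 1_000   # cache sized at 10% of the data
cache, hits, reads = OrderedDict(), 0, 100_000

for _ in range(reads):
    # 90% of reads land on the hottest 10% of blocks.
    hot = random.random() < 0.9
    b = random.randrange(blocks // 10) if hot else random.randrange(blocks)
    if b in cache:
        hits += 1
        cache.move_to_end(b)             # refresh recency
    else:
        cache[b] = True
        if len(cache) > cache_size:
            cache.popitem(last=False)    # evict least recently used

print(f"hit rate: {hits / reads:.0%}")   # most reads hit the SSD tier
```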

MattSpeller @scottalanmiller

                                            @scottalanmiller said:

                                            @MattSpeller said:

                                            @ardeyn said:

There is also the difference between using SSD for caching and using it for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all-flash.

Excellent point, but very dependent on whether you've got a controller that supports it.

Or software. Lots of people are doing it in software too.

I thought of that a millisecond after I hit submit heheh

At what point would you say it's worth it to dump RAID controllers and move to software? Might be a topic for another thread or a dedicated rant.
