
    When is an SSD a MUST HAVE for a server? Thoughts? Discussion :D

    IT Discussion
    storage ssd
    • scottalanmiller

      The fastest 8-drive RAID 0 array on 15K SAS is only around 2,000 IOPS. The slowest SSD is normally around 25,000 IOPS.
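
      For anyone who wants to sanity-check those figures, here's a rough back-of-the-envelope sketch in Python. The ~250 IOPS per drive is an assumption (a commonly cited ballpark for 15K SAS), not a measured number:

          # RAID 0 stripes I/O across every member, so aggregate random
          # IOPS scale roughly linearly with spindle count.
          PER_DRIVE_IOPS = 250   # ballpark for one 15K RPM SAS drive (assumption)
          SSD_IOPS = 25_000      # low-end SSD, per the figure above

          drives = 8
          raid0_iops = drives * PER_DRIVE_IOPS
          print(f"8x 15K SAS in RAID 0: ~{raid0_iops} IOPS")          # ~2,000
          print(f"Slowest SSD is ~{SSD_IOPS // raid0_iops}x faster")  # 12x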

      • BRRABill @scottalanmiller

        @scottalanmiller said:

        The fastest 8-drive RAID 0 array on 15K SAS is only around 2,000 IOPS. The slowest SSD is normally around 25,000 IOPS.

        My IOPS on the EDGE SSDs from the other day were
        Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
        Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]
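
        Side note: the MB/s and IOPS columns are two views of the same measurement at a fixed block size. A quick sketch of the conversion, assuming CrystalDiskMark's convention of 1 MB = 10^6 bytes and a 4 KiB = 4096-byte block:

            # IOPS = bytes moved per second / bytes per I/O request.
            def mbps_to_iops(mbps: float, block_bytes: int = 4096) -> float:
                return mbps * 1_000_000 / block_bytes

            print(mbps_to_iops(387.262))  # ~94546.4, matches the read line above
            print(mbps_to_iops(95.829))   # ~23395.8, matches the write line above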

        • DustinB3403 @BRRABill

          @BRRABill said:

          My IOPS on the EDGE SSDs from the other day were
          Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
          Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]

          So, stupidly faster than what you were used to?

          • BRRABill @DustinB3403

            @DustinB3403 said:

            So, stupidly faster than what you were used to?

            Oh yeah.

            My numbers from the regular drives in there were all over the place, but probably pretty normal.
            I posted them in this thread if anyone is interested:
            http://www.mangolassi.it/topic/7458/swapping-drive-to-another-raid-controller/2
            I posted results for different drives and also different PERC cards.
            The results don't make 100% sense to me.

            I've never tested the 10-year-old servers I am currently using. That would be interesting.

            • MattSpeller @BRRABill

              @BRRABill said:

              My IOPS on the EDGE SSDs from the other day were
              Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
              Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]

              Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?

              I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.
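
               To make the mechanism concrete: a request that spans several stripe units has to touch several drives, and on spinning rust every touched drive pays its own seek. A toy sketch of the relationship (stripe and request sizes are hypothetical, not from any particular controller):

                   import math

                   # A request spanning k stripe units touches k drives (capped at
                   # the array width); each touched spinning disk pays a seek, while
                   # an SSD has no seek penalty, so stripe size matters far less there.
                   def drives_touched(request_bytes, stripe_bytes, n_drives=8):
                       return min(math.ceil(request_bytes / stripe_bytes), n_drives)

                   for stripe_kib in (16, 64, 256):
                       k = drives_touched(512 * 1024, stripe_kib * 1024)
                       print(f"512 KiB request, {stripe_kib} KiB stripe: {k} drive(s)")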

              • BRRABill @MattSpeller

                @MattSpeller said:

                Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?

                I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.

                No.

                I posted those numbers with the hope that someone would chime in with that kind of info, but no one ever did, really. I think it got lost because of the topic header.

                Later today I will repost under a separate topic, I think.

                • MattSpeller @BRRABill

                  @BRRABill said:

                  No.

                  I posted those numbers with the hope that someone would chime in with that kind of info, but no one ever did, really. I think it got lost because of the topic header.

                  Later today I will repost under a separate topic, I think.

                  Please do. I'll share some results from a rust array for comparison, if that's helpful.

                  • ardeyn

                    There is also the difference between using SSD for caching and using it for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. That's a cost-effective alternative to going all-flash.
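
                    A minimal sketch of the cost argument, with placeholder per-GB prices purely for illustration (not real quotes):

                        capacity_gb = 3000        # the 3TB of storage from above
                        cache_ratio = 0.10        # ~10% of capacity as SSD cache, per above
                        ssd_per_gb, hdd_per_gb = 0.50, 0.05   # placeholder prices (assumption)

                        all_flash = capacity_gb * ssd_per_gb
                        hybrid = capacity_gb * hdd_per_gb + capacity_gb * cache_ratio * ssd_per_gb
                        print(f"All flash: ${all_flash:,.0f}   HDD + SSD cache: ${hybrid:,.0f}")
                        # All flash: $1,500   HDD + SSD cache: $300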

                    • MattSpeller @ardeyn

                      @ardeyn said:

                      There is also the difference between using SSD for caching and using it for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. That's a cost-effective alternative to going all-flash.

                      Excellent point, but very dependent on whether you've got a controller that supports it.

                      • scottalanmiller @MattSpeller

                        @MattSpeller said:

                        Excellent point, but very dependent on whether you've got a controller that supports it.

                        Or software. Lots of people doing it in software too.

                        • MattSpeller @scottalanmiller

                          @scottalanmiller said:

                          Or software. Lots of people doing it in software too.

                          I thought of that a millisecond after I hit submit, heheh.

                          At what point would you say it's worth it to dump raid controllers and move to software? Might be a topic for another thread or a dedicated rant.

                          • scottalanmiller

                            Definitely a topic for another thread, but mostly it comes down to the use case. It's way better to have it on the controller for a lot of reasons, though more flexible in software. And if you don't have software that supports it, you are screwed.

                            • DustinB3403 @MattSpeller

                              @MattSpeller said:

                              At what point would you say it's worth it to dump raid controllers and move to software? Might be a topic for another thread or a dedicated rant.

                              I think the point at which you would consider dumping hardware RAID controllers is the point at which you can run your business from backup power, without interruption.

                              I'd say that if you have a power system so robust that software RAID is your norm, then you shouldn't be wasting money on a hardware RAID controller at all.

                              • LAH3385 @scottalanmiller

                                @scottalanmiller

                                @MattSpeller

                                If you are opening a new thread, can you link me to it? I would love to get involved.

                                • Dashrender @scottalanmiller

                                  @scottalanmiller said:

                                  Or software. Lots of people doing it in software too.

                                  Can a software cache work with a hardware RAID? Or do they have to be paired (hardware with hardware, software with software)?

                                  • scottalanmiller @Dashrender

                                    @Dashrender said:

                                    Can a software cache work with a hardware RAID? Or do they have to be paired (hardware with hardware, software with software)?

                                    To software, the hardware RAID is just a drive, so it has no means of knowing that it is anything special.

                                    That's the miracle of the block device interface system.
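
                                    Put differently, the OS hands software one flat run of bytes no matter what is behind it. A tiny sketch (the device name is hypothetical, and reading it requires root):

                                        # Whether /dev/sdb is a single SATA disk or an 8-drive hardware
                                        # RAID array, software reads it the same way: bytes at an offset.
                                        with open("/dev/sdb", "rb") as dev:   # hypothetical device; needs root
                                            dev.seek(4096)                    # jump to byte offset 4096
                                            block = dev.read(4096)            # read one 4 KiB block
                                        print(len(block), block[:16].hex())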

                                      • Dashrender @scottalanmiller

                                      @scottalanmiller said:

                                        To software, the hardware RAID is just a drive, so it has no means of knowing that it is anything special.

                                      That's the miracle of the block device interface system.

                                      Please tell me that you're saying that - if you're using a RAID card, then the card must support the use of SSD cache - otherwise I have no clue what you're trying to say.

                                        • scottalanmiller @Dashrender

                                        @Dashrender said:

                                        Please tell me that you're saying that - if you're using a RAID card, then the card must support the use of SSD cache - otherwise I have no clue what you're trying to say.

                                        Nope, sorry 🙂

                                          I'm saying that if you use a software caching system (let's use ZFS as an example), you can attach the hardware RAID and ZFS will just think it is a single SATA or SAS drive; it has no idea that RAID is underneath. ZFS will then let you cache in memory and/or on an SSD to accelerate that RAID array, because to ZFS it's just a normal hard drive.
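
                                          For the record, a minimal sketch of that setup on Linux (device names are hypothetical): zpool create tank /dev/sdb builds a pool on top of the hardware array, which ZFS sees as one ordinary disk, and zpool add tank cache /dev/nvme0n1 attaches the SSD as an L2ARC read cache; the in-memory ARC comes along for free.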

                                          • Dashrender @scottalanmiller

                                          @scottalanmiller said:

                                            I'm saying that if you use a software caching system (let's use ZFS as an example), you can attach the hardware RAID and ZFS will just think it is a single SATA or SAS drive; it has no idea that RAID is underneath. ZFS will then let you cache in memory and/or on an SSD to accelerate that RAID array, because to ZFS it's just a normal hard drive.

                                          OK that makes sense.

                                            Do Hyper-V and ESXi support this? I'm guessing that XS and KVM do, since they can use ZFS as the filesystem for their VM storage (I'm assuming).

                                            • scottalanmiller

                                            They all do to some degree, but all very differently.
