    Food for thought: Fixing an over-engineered environment

    IT Discussion
    design server consolidation virtualization hyper-v storage backup

    • scottalanmiller @Dashrender

      @dashrender said in Food for thought: Fixing an over-engineered environment:

      If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

      Where "might be" = "long past due."

      • scottalanmiller @Dashrender

        @dashrender said in Food for thought: Fixing an over-engineered environment:

        @scottalanmiller said in Food for thought: Fixing an over-engineered environment:

        @eddiejennings said in Food for thought: Fixing an over-engineered environment:

        That's of course way overkill, but since I have them, would there be a reason to not team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?

        Four is the max you can consider in a load-balancing team. If you move to pure failover, you can do unlimited. Beyond four, the algorithms become so inefficient that you don't get faster, and by six, you start actually getting slower. Most people only go to two; four is the absolute max to consider. Since you have eight (how did that happen?), you might as well do four. But the rest are wasted or could be used for a different network connection entirely.

        Wouldn't this be 4 max per vNetwork in the VM host?

        Correct. If the connections are independent, you get to do another four.
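
A quick sketch of what the four-NIC case looks like in practice: the snippet below drives PowerShell's New-NetLbfoTeam cmdlet from Python to build a four-member team on a Hyper-V host. The team name, adapter names, teaming mode, and load-balancing algorithm are placeholders and illustrative choices, not a recommendation for this environment.

```python
# Minimal sketch (assumptions noted above): create a four-member LBFO team by
# calling PowerShell from Python. Adapter and team names are hypothetical; check
# Get-NetAdapter for the real adapter names first.
import subprocess

nics = ["NIC1", "NIC2", "NIC3", "NIC4"]  # four members, the practical ceiling discussed above

command = (
    "New-NetLbfoTeam -Name 'Team1' "
    f"-TeamMembers {','.join(nics)} "
    "-TeamingMode SwitchIndependent "
    "-LoadBalancingAlgorithm Dynamic "
    "-Confirm:$false"
)

# check=True raises CalledProcessError if the team creation fails.
subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)
```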

        • EddieJennings @Dashrender

          @dashrender said in Food for thought: Fixing an over-engineered environment:

          If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

          I don't need more than 1 Gb judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.

          • scottalanmiller @EddieJennings

            @eddiejennings said in Food for thought: Fixing an over-engineered environment:

            @dashrender said in Food for thought: Fixing an over-engineered environment:

            If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

            I don't need more than 1 Gb judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.

            That's not how things work. Teaming is for bandwidth; not teaming is for latency. Working in banks, we specifically avoided teaming because it increases latency, slowing down network traffic on a per-packet basis. Everything is a trade-off, or there wouldn't be options.

            It's like adding more memory to your server: there's more room for things in memory, but also more memory for the CPU to manage, and that adds load to the server, which turns into latency for processes.

            • EddieJennings @scottalanmiller

              @scottalanmiller said in Food for thought: Fixing an over-engineered environment:

              @eddiejennings said in Food for thought: Fixing an over-engineered environment:

              @dashrender said in Food for thought: Fixing an over-engineered environment:

              If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

              I don't need more than 1 Gb judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.

              That's not how things work. Teaming is for bandwidth; not teaming is for latency. Working in banks, we specifically avoided teaming because it increases latency, slowing down network traffic on a per-packet basis. Everything is a trade-off, or there wouldn't be options.

              It's like adding more memory to your server: there's more room for things in memory, but also more memory for the CPU to manage, and that adds load to the server, which turns into latency for processes.

              That makes sense. Performance was a poor choice of words.

              • JaredBusch

                I never use IPMI.

                • Dashrender @EddieJennings

                  @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                  @dashrender said in Food for thought: Fixing an over-engineered environment:

                  If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

                  I don't need more than 1 Gb judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.

                  This is not only bad for the reasons Scott said, but it's also a waste of switch ports and resources.

                  If you only need 1 Gb, then I'd remove the card (less power use) and only use two onboard NICs.

                  • EddieJennings @Dashrender

                    @dashrender

                    This is not only bad for the reasons Scott said, but it's also a waste of switch ports and resources.

                    If you only need 1 Gb, then I'd remove the card (less power use) and only use two onboard NICs.

                    The whole situation is a waste of resources. I'm looking to see how to best utilize them.

                    • Dashrender @JaredBusch

                      @jaredbusch said in Food for thought: Fixing an over-engineered environment:

                      I never use IPMI.

                      @JaredBusch thought IPMI was something special for Hyper-V, not that you were talking about the iDRAC-like interface - he stands corrected and uses the iDRAC-like interface as much as he can.

                      • Dashrender @EddieJennings

                        @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                        @dashrender

                        This is not only bad for the reasons Scott said, but it's also a waste of switch ports and resources.

                        If you only need 1 Gb, then I'd remove the card (less power use) and only use two onboard NICs.

                        The whole situation is a waste of resources. I'm looking to see how to best utilize them.

                        Right, so for this part, the best would likely be two 1 Gb (onboard) NICs in a team.

                        • EddieJennings @JaredBusch

                          @jaredbusch said in Food for thought: Fixing an over-engineered environment:

                          I never use IPMI.

                          I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:

                          • scottalanmiller @EddieJennings

                            @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                            @jaredbusch said in Food for thought: Fixing an over-engineered environment:

                            I never use IPMI.

                            I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:

                            I've had very good luck with it.

                            • Dashrender @EddieJennings

                              @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                              @jaredbusch said in Food for thought: Fixing an over-engineered environment:

                              I never use IPMI.

                              I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:

                              What doesn't it give you that you want?

                              • EddieJennings @Dashrender

                                @dashrender said in Food for thought: Fixing an over-engineered environment:

                                @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                @jaredbusch said in Food for thought: Fixing an over-engineered environment:

                                I never use IPMI.

                                I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:

                                What doesn't it give you that you want?

                                I might have to re-evaluate it. I've only used the IPMI View Java app for the virtual KVM console. I'm looking at its web portal now, and it looks pretty good. I would like a way to see RAID health status and configuration, but perhaps that's not a reasonable want.

                                • scottalanmiller @EddieJennings

                                  @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                  @dashrender said in Food for thought: Fixing an over-engineered environment:

                                  @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                  @jaredbusch said in Food for thought: Fixing an over-engineered environment:

                                  I never use IPMI.

                                  I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:

                                  What doesn't it give you that you want?

                                  I might have to re-evaluate it. I've only used the IPMI View Java app for the virtual KVM console. I'm looking at its web portal now, and it looks pretty good. I would like a way to see RAID health status and configuration, but perhaps that's not a reasonable want.

                                  It is not, since RAID is not part of the hardware that the IPMI sees.

                                  • scottalanmiller @EddieJennings

                                    @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                    I might have to re-evaluate it. I've only used the IPMI View Java app for the virtual KVM console.

                                    IPMI is a protocol; if the issue is that you don't like specific tools for it, that's a tooling issue.
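
To illustrate the "IPMI is a protocol" point, the same data the vendor's portal exposes can be pulled with a generic IPMI client. Below is a rough sketch using ipmitool driven from Python; the BMC address and credentials are placeholders.

```python
# Rough sketch: query a server's BMC over IPMI with ipmitool, independent of any
# vendor web portal or Java viewer. Host and credentials below are placeholders.
import subprocess

BMC_HOST = "192.0.2.10"   # hypothetical IPMI address
BMC_USER = "admin"        # hypothetical credentials
BMC_PASS = "changeme"

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the BMC and return its output."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args],
        capture_output=True, text=True, check=True)
    return result.stdout

print(ipmi("chassis", "status"))   # power state and last power event
print(ipmi("sdr", "elist"))        # temperature, fan, and voltage sensor readings
print(ipmi("sel", "list"))         # hardware event log entries
```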

                                    • EddieJennings @scottalanmiller

                                      @scottalanmiller said in Food for thought: Fixing an over-engineered environment:

                                      @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                      @dashrender said in Food for thought: Fixing an over-engineered environment:

                                      @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                      @jaredbusch said in Food for thought: Fixing an over-engineered environment:

                                      I never use IPMI.

                                      I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:

                                      What doesn't it give you that you want?

                                      I might have to re-evaluate it. I've only used the IPMI View Java app for the virtual KVM console. I'm looking at its web portal now, and it looks pretty good. I would like a way to see RAID health status and configuration, but perhaps that's not a reasonable want.

                                      It is not, since RAID is not part of the hardware that the IPMI sees.

                                      That's what I figured.

                                      • Dashrender @EddieJennings

                                        @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                        @dashrender said in Food for thought: Fixing an over-engineered environment:

                                        @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                        @jaredbusch said in Food for thought: Fixing an over-engineered environment:

                                        I never use IPMI.

                                        I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:

                                        What doesn't it give you that you want?

                                        I might have to re-evaluate it. I've only used the IPMI View Java app for the virtual KVM console. I'm looking at its web portal now, and it looks pretty good. I would like a way to see RAID health status and configuration, but perhaps that's not a reasonable want.

                                        Aww - yeah, I have no idea whether that's a reasonable want or not. I've always just used a vendor-supplied app inside Windows to see the status of the RAID controller. Of course, with virtualization I haven't dug into how that works; connecting directly to the hardware happens, I assume, via something in the hypervisor, etc.

                                        • JaredBusch

                                          On Dell servers, the iDRAC does show the RAID controller status, as long as you use their PERC cards, but that is designed into the ecosystem.

                                          But I do not use iDRAC as my go-to. I use Dell OMSA installed on Hyper-V Server as my daily-driver tool.
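
As a small illustration of the OMSA-as-daily-driver approach, a monitoring script can ask omreport for virtual-disk health from inside the OS, where the RAID controller is visible. The controller index and the output fields filtered below are assumptions about a typical single-PERC setup.

```python
# Sketch: check RAID virtual-disk health via Dell OMSA's omreport CLI (the tool
# mentioned above). Assumes OMSA is installed and a single controller at index 0.
import subprocess

result = subprocess.run(
    ["omreport", "storage", "vdisk", "controller=0"],
    capture_output=True, text=True, check=True)

# Print only the fields an alerting script would typically care about.
for line in result.stdout.splitlines():
    if line.strip().startswith(("ID", "Name", "Status", "State")):
        print(line.strip())
```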

                                          • EddieJennings

                                            I'm finally getting to the point where I'm planning how all of this will work. This is the end goal for the server hardware.

                                            Hyper-V Host 1 (former physical Server 2 - SQL Server)

                                            • Will be running the new production VMs
                                            • Six S3500 SSDs configured in RAID 5. Two SSDs will be taken from former Server 1, and two SSDs taken from former Server 3.

                                            Hyper-V Host 2 (former physical Server 1 - IIS server)

                                            • Will most likely be running the Veeam VM and storing backups
                                            • Four Seagate Enterprise 4 TB HDDs in RAID 10 (currently reviewing storage needs for backups, so this could change)

                                            Hyper-V Host 3 (former physical Server 3 - the Yosemite backup server, Redis, and host for the Postfix VM)

                                            • Purpose to be determined
                                            • Four S3700 SSDs configured in RAID 5. These SSDs will be taken from former Server 2.

                                            Since I'll be swapping hard drives between servers, there's going to be downtime, so I'm thinking through how to keep it to a minimum. The plan below isn't set in stone; it's just ideas.

                                            I would start by copying the data used by the IIS virtual folders to an external device. Once that initial copy is done, I would take the production systems offline. I would take a backup of SQL Server and copy it to the external device, as well as copy whatever files are new or have changed in the IIS virtual folders (I love robocopy).
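
A rough sketch of that two-pass copy, with hypothetical source and destination paths: the first robocopy run happens while production is still online, and the second run, after the outage starts, only has to move files that are new or changed.

```python
# Sketch of the two-pass robocopy approach described above. Paths are placeholders.
import subprocess

SRC = r"D:\inetpub\virtual_folders"     # hypothetical IIS content path on Server 1
DST = r"F:\migration\virtual_folders"   # hypothetical external drive

def copy_pass() -> None:
    # /MIR mirrors the tree, /Z makes copies restartable, /R and /W keep retries
    # short, /LOG+ appends to a log file. Robocopy exit codes below 8 mean success.
    rc = subprocess.run(
        ["robocopy", SRC, DST, "/MIR", "/Z", "/R:2", "/W:5", r"/LOG+:C:\migration.log"]
    ).returncode
    if rc >= 8:
        raise RuntimeError(f"robocopy reported failures (exit code {rc})")

copy_pass()   # pass 1: bulk copy while production is still up
# ... take production offline, back up SQL Server ...
copy_pass()   # pass 2: only the delta moves during the outage window
```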

                                            Next, I would do all of the disk swapping from above, install and patch Hyper-V on each of the systems, and configure networking. Then I would create the production VMs, configure and patch the servers, copy the data back from the external storage, and restore the SQL Server backup.
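
For the SQL Server piece, here is a minimal sketch of scripting the backup on the old server and the restore on the rebuilt VM with sqlcmd. The database name, server names, and file paths are placeholders, and a simple full backup with no file relocation is assumed.

```python
# Sketch: drive the SQL Server backup (before the disk swap) and the restore (on
# the new VM) with sqlcmd. All names and paths below are hypothetical.
import subprocess

def run_sql(server: str, query: str) -> None:
    # -E uses Windows authentication; -b makes sqlcmd exit non-zero on SQL errors.
    subprocess.run(["sqlcmd", "-S", server, "-E", "-b", "-Q", query], check=True)

# On the old physical SQL Server, before taking it down:
run_sql("OldSqlServer",
        r"BACKUP DATABASE [ProductionDB] TO DISK = N'F:\migration\ProductionDB.bak' "
        r"WITH CHECKSUM, INIT")

# On the new SQL Server VM, once Hyper-V and the guest are built:
run_sql("NewSqlVM",
        r"RESTORE DATABASE [ProductionDB] FROM DISK = N'D:\restore\ProductionDB.bak' "
        r"WITH RECOVERY")
```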

                                            There are probably better ways of doing this, but articulating the above helps my thought process. I had a text document with another plan, which I subsequently deleted, because as I was writing and thinking I realized how flawed it was.
