    Hyper-V Failover Cluster FAILURE(S)

    IT Discussion
    • Dashrender

      6 hosts - a failover cluster with local storage might be challenging; I don't really know much about it.

      I'm sure @scottalanmiller can give some info.

      Depending on the age of the hosts, you might find yourself much better off with a two-host setup with internal storage and something like StarWind vSAN. 7 TB of internal storage shouldn't be that hard to come by - though the performance needed might require some caching, etc.

      • Dashrender @Kyle

        @kyle said in Hyper-V Failover Cluster FAILURE(S):

        @scottalanmiller, is it possible to run local and SAN storage at the same time? Each of the 6 nodes has 4 empty slots.

        Yes.

        Usually I'd load out the servers with tons of storage, but they just have 2 drives in each server, and they're only 146 GB 15K SAS drives run in RAID 1.

        I'm assuming that's where Hyper-V is installed. FYI, this is a waste of 15K drives - the hypervisor rarely touches the drives once it's up and running.

        Hyper-V is installed on the server nodes on 2 disks, using Server 2012 Datacenter. All VMs are stored in 1 LUN on the SAN.

        Sure, this is a typical setup.

        • Kyle @Dashrender

          @dashrender said in Hyper-V Failover Cluster FAILURE(S):

          Sure, this is a typical setup.

          I would think hosting all VMs, critical and not-so-critical, in separate LUNs - and hell, across 2 separate SANs with replication.

          We are a 24/7 operation, and downtime is a huge no-no.

          • Dashrender @Kyle

            @kyle said in Hyper-V Failover Cluster FAILURE(S):

            I would think hosting all VMs, critical and not-so-critical, in separate LUNs - and hell, across 2 separate SANs with replication.

            We are a 24/7 operation, and downtime is a huge no-no.

            If downtime is a no-no, why separate the VMs?

            Do you have one or two SANs?

            • Kyle @Dashrender

              @dashrender said in Hyper-V Failover Cluster FAILURE(S):

              If downtime is a no-no, why separate the VMs?

              Do you have one or two SANs?

              We have 2, but the 2nd is for SQL data.

              • Dashrender @Kyle

                @kyle said in Hyper-V Failover Cluster FAILURE(S):

                We have 2, but the 2nd is for SQL data.

                So you don't have real HA anyhow - i.e., if either SAN fails, the VMs on the failed SAN are down.

                • Kyle @Dashrender

                  @dashrender said in Hyper-V Failover Cluster FAILURE(S):

                  So you don't have real HA anyhow - i.e., if either SAN fails, the VMs on the failed SAN are down.

                  Exactly!

                  There's also a SQL server that's still running on bare metal.

                  • Dashrender @Kyle

                    @kyle said in Hyper-V Failover Cluster FAILURE(S):

                    Exactly! There's also a SQL server that's still running on bare metal.

                    So you have at least 2 SQL servers - one bare metal and one VM?

                    • Kyle @Dashrender

                      @dashrender said in Hyper-V Failover Cluster FAILURE(S):

                      So you have at least 2 SQL servers? One bare metal and one VM?

                      3 SQL servers: 2 VMs and 1 bare metal that is going to be migrated in the next month.

                      • Kyle @Dashrender

                        @dashrender said in Hyper-V Failover Cluster FAILURE(S):

                        So you have at least 2 SQL servers? One bare metal and one VM?

                        The entire environment is bad practice after bad practice.

                        • Obsolesce

                          What are your end goals here exactly?

                          To just fix the error/main problem and be done?

                          To achieve true HA?

                          If not HA, then to actually set things up in a practical way that makes sense and is good for the business?

                          Host redundancy?

                          Network redundancy?

                          Host Storage / VM redundancy?

                          • Kyle @Obsolesce

                            @tim_g said in Hyper-V Failover Cluster FAILURE(S):

                            What are your end goals here exactly?

                            Fix the I/O issue to start. Decommissioning all the old bare metal is on the list, but it's taking time, as we have to work with vendors, and some upgrades are contingent on future upgrades that are due soon.

                            • Kyle

                              We rolled out Class B networking; 48 hours later we made the IP changes on the DFS farm, and 2 hours after that we had an identical Event ID 5120, where the cluster lost its connection to the SAN.

                              • Obsolesce @Kyle

                                @kyle said in Hyper-V Failover Cluster FAILURE(S):

                                We rolled out Class B networking; 48 hours later we made the IP changes on the DFS farm, and 2 hours after that we had an identical Event ID 5120, where the cluster lost its connection to the SAN.

                                DFS? What exactly are you using DFS for in relation to the cluster?

                                • Kyle @Obsolesce

                                  @tim_g said in Hyper-V Failover Cluster FAILURE(S):

                                  DFS? What exactly are you using DFS for in relation to the cluster?

                                  Nothing, as far as the clustering goes. But that's not to say the MSP didn't change something else when they made the IP address changes on the DFS server after going from a /24 to a /16. I've read several things about subnetting causing auto-pause issues in a Hyper-V environment, and 2 huge IP changes were made in the environment in a short amount of time.

                                  • scottalanmiller @Kyle

                                    @kyle said in Hyper-V Failover Cluster FAILURE(S):

                                    We rolled out Class B networking; 48 hours later we made the IP changes on the DFS farm, and 2 hours after that we had an identical Event ID 5120, where the cluster lost its connection to the SAN.

                                    But that never happened before?

                                    An issue here is that changing the networking means a lot of things were changed, not just the subnet mask size.

                                    • Kyle @scottalanmiller

                                      @scottalanmiller said in Hyper-V Failover Cluster FAILURE(S):

                                      An issue here is that changing the networking means a lot of things were changed, not just the subnet mask size.

                                      That's another issue too. Some things that received the new /16 addresses still carry the 255.255.255.0 mask instead of 255.255.0.0, and they said it didn't matter when I questioned them about it.

                                      • scottalanmiller @Kyle

                                        @kyle said in Hyper-V Failover Cluster FAILURE(S):

                                        Some things that received the new /16 addresses still carry the 255.255.255.0 mask instead of 255.255.0.0, and they said it didn't matter when I questioned them about it.

                                        Um, that means they are NOT /16. 255.255.0.0 and /16 are the same thing, just two different ways to write it. It means they didn't do the /16 as they said, they knew it, and they lied about it not mattering. It's true that at times you can have half-broken smaller networks inside of larger ones, but they are broken, and not all of your networking will work when it needs to.

                                        So don't say that they received /16 addressing, because they did not; they are /24 hosts on a broken network that can only communicate with a small fraction of the /16.

                                        That's just broken, so that might easily be the issue.
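                                        The mismatch is easy to demonstrate with Python's ipaddress module (a quick sketch; the 172.16.x.x addresses here are made-up examples, not taken from this environment):

```python
import ipaddress

# A host that was "moved to the /16" but kept its old /24 mask.
host = ipaddress.ip_interface("172.16.10.25/24")  # mask 255.255.255.0

# A peer correctly configured on the /16.
peer = ipaddress.ip_interface("172.16.200.9/16")  # mask 255.255.0.0

# From the misconfigured host's point of view, the peer is NOT on its
# local network, so it sends that traffic to its default gateway instead
# of delivering it directly on the wire.
print(peer.ip in host.network)  # False

# The peer, however, believes the host IS local and will ARP for it
# directly - an asymmetry that makes some pairs work and others fail.
print(host.ip in peer.network)  # True
```

That asymmetry is exactly the "half broken" state described above: hosts in the same /24 slice still talk, while traffic to the rest of the /16 depends entirely on whether a router happens to patch over the bad mask.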

                                        • Kyle @scottalanmiller

                                          @scottalanmiller said in Hyper-V Failover Cluster FAILURE(S):

                                          That's just broken, so that might easily be the issue.

                                          I know. But being the FNG, I'm not allowed to make changes to anything; I'm only allowed to view and make suggestions that have to be approved. The SAN also points to 192.168.x.x DNS addresses, which I believe could be causing issues as well.

                                          • Kyle

                                            @scottalanmiller The SAN's two 10G connections - should those be on the same subnet as a best practice?
