    Nesting Hypervisors - when would you do this?

    IT Discussion
    32 Posts 11 Posters 5.9k Views
    • Dashrender @Jason

      @Jason said:

      @Dashrender said:

      @Jason said:

      @Dashrender said:

      @Jason said:

      @DustinB3403 said:

      I agree with Jared; you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).

      I.e., XenServer with several VMs running your other hypervisors.

      Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.

      Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.

      Is this because you don't want to separate it at the hardware level?

      Why would you? There's no reason to when you can have a large pool of datacenter services; you aren't going to separate the hardware, SANs, etc. out into separate pools for each purpose.

      You've lost me - what are you gaining by using nested hypervisors on the same hardware? What separation do you gain that the VMs shouldn't already have because they are VMs?

      Not sure where you get that there are layers of abstraction; you can access the hypervisor from a VM on itself and therefore get to any other VM on the hypervisor. It requires some knowledge to do, but it can be done.

      Oh?

      • scottalanmiller

        I think that it makes sense for a lab environment. It's not for production testing, but for learning and stuff it is fine.

        For production, I can think of only one case... when you have to trick a vendor.

        Vendor A says "We only support that on hypervisor X" and Vendor B only supports hypervisor Y. So you run your production environment and give each vendor what they ask for. If they say "Sorry, no support for you, we only support X", you put them on X and they can't say anything.

        • scottalanmiller @DustinB3403

          @DustinB3403 said:

          @Dashrender but it would only be a partial test at best.

          You couldn't, for example, test a faulty network cord attached to one of the virtualized hypervisors. The best you could do is kill the VM and see what the others do.

          You could disable the NIC.
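Scott's point can be sketched from the outer host: on a libvirt-based hypervisor you can force a guest's virtual link down without killing the VM, which is a closer stand-in for a pulled cable than powering the guest off. The VM name `nested-hv1` and interface `vnet0` below are hypothetical; adjust for your setup.

```shell
# Hypothetical names: replace with your own domain and interface.
VM=nested-hv1
IFACE=vnet0

if command -v virsh >/dev/null 2>&1; then
  # Take the virtual link down; the guest stays running but loses carrier,
  # roughly simulating an unplugged cable on the nested hypervisor.
  virsh domif-setlink "$VM" "$IFACE" down
  virsh domif-getlink "$VM" "$IFACE"    # should report the link as "down"
  virsh domif-setlink "$VM" "$IFACE" up # restore when the test is done
else
  echo "virsh not found; this sketch assumes a libvirt-based outer hypervisor"
fi
```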

          • scottalanmiller @BRRABill

            @BRRABill said:

            What about installing the hypervisor in the cloud? Like if you wanted to test the copy/replication of something like XS to one of the VPS cloud providers? (Vultr accepts ISO uploads, right?)

            You'd be installing on top of their hypervisor, right?

            Theoretically this works. I don't know of anyone that allows it. And cloud systems rarely (ever?) get multiple IPs, so this gets rather difficult to actually use.

            • scottalanmiller @BRRABill

              @BRRABill said:

              @Dashrender said:

               I think Azure and AWS both support installing a hypervisor into the VPS, but not for production.

              If Vultr accepts an ISO it would also support this, right?

              No relationship.

              • scottalanmiller @stacksofplates

                @johnhooks said:

                 Another use would be containerization with zones, jails, or LXC. It's a different type of virtualization, but it can be nested in a full VM and even within containers.

                This is more common but is what I'd call semi-nesting. Those aren't VMs so it isn't VM on VM, but something akin to a VM on a VM.
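One way to see this semi-nesting from inside a guest: on systemd-based Linux, `systemd-detect-virt` reports the VM layer and the container layer separately, so an LXC container inside a KVM guest shows both. A minimal sketch:

```shell
# Report both virtualization layers from inside a guest.
# systemd-detect-virt exits non-zero when it detects nothing, hence the fallbacks.
if command -v systemd-detect-virt >/dev/null 2>&1; then
  vm=$(systemd-detect-virt --vm 2>/dev/null) || vm="none"               # e.g. "kvm"
  container=$(systemd-detect-virt --container 2>/dev/null) || container="none"  # e.g. "lxc"
else
  vm="unknown"; container="unknown"
fi
echo "vm layer: $vm / container layer: $container"
```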

                • scottalanmiller @Dashrender

                  @Dashrender said:

                  @johnhooks said:

                   Another use would be containerization with zones, jails, or LXC. It's a different type of virtualization, but it can be nested in a full VM and even within containers.

                   Those are designed to work like that though, aren't they? The containerization doesn't need to know the full spec of the CPU, for example, whereas a full hypervisor does.

                  Some are, some are not. If the technology requires hardware virtualization assistance then you still need nesting support.
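On a Linux/KVM host, whether that nesting support is available can be checked from the module parameters. This is a KVM-specific sketch; the path differs between Intel and AMD, and other hypervisors have their own switches.

```shell
# Check KVM nested virtualization support on the outer host.
# Intel and AMD expose it under different module names.
nested="unavailable"
for f in /sys/module/kvm_intel/parameters/nested \
         /sys/module/kvm_amd/parameters/nested; do
  if [ -r "$f" ]; then
    nested=$(cat "$f")   # "Y" or "1" means guests may run their own hypervisor
    break
  fi
done
echo "nested virtualization: $nested"
```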

                  • dafyre

                    I have actually tested this. When I first started here, they wanted me to build a VDI farm, but they didn't have any hardware for me to do it on. I built my POC with Hyper-V in VMware.

                    There are some security options that may have to be disabled on the VMware vSwitch, as well as on the physical switch, related to MAC spoofing.
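On ESXi, the options dafyre describes live in the standard vSwitch security policy. A hypothetical sketch using `esxcli` on the ESXi host itself; `vSwitch0` is an assumed name, and you should verify the flags against your version's documentation:

```shell
VSWITCH=vSwitch0   # assumed vSwitch name

if command -v esxcli >/dev/null 2>&1; then
  # Allow guests (here, a nested hypervisor) to change and forge MAC
  # addresses, which traffic from nested VMs effectively requires.
  esxcli network vswitch standard policy security set \
    --vswitch-name="$VSWITCH" \
    --allow-mac-change=true \
    --allow-forged-transmits=true
  # Show the resulting policy to confirm.
  esxcli network vswitch standard policy security get --vswitch-name="$VSWITCH"
else
  echo "esxcli not found; run this in an ESXi shell"
fi
```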

                    I was surprised, because the performance was actually not terrible, until you tried to do things like Paint or running a web browser with hardware acceleration enabled, lol.

                    If they actually had graphics processors on these VMware servers, it likely would have worked just fine.

                    • Dashrender @dafyre

                      @dafyre said:

                      If they actually had graphics processors on these VMware servers, it likely would have worked just fine.

                      Something like 30% of Amazon's cloud does now.

                      • DenisKelley

                        I've actually been using a nested hypervisor for over a month now as I run through the MS 40-710 training. I've got it running on a Dell T110 that is currently running ESXi 5.1, and nested on top of that are 2012 R2 servers and Hyper-V servers, so I can learn in a lab environment. I'm using Pluralsight videos and can play along with the VMs as I go through the training.
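For reference, exposing hardware virtualization to a guest on ESXi of that era is commonly done with per-VM `.vmx` settings along these lines (commonly cited for ESXi 5.1+; verify against your version's documentation before use):

```
vhv.enable = "TRUE"            # expose VT-x/AMD-V to the guest
hypervisor.cpuid.v0 = "FALSE"  # hide the hypervisor CPUID bit so Hyper-V will install
```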

                        • bbigford

                          Strictly out of my own curiosity... In a lab it's obviously cool to see what you can do, so I'd say there it wouldn't matter.

                          In production, why would you want to run different platforms? Why not go all Xen or all Hyper-V so things are more consistent?

                          • Nic

                            (Embedded YouTube video)
