Nesting Hypervisors - when would you do this?
-
@Dashrender said:
I think Azure and AWS both support installing a hypervisor into a VPS, but not for production.
If Vultr accepts an ISO it would also support this, right?
-
@DustinB3403 said:
@Dashrender but it would only be a partial test at best.
You couldn't, for example, test a faulty network cable attached to one of the virtualized hypervisors. The best you could do is kill the VM and see what the others do.
What? A faulty cable? What does that mean? If you're simply talking about having one of the VM hosts fall off the network, we found that option this morning in XO, per your request.
If you're talking about an intermittent cable - I suppose one could test this, but I've never heard of a test like that.
The same goes for power loss - you can simply kill the VM to simulate power loss to one VM, as if a power plug was pulled.
Let's see, what other physical situations do we need to worry about? I suppose if you're using shared storage, that would be another VM inside the system, at the same level as the first-level nested VMs - again, just kill the VM, or kill the network connection to that VM; both do the same thing as far as the failover situation is concerned.
Am I missing a physical situation that can't be simulated?
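For what it's worth, in a KVM/libvirt lab you can get closer to a "pulled cable" than just killing the VM: libvirt can force a guest's virtual link down while the guest keeps running. A rough sketch - the domain name `nested-hv1` and interface `vnet0` are hypothetical placeholders, substitute your own:

```shell
#!/bin/sh
# Simulate an unplugged cable on a nested hypervisor's uplink:
# the guest stays powered on, but its NIC loses link, which is
# closer to a real cable pull than powering the VM off.
DOM=nested-hv1    # hypothetical domain name - substitute your own
IF=vnet0          # hypothetical interface name - substitute your own

if command -v virsh >/dev/null 2>&1; then
  virsh domif-setlink "$DOM" "$IF" down   # link down: "cable pulled"
  sleep 30                                # watch how the other nodes react
  virsh domif-setlink "$DOM" "$IF" up     # link restored: "cable replugged"
else
  echo "virsh not found; run this on the libvirt host"
fi
```

That gives you a host that is still alive but unreachable, which behaves differently from a dead host in some failover setups.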
-
@BRRABill said:
@Dashrender said:
I think Azure and AWS both support installing a hypervisor into a VPS, but not for production.
If Vultr accepts an ISO it would also support this, right?
Not necessarily. If the hypervisor they are using doesn't support nested hypervisors, then no, they wouldn't support it.
-
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
I.e., XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
-
@Jason said:
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
I.e., XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
Is this because you don't want to separate it at the hardware level?
-
@Dashrender said:
@Jason said:
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
I.e., XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
Is this because you don't want to separate it at the hardware level?
Why would you? There's no reason to when you can have a large pool of datacenter services; you aren't going to separate the hardware, SANs, etc. out into separate pools for each purpose.
This is the same way you get a hosted cloud. If you buy hosted ESXi, Hyper-V, etc., it's all nested.
-
@Jason said:
@Dashrender said:
@Jason said:
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
I.e., XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
Is this because you don't want to separate it at the hardware level?
Why would you? There's no reason to when you can have a large pool of datacenter services; you aren't going to separate the hardware, SANs, etc. out into separate pools for each purpose.
You've lost me - what are you gaining by using nested hypervisors on the same hardware? What separation do you gain that the VMs shouldn't already have because they are VMs?
-
Another use would be containerization with zones, jails, or LXC. It's a different type of virtualization, but it can be nested in a full VM and even within containers.
-
@johnhooks said:
Another use would be containerization with zones, jails, or LXC. It's a different type of virtualization, but it can be nested in a full VM and even within containers.
Those are designed to work like that though, aren't they? The containerization doesn't need to know the full spec of the CPU, for example, whereas a full hypervisor does.
-
@Dashrender said:
@Jason said:
@Dashrender said:
@Jason said:
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
I.e., XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
Is this because you don't want to separate it at the hardware level?
Why would you? There's no reason to when you can have a large pool of datacenter services; you aren't going to separate the hardware, SANs, etc. out into separate pools for each purpose.
You've lost me - what are you gaining by using nested hypervisors on the same hardware? What separation do you gain that the VMs shouldn't already have because they are VMs?
Not sure where you get that they have layers of abstraction. You can access the hypervisor from a VM running on it, and therefore get to any other VM on that hypervisor. It requires some knowledge to do, but it can be done.
-
@Jason said:
@Dashrender said:
@Jason said:
@Dashrender said:
@Jason said:
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
I.e., XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
Is this because you don't want to separate it at the hardware level?
Why would you? There's no reason to when you can have a large pool of datacenter services; you aren't going to separate the hardware, SANs, etc. out into separate pools for each purpose.
You've lost me - what are you gaining by using nested hypervisors on the same hardware? What separation do you gain that the VMs shouldn't already have because they are VMs?
Not sure where you get that they have layers of abstraction. You can access the hypervisor from a VM running on it, and therefore get to any other VM on that hypervisor. It requires some knowledge to do, but it can be done.
Oh?
-
I think that it makes sense for a lab environment. It's not for production testing, but for learning and stuff it is fine.
For production, I can think of only one case... when you have to trick a vendor.
Vendor A says "We only support that on hypervisor X" and Vendor B only supports hypervisor Y. So you run your production environment and give each vendor what they ask for. If they say "Sorry, no support for you, we only support X", you put them on X and they can't say anything.
-
@DustinB3403 said:
@Dashrender but it would only be a partial test at best.
You couldn't, for example, test a faulty network cable attached to one of the virtualized hypervisors. The best you could do is kill the VM and see what the others do.
You could disable the NIC.
-
@BRRABill said:
What about installing the hypervisor in the cloud? Like if you wanted to test the copy/replication of something like XS to one of the VPS cloud providers? (Vultr accepts ISO uploads, right?)
You'd be installing on top of their hypervisor, right?
Theoretically this works. I don't know of anyone that allows it. And cloud systems rarely (if ever) get multiple IPs, so this gets rather difficult to actually use.
-
@BRRABill said:
@Dashrender said:
I think Azure and AWS both support installing a hypervisor into a VPS, but not for production.
If Vultr accepts an ISO it would also support this, right?
No relationship.
-
@johnhooks said:
Another use would be containerization with zones, jails, or LXC. It's a different type of virtualization, but it can be nested in a full VM and even within containers.
This is more common, but it's what I'd call semi-nesting. Those aren't VMs, so it isn't VM on VM, but something akin to a VM on a VM.
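This is also why container-in-VM "just works" where VM-in-VM often doesn't: a container shares its host's (or guest's) kernel instead of trapping into the CPU's virtualization extensions. A minimal sketch of checking this from inside a Linux guest - assuming Docker is installed there, and the image name is just illustrative:

```shell
#!/bin/sh
# Containers don't need VT-x/AMD-V, so they run fine inside a VM even
# when nested hardware virtualization is unavailable to the guest.
if command -v systemd-detect-virt >/dev/null 2>&1; then
  systemd-detect-virt || true   # prints e.g. "kvm" inside a full VM, "none" on bare metal
fi
if command -v docker >/dev/null 2>&1; then
  docker run --rm alpine uname -r   # container reports the guest VM's kernel
else
  echo "docker not installed; this is just a sketch"
fi
```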
-
@Dashrender said:
@johnhooks said:
Another use would be containerization with zones, jails, or LXC. It's a different type of virtualization, but it can be nested in a full VM and even within containers.
Those are designed to work like that though, aren't they? The containerization doesn't need to know the full spec of the CPU, for example, whereas a full hypervisor does.
Some are, some are not. If the technology requires hardware virtualization assistance, then you still need nesting support.
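That's easy to check from inside the guest: a full hypervisor needs the vmx (Intel) or svm (AMD) CPU flag exposed to it, while a container doesn't care. A quick Linux check - the kvm_intel module path applies to Intel KVM hosts only (kvm_amd on AMD):

```shell
#!/bin/sh
# Inside the guest VM: are hardware virtualization extensions exposed?
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
  echo "vmx/svm present: a full hypervisor can run nested here"
else
  echo "no vmx/svm flag: only containers will work at this level"
fi
# On a KVM host, nesting must also be enabled in the kernel module;
# 'Y' or '1' means enabled (use kvm_amd on AMD hosts).
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || true
```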
-
I have actually tested this. When I first started here, they wanted me to build a VDI farm, but they didn't have any hardware for me to do it on. I built my POC with Hyper-V in VMware.
There are some security options that may have to be disabled on the VMware vSwitch, as well as on the physical switch, related to MAC spoofing.
I was surprised, because performance was actually not terribly awful, until you tried to do things like run Paint or a web browser with hardware acceleration enabled, lol.
If they actually had graphics processors on these VMware servers, it likely would have worked just fine.
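For anyone repeating this kind of POC, the usual knobs on the versions I've used (names may vary by ESXi release) are exposing hardware virtualization to the guest in its .vmx file, plus relaxing the vSwitch/port group security policy so the inner hypervisor's "spoofed" MACs aren't dropped:

```
# In the nested hypervisor guest's .vmx - expose VT-x/AMD-V to the guest
vhv.enable = "TRUE"

# On the vSwitch or port group (Security settings), typically needed so
# traffic from the inner hypervisor's VMs isn't filtered:
#   Promiscuous Mode    : Accept
#   MAC Address Changes : Accept
#   Forged Transmits    : Accept
```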
-
@dafyre said:
If they actually had graphics processors on these VMware servers, it likely would have worked just fine.
Something like 30% of Amazon's cloud does now.
-
I've actually been using a nested hypervisor for over a month now as I run through the MS 70-410 training. I've got it running on a Dell T110 that is currently running ESXi 5.1, with 2012 R2 servers and Hyper-V servers nested on top of that, so I can learn in a lab environment. I'm using Pluralsight videos and can play along with the VMs as I go through the training.