Nesting Hypervisors - when would you do this?
-
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
E.g. XenServer with several VMs running your other hypervisors.
-
I disagree with never.
I see this as acceptable for testing. If you want to test Hyper-V fail-over, this would be awesome. Only one box is needed, and you can test how fail-over works.
-
@Dashrender but it would only be a partial test at best.
You couldn't, for example, test a faulty network cord attached to one of the virtualized hypervisors. The best you could do is kill the VM and see what the others do.
-
What about installing the hypervisor in the cloud? Like if you wanted to test the copy/replication of something like XS to one of the VPS cloud providers? (Vultr accepts ISO uploads, right?)
You'd be installing on top of their hypervisor, right?
-
@BRRABill said:
What about installing the hypervisor in the cloud? Like if you wanted to test the copy/replication of something like XS to one of the VPS cloud providers? (Vultr accepts ISO uploads, right?)
You'd be installing on top of their hypervisor, right?
Granted this would be a pretty expensive test, depending on a lot of things...
-
@BRRABill said:
What about installing the hypervisor in the cloud? Like if you wanted to test the copy/replication of something like XS to one of the VPS cloud providers? (Vultr accepts ISO uploads, right?)
You'd be installing on top of their hypervisor, right?
I think Azure and AWS both support installing a hypervisor into the VPS, but not for production.
-
I heard months ago that MS is working on enabling nesting in Hyper-V... not sure where they are with it today.
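For reference, in the Windows preview builds where nested virtualization has appeared, it's enabled per-VM from PowerShell. A rough sketch (the VM name "NestedHost" is a placeholder; the VM must be powered off first):

```powershell
# Expose the host's virtualization extensions to the guest so it can
# itself run Hyper-V (placeholder VM name, VM must be off):
Set-VMProcessor -VMName "NestedHost" -ExposeVirtualizationExtensions $true

# Nested guests typically need MAC address spoofing on the vNIC
# so their traffic can reach the physical network:
Set-VMNetworkAdapter -VMName "NestedHost" -MacAddressSpoofing On
```

These commands only apply on a Hyper-V host with the nested-virtualization feature available, so treat them as a sketch rather than something guaranteed to work on a current release.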
-
@Dashrender said:
I think Azure and AWS both support installing a hypervisor into the VPS, but not for production.
If Vultr accepts an ISO it would also support this, right?
-
@DustinB3403 said:
@Dashrender but it would only be a partial test at best.
You couldn't, for example, test a faulty network cord attached to one of the virtualized hypervisors. The best you could do is kill the VM and see what the others do.
What? A faulty cord? What does that mean? If you're simply talking about having one of the VM hosts fall off the network, we found that option this morning in XO per your request.
If you're talking about an intermittent cable - I suppose one could test this, but I've never heard of a test like that.
The same goes for power loss - you can simply kill the VM to simulate power loss to one VM, as if a power plug was pulled.
Let's see, what other physical situations do we need to worry about? I suppose if you're using shared storage - that would be another VM inside the system, at the same level as the first-level nested VMs - again, just kill the VM, or kill the network connection to that VM; both do the same thing as far as the failover situation is concerned.
Am I missing a physical situation that can't be simulated?
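For what it's worth, on a XenServer-style lab most of those simulations come down to a couple of `xe` commands run on the outer host. A rough sketch (the VM name and `<vif-uuid>` are placeholders):

```shell
# Simulate pulling the power plug on a nested hypervisor VM
# by forcing it off without a clean shutdown:
xe vm-shutdown force=true vm=nested-host-1

# Simulate unplugging its network cable by detaching the virtual NIC:
xe vif-unplug uuid=<vif-uuid>

# "Plug the cable back in" by reattaching the same virtual NIC:
xe vif-plug uuid=<vif-uuid>
```

These need a live XenServer/XCP host and the real VIF UUID (from `xe vif-list`), so this is only a sketch of the failure-injection approach, not a ready-to-run script.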
-
@BRRABill said:
@Dashrender said:
I think Azure and AWS both support installing a hypervisor into the VPS, but not for production.
If Vultr accepts an ISO it would also support this, right?
Not necessarily. If the hypervisor they are using doesn't support nested hypervisors, then no, they wouldn't support it.
-
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
E.g. XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
-
@Jason said:
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
E.g. XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
Is this because you don't want to separate it at the hardware level?
-
@Dashrender said:
@Jason said:
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
E.g. XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
Is this because you don't want to separate it at the hardware level?
Why would you? There's no reason to when you have a large pool of datacenter services; you aren't going to separate the hardware, SANs, etc. out into separate pools for each purpose.
This is the same way you get a hosted cloud. If you buy hosted ESXi, Hyper-V, etc., it's all nested.
-
@Jason said:
@Dashrender said:
@Jason said:
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
E.g. XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
Is this because you don't want to separate it at the hardware level?
Why would you? There's no reason to when you have a large pool of datacenter services; you aren't going to separate the hardware, SANs, etc. out into separate pools for each purpose.
You've lost me - what are you gaining by using nested hypervisors on the same hardware? What separation do you gain that the VMs shouldn't already have because they are VMs?
-
Another use would be containerization with zones, jails, or LXC. It's a different type of virtualization, but it can be nested in a full VM and even within containers.
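As a concrete example of that kind of nesting, LXD containers can themselves host containers once you flip the nesting flag. A rough sketch (the container name "outer" is made up):

```shell
# Allow the existing container "outer" to run its own containers:
lxc config set outer security.nesting true
lxc restart outer

# Inside "outer" you can then install LXD/LXC again and
# launch an inner container as usual.
```

This assumes an LXD setup; the equivalent on zones or jails uses their own configuration knobs, so take this as an illustration of the idea rather than a universal recipe.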
-
@johnhooks said:
Another use would be containerization with zones, jails, or LXC. It's a different type of virtualization, but it can be nested in a full VM and even within containers.
Those are designed to work like that though, aren't they? The containerization doesn't need to know the full spec of the CPU, for example, whereas a full hypervisor does.
-
@Dashrender said:
@Jason said:
@Dashrender said:
@Jason said:
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
E.g. XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
Is this because you don't want to separate it at the hardware level?
Why would you? There's no reason to when you have a large pool of datacenter services; you aren't going to separate the hardware, SANs, etc. out into separate pools for each purpose.
You've lost me - what are you gaining by using nested hypervisors on the same hardware? What separation do you gain that the VMs shouldn't already have because they are VMs?
Not sure where you get that the VMs already have that separation; you can access the hypervisor from a VM running on it and therefore get to any other VM on the hypervisor. It requires some knowledge to do, but it can be done.
-
@Jason said:
@Dashrender said:
@Jason said:
@Dashrender said:
@Jason said:
@DustinB3403 said:
I agree with Jared, you would never do this. But if you were to do it, you'd likely want to do it with a different "base" hypervisor (conceptually it just doesn't work otherwise).
E.g. XenServer with several VMs running your other hypervisors.
Not sure where you guys get that you shouldn't nest hypervisors. This is a pretty common thing in large setups as a security separation.
Especially with publicly accessible servers, you nest the hypervisors as a degree of separation.
Is this because you don't want to separate it at the hardware level?
Why would you? There's no reason to when you have a large pool of datacenter services; you aren't going to separate the hardware, SANs, etc. out into separate pools for each purpose.
You've lost me - what are you gaining by using nested hypervisors on the same hardware? What separation do you gain that the VMs shouldn't already have because they are VMs?
Not sure where you get that the VMs already have that separation; you can access the hypervisor from a VM running on it and therefore get to any other VM on the hypervisor. It requires some knowledge to do, but it can be done.
Oh?
-
I think that it makes sense for a lab environment. It's not for production testing, but for learning and stuff it is fine.
For production, I can think of only one case... when you have to trick a vendor.
Vendor A says "We only support that on hypervisor X" and Vendor B only supports hypervisor Y. So you run your production environment and give each vendor what they ask for. If they say "Sorry, no support for you, we only support X", you put them on X and they can't say anything.
-
@DustinB3403 said:
@Dashrender but it would only be a partial test at best.
You couldn't, for example, test a faulty network cord attached to one of the virtualized hypervisors. The best you could do is kill the VM and see what the others do.
You could disable the NIC.