Virtual Machines vs Containers
-
@aaronstuder That's a loaded question. In short, it's about scale and security.
Containers are better for scale, but much harder to secure.
Virtual Machines are isolated, but there is no scalability beyond the resources you assign them.
Developers are more likely to use containers, while sysadmins are more likely to design app-specific VM farms.
This is a gross generalization but the question was also very broad.
-
@rustcohle hit the nail on the head I think.
Containers are lighter, but they share more resources than many people are comfortable with. They also tend to have some pretty severe limits and are not transparent (a container knows it is a container).
VMs present the appearance of a complete system, so they can be far better secured and the running workload can't tell it is in a VM. They have more overhead than a container, but the gap between the two is getting really tiny.
-
Containers all share a single kernel. So you don't really have separate operating systems, but just "application spaces" on top of a single OS.
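You can see that for yourself with a quick sketch using the Docker SDK for Python (the image name is just an example): every container reports the host's kernel version.
```python
import platform

import docker

client = docker.from_env()
print("host kernel:", platform.release())

# Two separate containers, but the same kernel as the host underneath.
for i in range(2):
    out = client.containers.run("alpine", "uname -r", remove=True)
    print(f"container {i} kernel:", out.decode().strip())
```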
-
I tend to like containers more when I know that my system is going to be dedicated to a singular workload type. Like I just want to split up Ubuntu to keep apps from interfering with each other. Then containers are lighter and make sense.
-
Ya containers are useful when you need hundreds to thousands of machines running, or specific cases beneath that.
I can spin up a VM as fast as I can a container so there isn't a lot of advantage for small numbers.
-
@stacksofplates said in Virtual Machines vs Containers:
Ya containers are useful when you need hundreds to thousands of machines running, or specific cases beneath that.
I can spin up a VM as fast as I can a container so there isn't a lot of advantage for small numbers.
I think that containers add risk of effort - meaning if I use containers it's unlikely that I can only use containers, so I then have to have multiple management systems for dealing with just one or two things. That's not helpful. Virtualization is much more likely to give me "one system to rule them all".
-
@scottalanmiller said in Virtual Machines vs Containers:
@stacksofplates said in Virtual Machines vs Containers:
Ya containers are useful when you need hundreds to thousands of machines running, or specific cases beneath that.
I can spin up a VM as fast as I can a container so there isn't a lot of advantage for small numbers.
I think that containers add risk of effort - meaning if I use containers it's unlikely that I can only use containers, so I then have to have multiple management systems for dealing with just one or two things. That's not helpful. Virtualization is much more likely to give me "one system to rule them all".
I can think of one time I used containers (LXC) that was actually useful. It was when I had a XenServer host and used it for XO. I had an Ansible playbook that cloned the container and updated XO. If it broke for some reason, I could just destroy the container and start the old one again.
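Roughly, the flow was clone, update, and keep the old container around as the rollback. Here's a Python sketch of that same idea (the real thing was an Ansible playbook; the container names are hypothetical and this assumes lxc-copy from LXC 2.0+):
```python
import subprocess

OLD = "xo-current"   # hypothetical: the known-good container
NEW = "xo-updated"   # hypothetical: the clone that gets the XO update

def lxc(cmd, *args):
    # Thin wrapper around the lxc-* CLI tools.
    subprocess.run([f"lxc-{cmd}", *args], check=True)

lxc("stop", "-n", OLD)              # lxc-copy wants the source stopped
lxc("copy", "-n", OLD, "-N", NEW)   # clone the container
lxc("start", "-n", NEW)             # boot the clone and update XO inside it
# If the update breaks something, roll back:
#   lxc("destroy", "-n", NEW)
#   lxc("start", "-n", OLD)
```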
-
One big case is licensing. On Windows, you can get two VMs from one Standard license, but each VM can run "unlimited" containers. That's pretty cool. So Windows, as is often the case, has its own licensing-driven use cases that don't normally affect other OSes.
-
@scottalanmiller said in Virtual Machines vs Containers:
One big case is licensing. On Windows, you can get two VMs from one standard license. But each VM can run "unlimited" containers. That's pretty cool. So Windows, as is often the case, has its own use cases around licensing that arise that don't normally affect other OSes.
Ya I'm not sure how Red Hat handles this either. I've never used their Atomic Host, so I'm not sure what the licensing situation is with containers. I know they went with Docker over LXC, but I don't know if they're "RHEL" containers or "CentOS" containers.
-
That's funny, I was just talking about this quite a bit today with @scottalanmiller. It really comes down to licensing. I would rather license one VM and run containers if I have licensing issues, but if the licensing is no sweat, then I would rather scale out VMs as needed to separate applications. Then there's the question, "if you're having licensing limitations, why couldn't you just use XenServer and have everything Linux?"
Aside from licensing, it really is just preference. By isolating resources, both do the same thing. One could argue that containers are more lightweight, but not enough to notice with today's technology. I prefer VMs over containers, though, if licensing doesn't play a role.
Also adding, if I wanted to squeeze every last little bit of performance out of a machine, I would consider containers.
-
Application architecture and the infrastructure that your application is going to be running on is a really important consideration as well. If you have an application or service that regularly sees huge deviations in use and load, you'll probably benefit more from using containers and making your workloads ephemeral.
Application isn't being used heavily at 3:00 AM EST? Start culling under-utilized nodes/containers and killing them as they fall off. When your still-running nodes/containers start to see higher activity, respond by spinning up more nodes and containers and bringing them online to meet demand.
However, if you have a monolithic app, containers are probably not the answer.
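For a sense of what that culling/spin-up loop can look like, here's a rough sketch using the Docker SDK for Python; the image name, label, and load metric are all hypothetical stand-ins for whatever your orchestration actually watches.
```python
import docker

IMAGE = "myorg/webapp:latest"          # hypothetical application image
LABELS = {"app": "webapp"}             # label used to find our containers
TARGET_LOAD = 0.70                     # aim for ~70% average utilization

def get_average_load():
    # Hypothetical stub: in practice this comes from your metrics system.
    return 0.35

def scale_to(client, desired):
    running = client.containers.list(filters={"label": "app=webapp"})
    if len(running) < desired:
        # Demand is up: spin up more containers to meet it.
        for _ in range(desired - len(running)):
            client.containers.run(IMAGE, detach=True, labels=LABELS)
    else:
        # Demand is down: cull the extra containers.
        for extra in running[desired:]:
            extra.stop()
            extra.remove()

if __name__ == "__main__":
    client = docker.from_env()
    running = client.containers.list(filters={"label": "app=webapp"})
    load = get_average_load()
    desired = max(1, round(len(running) * load / TARGET_LOAD))
    scale_to(client, desired)
```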
-
@RamblingBiped said in Virtual Machines vs Containers:
Application isn't being used heavily at 3:00 AM EST? Start culling under-utilized nodes/containers and killing them as they fall off. When your still-running nodes/containers start to see higher activity, respond by spinning up more nodes and containers and bringing them online to meet demand.
You can already do all of that with VMs, though.
-
Why not use both....
Have a Docker/Kubernetes endpoint that "forks" a fresh VM in a few milliseconds when a container command is run, and gives you full resource management and network micro-segmentation while your developers get the "speed and ease of deployment" of a container?
Most people don't need 20K Containers, they just have developers who want to use existing container framework tools for deployment.
Virtual machine admins don't want to see a single VM using 5,000 IPs from DHCP with no visibility into what resources it's consuming, or lose the ability to secure three-tier apps and the like. Talking to IDC and others, the majority of containers are sitting inside VMs and will continue to be. Bare-metal container farms are only for the most extreme use cases.
http://www.vmware.com/products/vsphere/integrated-containers.html
-
@scottalanmiller said in Virtual Machines vs Containers:
@RamblingBiped said in Virtual Machines vs Containers:
Application isn't being used heavily at 3:00 AM EST? Start culling under-utilized nodes/containers and killing them as they fall off. When your still-running nodes/containers start to see higher activity, respond by spinning up more nodes and containers and bringing them online to meet demand.
You can already do all of that with VMs, though.
Dynamic resource allocation? Didn't know that outside of thin provisioning which wouldn't fit this use case. What is this called so I can research it?
-
@wirestyle22 said in Virtual Machines vs Containers:
@scottalanmiller said in Virtual Machines vs Containers:
@RamblingBiped said in Virtual Machines vs Containers:
Application isn't being used heavily at 3:00 AM EST? Start culling under-utilized nodes/containers and killing them as they fall off. When your still-running nodes/containers start to see higher activity, respond by spinning up more nodes and containers and bringing them online to meet demand.
You can already do all of that with VMs, though.
Dynamic resource allocation? Didn't know that outside of thin provisioning which wouldn't fit this use case. What is this called so I can research it?
Just "turn them off". This is the entire concept of cloud computing. You can turn off a container, you can turn off a VM in the same way.
-
@scottalanmiller said in Virtual Machines vs Containers:
@wirestyle22 said in Virtual Machines vs Containers:
@scottalanmiller said in Virtual Machines vs Containers:
@RamblingBiped said in Virtual Machines vs Containers:
Application isn't being used heavily at 3:00 AM EST? Start culling under-utilized nodes/containers and killing them as they fall off. When your still-running nodes/containers start to see higher activity, respond by spinning up more nodes and containers and bringing them online to meet demand.
You can already do all of that with VMs, though.
Dynamic resource allocation? Didn't know that outside of thin provisioning which wouldn't fit this use case. What is this called so I can research it?
Just "turn them off". This is the entire concept of cloud computing. You can turn off a container, you can turn off a VM in the same way.
I know KVM can do dynamic resource allocation. You have to set a max number beforehand, but you can change RAM and CPU on the fly as long as it's the same as or under your max.
Not sure about other hypervisors.
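For reference, a minimal sketch of that with the libvirt Python bindings (the guest name and target values are hypothetical, and the domain's configured maximums have to already allow the headroom):
```python
# Live ballooning of RAM and vCPUs for a running KVM guest via libvirt.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")      # hypothetical guest name

# Memory is specified in KiB; this asks for 4 GiB on the running guest.
dom.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Bring the running guest up to 4 vCPUs (must be <= the configured maximum).
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()
```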
-
@stacksofplates said in Virtual Machines vs Containers:
@scottalanmiller said in Virtual Machines vs Containers:
@wirestyle22 said in Virtual Machines vs Containers:
@scottalanmiller said in Virtual Machines vs Containers:
@RamblingBiped said in Virtual Machines vs Containers:
Application isn't being used heavily at 3:00 AM EST? Start culling under-utilized nodes/containers and killing them as they fall off. When your still-running nodes/containers start to see higher activity, respond by spinning up more nodes and containers and bringing them online to meet demand.
You can already do all of that with VMs, though.
Dynamic resource allocation? Didn't know that outside of thin provisioning which wouldn't fit this use case. What is this called so I can research it?
Just "turn them off". This is the entire concept of cloud computing. You can turn off a container, you can turn off a VM in the same way.
I know KVM can do dynamic resource allocation. You have to set a max number beforehand, but you can change RAM and CPU on the fly as long as it's the same as or under your max.
Not sure about other hypervisors.
ESXi definitely has some cool tech around that kind of stuff, too. And that's just at the hypervisor level. You still have cloud-style allocation on top of that.
-
@scottalanmiller said in Virtual Machines vs Containers:
@wirestyle22 said in Virtual Machines vs Containers:
@scottalanmiller said in Virtual Machines vs Containers:
@RamblingBiped said in Virtual Machines vs Containers:
Application isn't being used heavily at 3:00 AM EST? Start culling under-utilized nodes/containers and killing them as they fall off. When your still-running nodes/containers start to see higher activity, respond by spinning up more nodes and containers and bringing them online to meet demand.
You can already do all of that with VMs, though.
Dynamic resource allocation? Didn't know that outside of thin provisioning which wouldn't fit this use case. What is this called so I can research it?
Just "turn them off". This is the entire concept of cloud computing. You can turn off a container, you can turn off a VM in the same way.
Right, but aren't you wasting a lot of resources at that point as well? I'm not sure how the licensing works with containers running only the set of services required for the app to run, but I'd think it would be a lot more cost-effective resource-wise than running an entire OS.
-
@wirestyle22 said in Virtual Machines vs Containers:
@scottalanmiller said in Virtual Machines vs Containers:
@wirestyle22 said in Virtual Machines vs Containers:
@scottalanmiller said in Virtual Machines vs Containers:
@RamblingBiped said in Virtual Machines vs Containers:
Application isn't being used heavily at 3:00 AM EST? Start culling under-utilized nodes/containers and killing them as they fall off. When your still-running nodes/containers start to see higher activity, respond by spinning up more nodes and containers and bringing them online to meet demand.
You can already do all of that with VMs, though.
Dynamic resource allocation? Didn't know that outside of thin provisioning which wouldn't fit this use case. What is this called so I can research it?
Just "turn them off". This is the entire concept of cloud computing. You can turn off a container, you can turn off a VM in the same way.
Right, but aren't you wasting a lot of resources at that point as well? I'm not sure how the licensing works with containers running only the set of services required for the app to run, but I'd think it would be a lot more cost-effective resource-wise than running an entire OS.
Wasting them how? One of the big points about containers vs. VMs is that the overhead difference is trivial. Containers are lighter and faster, but by a tiny amount and there are complexities and limitations that come with that - like having to share one kernel instead of getting a unique kernel for each workload. So if you need to update the kernel or have anything compiled against a kernel it can cause problems.
-
@scottalanmiller said in Virtual Machines vs Containers:
@wirestyle22 said in Virtual Machines vs Containers:
@scottalanmiller said in Virtual Machines vs Containers:
@wirestyle22 said in Virtual Machines vs Containers:
@scottalanmiller said in Virtual Machines vs Containers:
@RamblingBiped said in Virtual Machines vs Containers:
Application isn't being used heavily at 3:00 AM EST? Start culling under-utilized nodes/containers and killing them as they fall off. When your still-running nodes/containers start to see higher activity, respond by spinning up more nodes and containers and bringing them online to meet demand.
You can already do all of that with VMs, though.
Dynamic resource allocation? Didn't know that outside of thin provisioning which wouldn't fit this use case. What is this called so I can research it?
Just "turn them off". This is the entire concept of cloud computing. You can turn off a container, you can turn off a VM in the same way.
Right, but aren't you wasting a lot of resources at that point as well? I'm not sure how the licensing works with containers running only the set of services required for the app to run, but I'd think it would be a lot more cost-effective resource-wise than running an entire OS.
Wasting them how? One of the big points about containers vs. VMs is that the overhead difference is trivial. Containers are lighter and faster, but by a tiny amount and there are complexities and limitations that come with that - like having to share one kernel instead of getting a unique kernel for each workload. So if you need to update the kernel or have anything compiled against a kernel it can cause problems.
Why do containers exist if the difference is trivial, though? I'm assuming containers become "best practice" at extremely high workloads?