Containers in IT
-
@Dashrender said:
So the application - the web daemon - can be in a container, and it just pulls data from sources behind it. OK.
This is for load balancing?
This is what I am wondering too. What is the advantage of a container over a VM? Both can be built and destroyed in moments but the VM has added flexibility that the container doesn't necessarily have. Would this be for performance and resource utilization?
-
@coliver said:
@Dashrender said:
So the application - the web daemon - can be in a container, and it just pulls data from sources behind it. OK.
This is for load balancing?
This is what I am wondering too. What is the advantage of a container over a VM? Both can be built and destroyed in moments but the VM has added flexibility that the container doesn't necessarily have. Would this be for performance and resource utilization?
Containers are lighter and faster, have different licensing concerns, are smaller to deploy, smaller to store, easier to pass around, etc.
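As a rough illustration of the "lighter and faster" part (the image and container names here are just examples), spinning a container up and tearing it down with Docker is a matter of seconds:

    # pull a stock web server image (far smaller than a full VM disk)
    docker pull nginx

    # run it; the container is serving on port 8080 within a second or so
    docker run -d --name demo-web -p 8080:80 nginx

    # and it is just as quick to throw away
    docker rm -f demo-web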
-
Also, containers provide some of these features for shops too small to have a cloud to do this with VMs.
-
@scottalanmiller said:
Also, containers provide some of these features for shops too small to have cloud to do this with VMs.
Like you were talking about earlier... Doing both can be beneficial. Have a couple of big VMs for LXC containers, and what-not... You get the benefits of both virtualization and containers.
-
Yes, and I think that that is the direction that we will see most companies go.
-
That's what I have. I have a VM that hosts LXC containers, with XO in one of them. It makes updating easy: I can use Ansible to either clone the container and update XO, or just fire up a new container and install XO quickly. I don't need things like reboot scripts then, because I can just include that in the Ansible playbook, and reboots take about 1 second.
It also allows me to pass variables to the playbook so I can install XO from different git branches.
Another advantage is that if you want to send a file to another container, you can just copy it from one container's directory into the other's. Very quick with large files vs. using the network. That is assuming you're using a dir backing store and not a logical volume or something else.
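A rough sketch of that workflow with the stock LXC tools (the playbook and variable names here, xo.yml and xo_branch, are just placeholders for whatever the real playbook uses):

    # clone the existing XO container so the original keeps running untouched
    lxc-copy -n xo -N xo-test
    lxc-start -n xo-test

    # run the playbook against the clone, passing the git branch to build from
    ansible-playbook xo.yml -e "xo_branch=next-release"

    # with a dir backing store, moving a big file between containers
    # is just a local filesystem copy, no network involved
    cp /var/lib/lxc/xo/rootfs/root/backup.tar.gz /var/lib/lxc/xo-test/rootfs/root/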
-
Ubuntu is making some big strides with LXC. They call it LXD and it will have live migration of containers.
-
I have a small EC2 instance running a containerized instance of Discourse for a set of forums we use to support a specific product. It has been up for ~8 months without issue.
-
I also have my website in an unprivileged container. That way if someone were to gain root access to the web server and somehow break out of the container, the only thing they can affect is the home folder for that non-sudo user.
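For anyone wanting to try the same thing, the unprivileged part mostly boils down to mapping container UIDs/GIDs onto an unprivileged range on the host. A minimal sketch of the per-user config (the exact ranges come from your own /etc/subuid and /etc/subgid entries):

    # ~/.config/lxc/default.conf
    # root (UID/GID 0) inside the container maps to 100000 on the host,
    # so a breakout lands you in an unprivileged account, not host root
    lxc.id_map = u 0 100000 65536
    lxc.id_map = g 0 100000 65536

    # then create the container as a normal user with the download template
    lxc-create -n web -t download -- -d ubuntu -r xenial -a amd64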
-
Sorry to necro this, but it's relevant to my new job. My understanding of the benefit of containers is resource management. Hypervisors essentially emulate virtual hardware and are more resource intensive because of that, whereas containers share the host's operating system, which makes them much more efficient resource-wise but also creates limitations. You can also have more server applications running for less money (reduced cost of hardware), especially if you have a reason to run multiple copies of an application. There are positives and negatives to it.
Am I looking at this correctly @scottalanmiller ?
-
@wirestyle22 It also allows you to add additional levels of security by essentially walling off each instance of a service, versus running that service alongside the other services that your application(s) might depend upon. So instead of having a single virtual machine running Apache, MySQL, and PHP, you'd have a container for each service, each with its own hardened attack surface. Also, it allows for a more efficient and responsive dynamic scaling model for applications that is mostly platform independent.
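As a rough sketch of what that looks like with Docker (container names, password, and image tags here are just placeholders), each service lives in its own container on a shared private network instead of all sharing one VM:

    # one private network for the stack
    docker network create app-net

    # database in its own container, never exposed to the outside
    docker run -d --name db --network app-net \
        -e MYSQL_ROOT_PASSWORD=changeme mysql:5.7

    # PHP-FPM in its own container
    docker run -d --name php --network app-net php:7-fpm

    # Apache in front is the only container publishing a port to the host
    docker run -d --name web --network app-net -p 80:80 httpd:2.4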
-
@RamblingBiped said in Containers in IT:
@wirestyle22 It also allows you to add additional levels of security by essentially walling off each instance of a service, versus running that service alongside the other services that your application(s) might depend upon. So instead of having a single virtual machine running Apache, MySQL, and PHP, you'd have a container for each service, each with its own hardened attack surface. Also, it allows for a more efficient and responsive dynamic scaling model for applications that is mostly platform independent.
Makes sense
-
The tough part when it comes to dealing with containers (at least for me) is picking the platform you are going to run them on and then learning all the tools.
Do you use Docker? Rocket? LXC?
Do you automate configuration management and deployment using Puppet? Chef? Ansible?
Do you run them bare metal or nest them in VM instances on a Hypervisor/Cluster?
And those are just the tools that come to mind. You also need a certain level of proficiency in shell scripting and many of the other frequently used languages (Python, Ruby, JavaScript, PHP...).
There are so many pieces of the puzzle that need to be in place before containerization of workloads can become a viable replacement for current virtualized infrastructures. Many projects have already adopted the format and written their own scripts/APIs to simplify deploying and maintaining their products in containers. The Discourse forum software is a great example: everything is managed from a single script, instead of having to interface with Docker directly.
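For reference, with the standard Discourse install that single script is the launcher in /var/discourse, and day-to-day use looks roughly like:

    cd /var/discourse
    # rebuild the container after editing containers/app.yml or to pull updates
    ./launcher rebuild app
    # other common operations
    ./launcher stop app
    ./launcher start app
    ./launcher logs app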
-
@johnhooks said in Containers in IT:
Ubuntu is making some big strides with LXC. they call it LXD and it will have live migration of containers.
LXD is actually a management layer on top of LXC. Ubuntu is very vocal that they are still LXC containers, just with the extra LXD technology on top making them nicer to work with.
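For anyone who hasn't looked at it yet, the LXD layer is driven through its own lxc client command; a quick sketch (the remote and container names are made up):

    # launch an Ubuntu container under LXD
    lxc launch ubuntu:16.04 web01

    # see what LXD is managing
    lxc list

    # live-migrate the running container to another LXD host
    # (needs CRIU working on both ends)
    lxc move web01 other-host:web01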
-
@wirestyle22 said in Containers in IT:
Sorry to necro this but it's relevant to my new job. My understanding of the benefit of containers is resource management. Hypervisors emulate virtual hardware essentially and they are more resource intensive because of that where as containers use a shared operating system which makes them much more efficient resource wise but also creates limitations. You can also have more server applications running for less money (reduced cost of hardware). Especially if you have a reason to run multiple copies of an application. There are positives and negatives to it.
Am I looking at this correctly @scottalanmiller ?
That's pretty good. It is a "lighter" virtualization technology. Full Disparate Emulation is the heaviest and Jails are the lightest. Here is the basic scope, starting from heaviest (most overhead) to lightest (least overhead). The heavier you go, the more options and power you have as far as features and compatibility; the lighter you go, the better density and speed you can get.
Emulation - Full Virtualization - Paravirtualization - Hardware Segregation - Containers - "Jails"
-
@RamblingBiped said in Containers in IT:
@wirestyle22 It also allows you to add additional levels of security by essentially walling off each instance of a service, versus running that service alongside the other services that your application(s) might depend upon. So instead of having a single virtual machine running Apache, MySQL, and PHP, you'd have a container for each service, each with its own hardened attack surface. Also, it allows for a more efficient and responsive dynamic scaling model for applications that is mostly platform independent.
Although you can do all of that segregation with VMs as well, and many of us have for years.
-
@RamblingBiped said in Containers in IT:
Do you automate configuration management and deployment using Puppet? Chef? Ansible?
This particular item (DevOps vs. Snowflakes) applies to VMs and containers equally.
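As a rough illustration (inventory and playbook names made up), the same Ansible run is identical whether the group members are VMs or containers:

    # inventory: the group can mix VMs and containers freely, e.g.
    #   [webservers]
    #   vm-web01.example.com
    #   lxc-web02.example.com

    # the configuration management step does not care which is which
    ansible-playbook -i inventory site.yml --limit webservers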