Container core technology?
-
@pete-s said in Container core technology?:
Because containers need the Linux kernel features.
It is a shared kernel, so it has to be Linux or 100% Linux compatible: whatever process is running in the container is a raw Linux process, and if it doesn't get Linux it will be no different than trying to run a Linux binary natively on Windows. Containers don't aid in compatibility in that sense; they still require a totally compatible kernel to be shared with them.
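One way to see the shared kernel directly (a minimal sketch, assuming a Linux host; the container comparison requires any runtime you have handy):

```python
import platform

# Every process on the box -- containerized or not -- talks to the one shared
# kernel. Run this on a Linux host, then inside any container on that host:
# unlike a VM, the reported release is identical, because there is only one
# kernel in play.
kernel = platform.uname().release
print(kernel)
```

Inside a VM you'd see the guest's own kernel version; inside a container you see the host's, because there is nothing else to see.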
-
@pete-s said in Container core technology?:
And when you run containers on another OS such as Docker on Windows
In reality, Docker never runs on Windows. By definition, if it is Docker it has to run on Linux. So it's always Docker on Linux, and Linux is a kernel, not an end-user process, so it can't run on Windows directly either. But it can run on Hyper-V. So when doing this, MS really has to work hard to hide all of the under-the-hood magic: it fires up Hyper-V, creates a Linux virtual machine, boots that VM, and runs Docker on that!
-
@pete-s said in Container core technology?:
So whatever container solution you run, the core technology is the same.
It varies a lot. Docker is a super lean container tech, meant to run a single process and anything tightly coupled to it. But LXC includes the entire operating system sans kernel. So if you are using LXC containers, you can run Ubuntu on Fedora, Fedora on CentOS, CentOS on Ubuntu, Alpine on Ubuntu, CentOS on CentOS... the sky is the limit, as long as they are okay sharing the same kernel version and compilation settings.
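The reason the distros can mix is that a "distro" inside an LXC-style container is just its userland (files like `/etc/os-release`), while the kernel always belongs to the host. A small sketch of that split, runnable on any Linux box:

```python
import platform
from pathlib import Path

# The container ships the userland identity (first line printed below);
# the host supplies the kernel (second line). Pairing, say, Ubuntu userland
# with a Fedora host kernel works because only the userland half travels
# with the container.
os_release = Path("/etc/os-release")
userland = "(no /etc/os-release on this system)"
if os_release.exists():
    for line in os_release.read_text().splitlines():
        if line.startswith("PRETTY_NAME="):
            userland = line
print("userland:", userland)
print("kernel:  ", platform.uname().release)
```

Run the same script inside an LXC container and the first line changes to the container's distro while the second stays the host's.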
-
@scottalanmiller said in Container core technology?:
OK, but it's still just isolated processes in the kernel, right? So from the kernel's perspective it's all the same.
-
@pete-s said in Container core technology?:
Correct, the kernel really can't tell.
-
@scottalanmiller said in Container core technology?:
If we look at security, doesn't that mean that it's the same as well?
I mean it's the kernel that is responsible for the isolation of the groups of processes.
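The "groups of processes" the kernel isolates are tracked as namespaces, and every Linux process, containerized or not, already belongs to a full set of them. You can see your own membership under `/proc` (Linux-only sketch):

```python
import os

# Each entry under /proc/self/ns names a kernel namespace this process lives
# in (pid, net, mnt, uts, ...). A container runtime "isolates" a group of
# processes simply by handing them fresh namespace objects here instead of
# the host's -- the kernel mechanism is identical either way.
namespaces = {ns: os.readlink(f"/proc/self/ns/{ns}")
              for ns in sorted(os.listdir("/proc/self/ns"))}
for name, target in namespaces.items():
    print(name, "->", target)
```

Two processes in the same container share these inode numbers; processes in different containers don't. That's the whole trick, from the kernel's perspective.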
-
@pete-s said in Container core technology?:
If your concern is the stability of the system, yes, it is the same. If your concern is the isolation between processes, containers basically crank the kernel security all the way up. Technically, anything a container can do, you can do with just the OS. Containerizing is basically the ultimate in kernel-level isolation settings. So technically, security is the same. In practice, it's a lot of security that no one ever tries to enable otherwise.
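The "crank it all the way up" point is visible from `/proc` as well: capabilities, seccomp state, and cgroup membership are ordinary per-process kernel attributes that exist for every process, whether or not a runtime set them (Linux-only sketch; field names are from `/proc/self/status`):

```python
from pathlib import Path

# Capability and seccomp state are plain per-process kernel settings; a
# container runtime just sets them far more restrictively than a normal
# login shell ever does.
status = Path("/proc/self/status").read_text()
caps = [l for l in status.splitlines() if l.startswith(("CapEff:", "Seccomp:"))]
for line in caps:
    print(line)
# Likewise, every process already sits in a cgroup, container or not.
print(Path("/proc/self/cgroup").read_text().strip())
```

Run this in a shell and then in a Docker container and compare: the container's effective capability mask is much smaller and seccomp is on, but the knobs themselves were always there.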
-
@scottalanmiller said in Container core technology?:
Thanks! I just wanted to make sure my high-level understanding of the underlying technology was right.
Since it's abstracted away, it's not the first thing mentioned when you look at different container management solutions.
-
@pete-s said in Container core technology?:
No kidding. It's to the point that most container management solutions make sweeping assumptions about containers that aren't true at all. It's become like "cloud", where you find out that 90% of things called "cloud" aren't about cloud at all, or only make sense in super specific, never actually stated, situations.
-
@scottalanmiller said in Container core technology?:
@pete-s said in Container core technology?:
So whatever container solution you run, the core technology is the same.
It varies a lot. Docker is a super lean container tech, meant to run a process and its tightly coupled processes. But LXC includes the entire operating system sans kernel. [...]
You can run an init process in an OCI container. It's assumed you pretty much won't, but it is possible. It's helpful for testing some things and makes it work similarly to something like LXC/LXD.
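The practical reason an init matters inside a container is mostly zombie reaping: whatever runs as PID 1 adopts orphaned processes and must collect their exit statuses. A minimal sketch of that duty in plain Python, not tied to any particular runtime:

```python
import os

# The core job of an init (PID 1): wait() on children, including orphans it
# adopts, so they don't linger as zombies. Here an ordinary parent plays
# the role for one child.
pid = os.fork()
if pid == 0:          # child: exit immediately, leaving a status to collect
    os._exit(0)
reaped, status = os.waitpid(pid, 0)   # the "init" duty: reap the child
print("reaped:", reaped == pid, "clean exit:", os.WIFEXITED(status))
```

In a single-process OCI container your application is PID 1 and inherits this duty, which is why shims like Docker's `--init` exist; a full init image takes care of it the way LXC's in-container OS does.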