How many Linux servers do I really need?
-
@scottalanmiller said:
@Dashrender said:
LOL - I was wondering what benefit you'd get from running Windows inside a Linux container? (though I suppose one could say we already do that with XenServer - lol)
Same benefits of containers anywhere. Lighter than virtualization.
Running Windows inside a container would be lighter than virtualization?
So you're saying that you see a day when Linux would be installed on the hardware and Windows would be installed inside a container?
Aren't we already doing that with XenServer - is a VM inside XenServer a container? lol - I'm confusing myself.
-
@Dashrender said:
With that being the case - then why do we hear so much about it? OK, maybe not actually hear real information - but everywhere I turn it's Docker this, Docker that - and MS will support Docker soon too, etc... It seems odd that even places like SpiceWorks at times seem overrun by it - is it simply that SMB IT personnel don't want to be left in the dust, so we glom onto anything we can?
We hear about cloud computing non-stop there too, yet it applies as a need to no one in the SMB. Sure, they can leverage it, but not in a cloud way. That it is cloud computing doesn't matter to SMB users; they only care that it is a VPS. Docker is super important - to DevOps shops. The SMB market, especially certain communities, is full of people lacking a lot of tech skills who run off of buzzwords, marketing, and "what they think the enterprise is doing." Ever wonder how anyone in the SMB even knows about SAN, let alone buys one? Same thing. They hear what's cool from the "big boys" and assume that they will sound cool if they talk about it, too.
There are lots of technologies that have little place in normal SMB yet are the major foci of conversation there. I'd call it "enterprise envy."
-
@Dashrender said:
Running Windows inside a container would be lighter than virtualization?
Containerization is lighter than virtualization. That is its sole purpose.
-
@Dashrender said:
Aren't we already doing that with XenServer - is a VM inside XenServer a container? lol - I'm confusing myself.
XenServer is a hypervisor. Windows is virtualized there, not containerized. Don't start applying the word container to things that are not container platforms. VMs run on hypervisors, containers do not.
-
@Dashrender said:
Personally I felt like I missed the beginning of virtualization because to me it felt like it was for enterprise only - of course now it's being touted as the absolute starting point for any project unless you can show specific reasons why it doesn't/can't/won't work for your project (unlike SAN, which should still primarily live in the enterprise)
Virtualization has been for the SMB since the day it was released. It's never been about size or scale. Containers too. SMBs that run Linux have used containers for a decade, it's standard, old hat, so old no one talks about it.
What is interesting today is that three container players - Docker, Rocket, and LXC - have emerged with a lot of great technology behind them and big communities, and are finally being used on a large scale. DevOps has made containers important in a way that they have not been before. In the same way that cloud and DevOps have made VMs not just important but necessary, containers take this to another level by making things lighter still.
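The "lighter" claim is easy to see in practice. A minimal sketch, assuming the docker CLI is installed and the host can pull the small alpine image (the snippet skips itself harmlessly otherwise):

```shell
# A container starts in roughly a second because it shares the host kernel:
# no OS boot, no virtual hardware to emulate.
# Guarded so this is a no-op on machines without Docker installed.
if command -v docker >/dev/null 2>&1; then
    # Run a single command in a throwaway Alpine container and time it.
    time docker run --rm alpine echo "hello from a container"
else
    echo "docker not installed; skipping demo"
fi
```

Compare that to booting even a minimal VM, which takes a full OS image and tens of seconds just to reach a shell.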
-
@Dashrender said:
maintaining all of these micro VMs seems like such a pain in the ass.
You'll confuse yourself less if you always call them containers and always call the others VMs. Don't mix the terms; it will just be confusing. There are only two resulting object terms: VMs and containers.
A container takes no more effort to maintain than a VM; to a systems admin they are identical. Likewise, a VM takes no more effort to maintain than a physical box - less, actually. There is nothing that creates "more" work.
-
@scottalanmiller said:
A container takes no more effort to maintain than a VM; to a systems admin they are identical. Likewise, a VM takes no more effort to maintain than a physical box - less, actually. There is nothing that creates "more" work.
Right, that I understand, but putting each and every service, when possible, in its own VM or container is what I meant by the micro VMs - instead of maintaining one system that has AD/File/Print/small DB, now you're maintaining 4 boxes. Granted, with tools, managing them is easier today, but not the same as managing one. That's all I was getting at.
-
@Dashrender said:
@scottalanmiller said:
A container takes no more effort to maintain than a VM; to a systems admin they are identical. Likewise, a VM takes no more effort to maintain than a physical box - less, actually. There is nothing that creates "more" work.
Right, that I understand, but putting each and every service, when possible, in its own VM or container is what I meant by the micro VMs - instead of maintaining one system that has AD/File/Print/small DB, now you're maintaining 4 boxes. Granted, with tools, managing them is easier today, but not the same as managing one. That's all I was getting at.
It's not the same as managing a single one, but it should be just as easy.
-
@Dashrender said:
@scottalanmiller said:
A container takes no more effort to maintain than a VM; to a systems admin they are identical. Likewise, a VM takes no more effort to maintain than a physical box - less, actually. There is nothing that creates "more" work.
Right, that I understand, but putting each and every service, when possible, in its own VM or container is what I meant by the micro VMs - instead of maintaining one system that has AD/File/Print/small DB, now you're maintaining 4 boxes. Granted, with tools, managing them is easier today, but not the same as managing one. That's all I was getting at.
Ah, I see. I would argue that it is easier to manage, not harder, especially with Linux. Managing the OS itself is so trivial and so repeatable that there is nearly zero overhead from it - remember, this isn't Windows. You can easily manage ten Linux boxes for every one Windows box before even talking DevOps (these are real numbers from enterprise environments), so keep that in mind. Then consider how much easier it is to manage applications when you have no fear of interaction issues and can isolate the OS/application pair for troubleshooting, repair, updates, etc.
For example, you need to do a reboot on the database server but the email server can't go down at the same time - no problem, you can reboot by application.
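A rough sketch of what "reboot by application" looks like when each service lives in its own container - the container name db is hypothetical, the snippet assumes Docker, and it does nothing if no such container exists:

```shell
# Restart only the database service; every other container keeps running.
# "db" is a hypothetical container name - adjust to your environment.
if command -v docker >/dev/null 2>&1; then
    if [ -n "$(docker ps -q --filter name=db)" ]; then
        docker restart db
    else
        echo "no container named db; nothing to restart"
    fi
else
    echo "docker not installed; skipping demo"
fi
```

The email server, file server, and everything else never notice - that isolation is the whole point of one-service-per-container.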
-
I really think LXD will be a nice addition. The one bad thing about LXC is that it isn't standardized across distros. Each one seems to give its bridge a different name, and if you try to create a container from a release of Ubuntu with systemd on a host without systemd, it causes some issues.
The live migration in LXD will be a killer feature.
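For reference, the LXD workflow looks roughly like this - a sketch rather than a tested recipe. The remote name host2, the container name web, and the image alias are all assumptions, and the commands only run if you opt in explicitly:

```shell
# LXD sketch: launch a container, then live-migrate it to another host.
# Requires the lxc client, a configured remote called "host2" (hypothetical),
# and LXD built with CRIU support for live migration.
# Opt in by setting RUN_LXD_DEMO=1; otherwise this only prints a notice.
if [ "${RUN_LXD_DEMO:-0}" = "1" ] && command -v lxc >/dev/null 2>&1; then
    lxc launch ubuntu:16.04 web   # create and start a container named "web"
    lxc move web host2:web        # migrate it, live, to the remote "host2"
else
    echo "LXD demo skipped (set RUN_LXD_DEMO=1 with lxc installed to run)"
fi
```

Because `lxc move` works against any configured remote, the same two-command shape covers both local renames and cross-host migration.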
-
@johnhooks said:
I really think LXD will be a nice addition. The one bad thing about LXC is that it isn't standardized across distros. Each one seems to give its bridge a different name, and if you try to create a container from a release of Ubuntu with systemd on a host without systemd, it causes some issues.
The live migration in LXD will be a killer feature.
Same issue you will always have with containers.
-
@scottalanmiller said:
@johnhooks said:
I really think LXD will be a nice addition. The one bad thing about LXC is that it isn't standardized across distros. Each one seems to give its bridge a different name, and if you try to create a container from a release of Ubuntu with systemd on a host without systemd, it causes some issues.
The live migration in LXD will be a killer feature.
Same issue you will always have with containers.
Just throwing it out there as a reference.
-
@anonymous
How many Linux servers do you need?
All of them. You need them all.
-
@RamblingBiped said:
@anonymous
How many Linux servers do you need?
All of them. You need them all.
/thread
-
Yup, that says it all.