Docker or Small VMs
-
@hobbit666 said:
OK, so I started to look into Docker, and I love the idea of having little "containers" running on a single machine while still being independent to a degree.
My thinking was to stop running 3-4 Linux machines that are using up resources on my ESXi server and run all 4 applications on a single Linux server. But is this the way to go, or should I stick with them on separate VMs but make the resources a bit leaner and only give them what they need?
Small VMs are easier to manage, configure, and troubleshoot when all is said and done.
-
Because of all the hype, I too looked into containers recently. Then Scott enlightened me by telling me it's mainly for DevOps - considering I don't work in that area, I didn't really have any clue what that was, so he told me about containers in another way.
Containers are really only useful if you need many identical apps running. If you have one app here and one app there, you'll gain very little from containers.
Going back to the resource question - I know Scott and others have talked about how lean Linux is. You probably only need 512 megs for the OS, then whatever your app needs. The Linux portion of those VMs should be pretty lean.
-
@Dashrender said:
Because of all the hype, I too looked into containers recently. Then Scott enlightened me by telling me it's mainly for DevOps - considering I don't work in that area, I didn't really have any clue what that was, so he told me about containers in another way.
Containers are really only useful if you need many identical apps running. If you have one app here and one app there, you'll gain very little from containers.
Going back to the resource question - I know Scott and others have talked about how lean Linux is. You probably only need 512 megs for the OS, then whatever your app needs. The Linux portion of those VMs should be pretty lean.
512 MB for the OS? That seems like a lot; many *nix distributions can run at under 100 MB, and we have a few that run at ~20MB with apps running.
-
@coliver said:
@Dashrender said:
Because of all the hype, I too looked into containers recently. Then Scott enlightened me by telling me it's mainly for DevOps - considering I don't work in that area, I didn't really have any clue what that was, so he told me about containers in another way.
Containers are really only useful if you need many identical apps running. If you have one app here and one app there, you'll gain very little from containers.
Going back to the resource question - I know Scott and others have talked about how lean Linux is. You probably only need 512 megs for the OS, then whatever your app needs. The Linux portion of those VMs should be pretty lean.
512 MB for the OS? That seems like a lot; many *nix distributions can run at under 100 MB, and we have a few that run at ~20MB with apps running.
Maybe I'm overcautious.
-
So what would be an example that anyone can come up with (Docker folks) where you might need a bunch of duplicate programs running?
-
@DustinB3403 said:
So what would be an example that anyone can come up with (Docker folks) where you might need a bunch of duplicate programs running?
The one example that I saw was for code development and testing. I don't use Docker or containerization, so I'm not sure if that is correct or not.
-
You could use traditional containers (LXC, jails, zones) to do this. Each LXC container has a console and can be run like a VM.
-
@hobbit666 said:
OK, so I started to look into Docker, and I love the idea of having little "containers" running on a single machine while still being independent to a degree.
My thinking was to stop running 3-4 Linux machines that are using up resources on my ESXi server and run all 4 applications on a single Linux server. But is this the way to go, or should I stick with them on separate VMs but make the resources a bit leaner and only give them what they need?
Are you making Ansible or Chef recipes to handle all of this? Are you moving to DevOps? Unless those things are true, no, Docker won't make any sense for you. Containers do not really lighten the load on your hypervisor; that's not the reason for using them.
-
@Dashrender said:
Going back to the resource question - I know Scott and others have talked about how lean Linux is. You probably only need 512 megs for the OS, then whatever your app needs. The Linux portion of those VMs should be pretty lean.
We've actually got systems that we tuned down from 512MB to more like 380MB, as anything more was just wasted. We actually have one production server that is at 256MB with no issues. And a lot of users on it, too.
-
@coliver said:
512 MB for the OS? That seems like a lot; many *nix distributions can run at under 100 MB, and we have a few that run at ~20MB with apps running.
Especially if you go with FreeBSD. I've seen pretty heavily used systems at 80MB.
-
@scottalanmiller said:
@Dashrender said:
Going back to the resource question - I know Scott and others have talked about how lean Linux is. You probably only need 512 megs for the OS, then whatever your app needs. The Linux portion of those VMs should be pretty lean.
We've actually got systems that we tuned down from 512MB to more like 380MB, as anything more was just wasted. We actually have one production server that is at 256MB with no issues. And a lot of users on it, too.
This is what some of us have to wrap our heads around. Yes, I know Linux runs great in smaller amounts of RAM... but I was always of the mindset that More Is Better (tm). Especially if I am wanting to run hefty apps like Plex, or heavy-hitter apps like Zabbix or ownCloud...
-
@scottalanmiller said:
@Dashrender said:
Going back to the resource question - I know Scott and others have talked about how lean Linux is. You probably only need 512 megs for the OS, then whatever your app needs. The Linux portion of those VMs should be pretty lean.
We've actually got systems that we tuned down from 512MB to more like 380MB, as anything more was just wasted. We actually have one production server that is at 256MB with no issues. And a lot of users on it, too.
My jump box and ZeroTier controller are both at 256MB and haven't had any problems yet.
-
@DustinB3403 said:
So what would be an example that anyone can come up with (Docker folks) were you might need a bunch of duplicate programs running?
Well, the design mostly came about because of web applications. So let me present a generic example that is mirrored over and over again in the real world. Let's say... a custom web application (store, blog, whatever).
You have at least three tiers: a load balancing tier, an application tier, and a database tier.
First tier, let's say that runs HAProxy. You'll have three of these VMs or containers at least.
Second tier, let's say you are running a PHP application on Apache or Nginx.
Third tier, let's say you have a database on Redis. You'll need at least three of these.
Then, on a fourth tier, you'll want at least three Redis Sentinels to handle monitoring.
Each layer gets several identical VMs or containers as a starting point and potentially dozens or even hundreds as the site gets busy.
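Just as a rough sketch of that shape, here is how you might stamp out the identical containers per tier with the Docker SDK for Python (docker-py). The image tags and names are only illustrative, and the HAProxy/Sentinel configuration files are left out entirely:
```python
# Rough sketch only: spins up three identical containers per tier on one
# bridge network. Real HAProxy and Sentinel config files are omitted.
import docker

client = docker.from_env()
client.networks.create("shop_net", driver="bridge")

tiers = {
    "lb": "haproxy:2.8",      # load balancing tier
    "app": "nginx:1.25",      # application tier (the PHP app would sit here)
    "db": "redis:7",          # database tier
    "sentinel": "redis:7",    # Redis Sentinels for monitoring/failover
}

for tier, image in tiers.items():
    for i in range(1, 4):     # at least three of each; scale out as the site gets busy
        client.containers.run(
            image,
            name=f"{tier}{i}",
            detach=True,
            network="shop_net",
        )
```
The point isn't the exact tooling, it's that every tier is a set of interchangeable copies, which is exactly what containers (or small identical VMs) are good at.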
-
@dafyre said:
@scottalanmiller said:
@Dashrender said:
Going back to the resource question - I know Scott and others have talked about how lean Linux is. You probably only need 512 megs for the OS, then whatever your app needs. The Linux portion of those VMs should be pretty lean.
We've actually got systems that we tuned down from 512MB to more like 380MB, as anything more was just wasted. We actually have one production server that is at 256MB with no issues. And a lot of users on it, too.
This is what some of us have to wrap our heads around. Yes, I know Linux runs great in smaller amounts of RAM... but I was always of the mindset that More Is Better (tm). Especially if I am wanting to run hefty apps like Plex, or heavy-hitter apps like Zabbix or ownCloud...
The "amount needed" is always the best amount. Too little is bad, and too much is, too. I've had financial trading applications noticeably slowed down due to having too much unused memory on the system.
-
@johnhooks said:
You could use traditional containers (LXC, jails, zones) to do this. Each LXC container has a console and can be run like a VM.
So much so that we still call them VMs.
-
@scottalanmiller said:
@dafyre said:
@scottalanmiller said:
@Dashrender said:
Going back to the resource question - I know Scott and others have talked about how lean Linux is. You probably only need 512 megs for the OS, then whatever your app needs. The Linux portion of those VMs should be pretty lean.
We've actually got systems that we tuned down from 512MB to more like 380MB, as anything more was just wasted. We actually have one production server that is at 256MB with no issues. And a lot of users on it, too.
This is what some of us have to wrap our heads around. Yes, I know Linux runs great in smaller amounts of RAM... but I was always of the mindset that More Is Better (tm). Especially if I am wanting to run hefty apps like Plex, or heavy-hitter apps like Zabbix or ownCloud...
The "amount needed" is always the best amount. Too little is bad, and too much is, too. I've had financial trading applications noticeably slowed down due to having too much unused memory on the system.
Very few applications care about too much. Really only when you are into real-time processing and such does that play into it.
-
@JaredBusch said:
Very few applications care about too much. Really only when you are into real-time processing and such does that play into it.
The latency is still there, just not noticeable. I didn't mean to imply that you'd notice or that the world would end, only that once you get to the "right" amount you stop moving forward in performance and may actually start creeping backwards. Having too little is a BIG deal, so err on the side of too much, of course. But don't err on the side of double; it's just wasteful at best.
-
@scottalanmiller said:
Having too little is a BIG deal, so err on the side of too much, of course. But don't err on the side of double; it's just wasteful at best.
This is why I like Dynamic Memory (in Hyper-V... not sure what VMware calls this)... Tell the system it can boot with 256 megs of RAM and use up to 1 gig... if it never needs more than 256, ideally, it won't ask for more.
-
@dafyre said:
@scottalanmiller said:
Having too little is a BIG deal, so err on the side of too much, of course. But don't err on the side of double; it's just wasteful at best.
This is why I like Dynamic Memory (in Hyper-V... not sure what VMware calls this)... Tell the system it can boot with 256 megs of RAM and use up to 1 gig... if it never needs more than 256, ideally, it won't ask for more.
You can do the same with KVM.
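KVM handles this with the virtio balloon driver; a minimal manual sketch with the libvirt Python bindings looks roughly like this (the domain name "jumpbox" is just a placeholder):
```python
# Rough sketch with libvirt-python: cap the guest at 1 GiB but keep the
# live allocation at 256 MiB; the balloon driver hands unused memory back.
# "jumpbox" is just a placeholder domain name.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("jumpbox")

dom.setMaxMemory(1024 * 1024)  # ceiling, in KiB (normally set while the VM is off)
dom.setMemoryFlags(256 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)  # current allocation

conn.close()
```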
-
@johnhooks said:
@dafyre said:
@scottalanmiller said:
Having too little is a BIG deal, so err on the side of too much, of course. But don't err on the side of double; it's just wasteful at best.
This is why I like Dynamic Memory (in Hyper-V... not sure what VMware calls this)... Tell the system it can boot with 256 megs of RAM and use up to 1 gig... if it never needs more than 256, ideally, it won't ask for more.
You can do the same with KVM.
I knew it was available on other platforms; however, my experience (at the moment) is limited to only two of them.
Edit: This is good to know about KVM. I'll soon have my desktop freed up at home.