load balance your workloads for webserver
-
So right now I am running NGINX, MariaDB, Memcached, and some other stuff. My website runs pretty fast, but I know there is some tweaking to be done.
-
NGINX is more commonly used than anything else for front-end caching. This is what will handle any local static file serving needs that you might have. Hopefully you will also have a good CDN layer to offload as much as possible. Your NGINX proxy layer is basically your own CDN, keeping that load from hitting the PHP application servers that are busy generating WP pages.
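To make that concrete, a caching proxy setup might look something like the sketch below. The hostnames, paths, IPs, and cache sizes are just placeholders for illustration, not a recommendation for your exact environment.

    # Sketch only: goes in the http {} context of nginx.conf.
    # Defines a disk cache and two hypothetical WordPress app servers behind it.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:100m
                     max_size=2g inactive=60m;

    upstream wp_app {
        server 10.0.1.21:80;
        server 10.0.1.22:80;
    }

    server {
        listen 80;
        server_name example.com;

        # Static assets served straight from disk (or offloaded to the CDN).
        location ~* \.(css|js|png|jpg|gif|svg|woff2?)$ {
            root /var/www/static;
            expires 7d;
        }

        # Everything else is proxied to the PHP app servers and cached.
        location / {
            proxy_cache wpcache;
            proxy_cache_valid 200 10m;
            proxy_set_header Host $host;
            proxy_pass http://wp_app;
        }
    }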
-
Then, out front, you need load balancing. If you are on Amazon, they will normally use F5 BIG-IP gear for that; that's what the biggest shops use. Everyone else uses HAProxy. So assume an entire layer of redundant HAProxy hosts.
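A minimal HAProxy config for that layer could look roughly like this. The backend names and IPs are made up for illustration and assume two NGINX caching hosts sitting behind it.

    # Sketch only: one HAProxy node balancing across the NGINX proxy layer.
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend http_in
        bind *:80
        default_backend nginx_proxies

    backend nginx_proxies
        balance roundrobin
        option httpchk GET /
        server nginx1 10.0.0.11:80 check
        server nginx2 10.0.0.12:80 check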
-
OK. I want to build a template and then spin these machines up whenever someone orders new WordPress hosting.
-
Then, finally, at the DNS layer, you have a DNS host that will round robin amongst your HAProxy hosts to keep any single load balancer from being overwhelmed by the load. Basically, load balancing for your load balancers.
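In zone-file terms that just means multiple A records for the same name, one per HAProxy host. The name and addresses below are made up for illustration:

    ; Sketch of DNS round robin: clients get the records in rotating order,
    ; spreading traffic across the HAProxy layer.
    www.example.com.    300  IN  A  203.0.113.10
    www.example.com.    300  IN  A  203.0.113.11
    www.example.com.    300  IN  A  203.0.113.12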
-
@matthewaroth35 said in load balance your workloads for webserver:
So right now I am running NGINX, MariaDB, Memcached, and some other stuff. My website runs pretty fast, but I know there is some tweaking to be done.
How many VMs are you running? A minimal full install is normally fifteen VMs, three for each layer and then you scale up from there.
-
That doesn't include the NFS hosting if you need that. If you go that route, you'd be looking at another two VMs for that. So seventeen to get started.
-
This is the kind of architecture that you use for large, fast, scalable hosting. You start small with just the seventeen, then monitor closely to see which layers are having issues. If the Apache / PHP layer is getting overwhelmed, then you add more nodes to that layer: if three isn't enough, you go up to four or five. You only grow the layers that are bottlenecking.
-
This is where it is really handy to be working with a tool set like Ansible, Chef, Puppet, or Salt, so that you have simple methods for rapidly building new hosts.
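As a rough idea of what that looks like in Ansible (the group name, package, and template path here are assumptions for illustration, not a finished playbook):

    # Sketch only: stamp out a new NGINX caching node from a template.
    - hosts: nginx_proxies
      become: yes
      tasks:
        - name: Install NGINX
          apt:
            name: nginx
            state: present
        - name: Deploy the caching proxy config
          template:
            src: templates/wp-proxy.conf.j2
            dest: /etc/nginx/conf.d/wp-proxy.conf
          notify: Reload nginx
      handlers:
        - name: Reload nginx
          service:
            name: nginx
            state: reloaded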
-
@matthewaroth35 said in load balance your workloads for webserver:
I allocated a quad core (4 vCPU) and 8GB RAM to the server. IOPS are through the roof at times.
How many instances like that do you have? Have you looked to see if that many cores make sense? It's not a crazy number, but I'd expect dual core web servers, normally.
You did not mention the storage that you are using. IOPS are probably from the database being hit. Is there a heavy write load? Is there enough cache? 8GB could be really tiny depending on how many sites you are trying to host there.
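On the "is there enough cache" question, the first thing to check on the MariaDB side is the InnoDB buffer pool. The value below is only an example; it has to fit inside the 8GB alongside PHP, Memcached, and everything else on that box.

    # Sketch only: MariaDB server config (e.g. in my.cnf).
    [mysqld]
    innodb_buffer_pool_size = 4G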
-
And, of course, a huge question is: do you expect to load balance the whole environment, or each site? A lot of big hosting providers run each site on a single node only, so you don't actually load balance the big layers, but rather have separate stacks, which is very different. Load balancing would be more common for a single high performance site than for a web host.