Wordpress on Vultr 768
-
@dafyre Exactly. Didn't want to waste any more time on it, especially since it was still in testing/setup stages.
-
@fuznutz04 said in Wordpress on Vultr 768:
@dafyre Exactly. Didn't want to waste any more time on it, especially since it was still in testing/setup stages.
Makes sense.
-
@scottalanmiller said in Wordpress on Vultr 768:
@thwr said in Wordpress on Vultr 768:
Most Wordpress sites only have like 128 MB, maybe 256 MB.
I doubt that most do, as it's been effectively impossible for many years to even get a VPS that small. Rackspace's minimum is 512MB, and DO/Vultr is like 768MB.
I wasn't sure what he got at that point. Wordpress runs "fine" on 128MB, but that does not take into account what the operating system, Apache/Nginx and MySQL need.
A VM with Wordpress and a full webserver/database server stack should probably have like 512 MB at least.
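For reference, the two knobs that usually matter on the PHP side live in php.ini and wp-config.php. A minimal sketch, assuming stock CentOS 7 paths (values are illustrative, not recommendations):

```
; /etc/php.ini - per-request memory cap for PHP itself
memory_limit = 128M
```

```
// wp-config.php - what WordPress requests for itself
define( 'WP_MEMORY_LIMIT', '128M' );      // front-end requests
define( 'WP_MAX_MEMORY_LIMIT', '256M' );  // admin screens
```

Note these only govern PHP. Apache/Nginx and MySQL have their own budgets, which is exactly why the VM total needs to be well above 128 MB.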
-
@thwr said in Wordpress on Vultr 768:
I wasn't sure what he got at that point. Wordpress runs "fine" on 128MB, but that does not take into account what the operating system, Apache/Nginx and MySQL need.
It should run fine on 16MB, then.
-
@thwr said in Wordpress on Vultr 768:
A VM with Wordpress and a full webserver/database server stack should probably have like 512 MB at least.
For any real use, yeah. We have it working on 256MB, but it sucks.
-
@scottalanmiller said in Wordpress on Vultr 768:
@thwr said in Wordpress on Vultr 768:
A VM with Wordpress and a full webserver/database server stack should probably have like 512 MB at least.
For any real use, yeah. We have it working on 256MB, but it sucks.
Probably due to Wordpress. Someone once said: "That's the most frustrating piece of code I've ever seen". Don't have the link anymore...
-
@thwr said in Wordpress on Vultr 768:
@scottalanmiller said in Wordpress on Vultr 768:
@thwr said in Wordpress on Vultr 768:
A VM with Wordpress and a full webserver/database server stack should probably have like 512 MB at least.
For any real use, yeah. We have it working on 256MB, but it sucks.
Probably due to Wordpress. Someone once said: "That's the most frustrating piece of code I've ever seen". Don't have the link anymore...
No, it's because MariaDB and Apache like a bit of room to breathe. Then PHP needs some overhead, too.
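On that note, the usual suspects on a small box are Apache's prefork pool and InnoDB's buffer pool. A rough sketch of reining them in on CentOS 7 (file names and values are illustrative only):

```
# /etc/httpd/conf.d/mpm_tuning.conf - shrink the prefork pool;
# each worker with mod_php loaded can easily sit at 30-50 MB
StartServers       2
MinSpareServers    2
MaxSpareServers    4
MaxRequestWorkers  10
```

```
# /etc/my.cnf.d/low-memory.cnf - shrink MariaDB's appetite
[mysqld]
innodb_buffer_pool_size = 64M
key_buffer_size         = 8M
max_connections         = 30
```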
-
@scottalanmiller said in Wordpress on Vultr 768:
@thwr said in Wordpress on Vultr 768:
@scottalanmiller said in Wordpress on Vultr 768:
@thwr said in Wordpress on Vultr 768:
A VM with Wordpress and a full webserver/database server stack should probably have like 512 MB at least.
For any real use, yeah. We have it working on 256MB, but it sucks.
Probably due to Wordpress. Someone once said: "That's the most frustrating piece of code I've ever seen". Don't have the link anymore...
No, it's because MariaDB and Apache like a bit of room to breathe. Then PHP needs some overhead, too.
That was a joke...
-
So reinstalling CentOS 7, then installing the LAMP stack, followed by Wordpress, seems to have solved my issues. None of the problems I was having earlier, and I have much more available memory than I had previously. Strange problem.
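For anyone repeating this, a rough sketch of that sequence on CentOS 7 (stock package names and paths; your database setup will differ):

```
# LAMP stack
yum -y install httpd mariadb-server php php-mysql
systemctl enable httpd mariadb
systemctl start httpd mariadb
mysql_secure_installation        # set root password, drop test DB

# WordPress into the default docroot
curl -O https://wordpress.org/latest.tar.gz
tar xzf latest.tar.gz -C /var/www/html --strip-components=1
chown -R apache:apache /var/www/html
# then create a database/user and run the web installer
```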
-
@fuznutz04 said in Wordpress on Vultr 768:
So reinstalling CentOS 7, then installing the LAMP stack, followed by Wordpress, seems to have solved my issues. None of the problems I was having earlier, and I have much more available memory than I had previously. Strange problem.
Too bad we didn't have more time to delve into it.
-
Don't forget to set up your update and reboot schedule now.
-
@scottalanmiller said in Wordpress on Vultr 768:
Don't forget to set up your update and reboot schedule now.
Would love to hear a best practice / how-to for this. Any suggestions?
-
@fuznutz04 said in Wordpress on Vultr 768:
@scottalanmiller said in Wordpress on Vultr 768:
Don't forget to set up your update and reboot schedule now.
Would love to hear a best practice / how-to for this. Any suggestions?
There isn't a strict best practice, or if there is it is really complex to state. But as a "guideline" you want reboots rather often. Weekly is generally best, monthly at the longest.
What NTG does, which I think is pretty good, is that we use constant updates via the yum-cron tool or similar. This does a random patching cycle several times per day, which helps to keep load to a minimum at any given time. We run 24x7, so this is great for us. If you run 8-5, for example, you might want to schedule a known patch time at 6:37 daily (avoid hard five- and ten-minute intervals, especially quarter, half and full hours).
If your server is truly idle daily, reboot daily! Why not. Most shops have a good window each week for a reboot. NTG does late Friday night for some workloads, and early Sunday morning for others (specifically phone and monitoring). We are strategic about when different workloads will be in use. For example, ScreenConnect might remain busy on Friday evening, so we don't reboot it then, but rather during a meeting or something.
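A sketch of both approaches on CentOS 7, for the record (the 6:37 time is just the example above; adjust to your own window):

```
# Option 1: constant background updates via yum-cron
yum -y install yum-cron
# in /etc/yum/yum-cron.conf, set:  apply_updates = yes
systemctl enable yum-cron
systemctl start yum-cron

# Option 2: a fixed off-peak patch time, e.g. /etc/cron.d/patch
37 6 * * * root /usr/bin/yum -y update >> /var/log/patch.log 2>&1
```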
-
@scottalanmiller said in Wordpress on Vultr 768:
@fuznutz04 said in Wordpress on Vultr 768:
@scottalanmiller said in Wordpress on Vultr 768:
Don't forget to set up your update and reboot schedule now.
Would love to hear a best practice / how-to for this. Any suggestions?
There isn't a strict best practice, or if there is it is really complex to state. But as a "guideline" you want reboots rather often. Weekly is generally best, monthly at the longest.
What NTG does, which I think is pretty good, is that we use constant updates via the yum-cron tool or similar. This does a random patching cycle several times per day, which helps to keep load to a minimum at any given time. We run 24x7, so this is great for us. If you run 8-5, for example, you might want to schedule a known patch time at 6:37 daily (avoid hard five- and ten-minute intervals, especially quarter, half and full hours).
If your server is truly idle daily, reboot daily! Why not. Most shops have a good window each week for a reboot. NTG does late Friday night for some workloads, and early Sunday morning for others (specifically phone and monitoring). We are strategic about when different workloads will be in use. For example, ScreenConnect might remain busy on Friday evening, so we don't reboot it then, but rather during a meeting or something.
Great info, thanks. I have a few boxes serving multiple functions such as web, logging, etc., and a good number running a PBX OS of some sort. Right now, I manually restart everything, including the PBXs, because the number of servers is small, but I fully realize that this won't scale properly. Do you follow the same schedule for PBX distros? Do you schedule cron locally on each server, or from a central point using a tool?
-
@fuznutz04 said in Wordpress on Vultr 768:
@scottalanmiller said in Wordpress on Vultr 768:
@fuznutz04 said in Wordpress on Vultr 768:
@scottalanmiller said in Wordpress on Vultr 768:
Don't forget to set up your update and reboot schedule now.
Would love to hear a best practice / how-to for this. Any suggestions?
There isn't a strict best practice, or if there is it is really complex to state. But as a "guideline" you want reboots rather often. Weekly is generally best, monthly at the longest.
What NTG does, which I think is pretty good, is that we use constant updates via the yum-cron tool or similar. This does a random patching cycle several times per day, which helps to keep load to a minimum at any given time. We run 24x7, so this is great for us. If you run 8-5, for example, you might want to schedule a known patch time at 6:37 daily (avoid hard five- and ten-minute intervals, especially quarter, half and full hours).
If your server is truly idle daily, reboot daily! Why not. Most shops have a good window each week for a reboot. NTG does late Friday night for some workloads, and early Sunday morning for others (specifically phone and monitoring). We are strategic about when different workloads will be in use. For example, ScreenConnect might remain busy on Friday evening, so we don't reboot it then, but rather during a meeting or something.
Great info, thanks. I have a few boxes serving multiple functions such as web, logging, etc., and a good number running a PBX OS of some sort. Right now, I manually restart everything, including the PBXs, because the number of servers is small, but I fully realize that this won't scale properly. Do you follow the same schedule for PBX distros? Do you schedule cron locally on each server, or from a central point using a tool?
I've done both; different strokes for different folks (or situations). At NTG we use local cron. I like local cron whenever possible because it has the benefits of essentially zero overhead and dependencies, and in those cases where other things, like networking, fail, it will go ahead and reboot anyway, potentially fixing itself without intervention!
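For the local cron flavor, a one-liner in /etc/cron.d is all it takes. A sketch (the Friday 23:45 window is just an example):

```
# /etc/cron.d/weekly-reboot
45 23 * * 5 root /sbin/shutdown -r now "scheduled weekly reboot"
```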
-
@scottalanmiller Yes! The magic of reboot!
-
Good reading, as well: http://www.smbitjournal.com/2011/02/why-we-reboot-servers/
-
@scottalanmiller said in Wordpress on Vultr 768:
Read that one already. Couldn't agree more. More than once I've heard stories of new IT guys going into a new role, not rebooting a server, and finding out months down the road, after an extended power outage, that the server will not come back online.
-
@fuznutz04 said in Wordpress on Vultr 768:
@scottalanmiller said in Wordpress on Vultr 768:
Read that one already. Couldn't agree more. More than once I've heard stories of new IT guys going into a new role, not rebooting a server, and finding out months down the road, after an extended power outage, that the server will not come back online.
That's what killed IBM in the big Australian voting project a few weeks ago. Zero reboots... ever.
-
@scottalanmiller said in Wordpress on Vultr 768:
Good reading, as well: http://www.smbitjournal.com/2011/02/why-we-reboot-servers/
I used to work for a business that hosted a CMS as well as hundreds of websites and email hosting. Clients were all across the globe, so nightly restarts and weekend updates were out of the question. The best time to patch and reboot was Thursday nights, once a month. Having a web farm or database farm would have reduced overall downtime, but in the end it wasn't too bad; a couple minutes of downtime per server at most, once a month, isn't bad in my opinion. Host patching was done less frequently, but that involved zero downtime, as all VMs were migrated to other nodes during the update.