Linux Project
-
@Aaron-Studer said:
I see. Updates to IP addresses in CloudFlare are very fast, so changing the IP should work with minimal downtime. At least I would have options, and not be completely screwed.
Yes, that's really best. Multi-site failover is a tough thing. You could, in theory, build a system that automatically updated DNS when a failure was detected, but the problems that this would cause are immense; there is a reason that no one does this.
-
@scottalanmiller This is what I was thinking of, I think:
http://www.gearbytes.com/2010/11/configuring-dns-round-robin-in-windows-dns-for-load-balancing/
-
With Active-Passive I've only done it manually. For automated failover you'd usually have Active/Active with lots of nodes, both for failover and load balancing. Unless it's an e-commerce or other money-generating site, it's not worth it.
-
P.S. Now I feel stupid.
-
@Aaron-Studer said:
@scottalanmiller This is what I was thinking of, I think:
http://www.gearbytes.com/2010/11/configuring-dns-round-robin-in-windows-dns-for-load-balancing/
Sure, you can do round robin with "any" DNS system. But that would just hose your site, as you'd have competing database masters. You'd need a multi-master setup to make that work, and it would not help with outages; it would keep sending half of your traffic to the failed site even after it had failed.
-
@Aaron-Studer said:
@scottalanmiller This is what I was thinking of, I think:
http://www.gearbytes.com/2010/11/configuring-dns-round-robin-in-windows-dns-for-load-balancing/
There are DNS services that will do round robin, detect when a host is down, and remove it. The problem is that the TTL of the entries isn't always honored, and beyond that you have no way of making sure traffic goes to your primary host (nslookup doesn't care about the order you put the records in).
-
I think that you are overthinking this. Keep things pretty simple. Having two servers and manually failing over from your DNS provider is still far more failover than most sites have.
-
@Aaron-Studer, I've been wanting to do this as well. However, I've moved off AWS so I don't think I'll need to anymore...
-
In short: rsync for the WordPress files, plus automated exports and imports of the MySQL databases.
-
I have keys set up between my servers so I can SSH without a password, so if you get that set up, these are my scripts...
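If you haven't done passwordless SSH before, a minimal sketch of the setup (user and host names here are placeholders, not taken from this thread; an empty passphrase is assumed so cron can run unattended):

```
# On the cloud server: generate a key pair with no passphrase
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Copy the public key to the local server (replace user/host with your own)
ssh-copy-id user@local-server.example.com

# Verify: this should log in and exit without prompting for a password
ssh user@local-server.example.com true
```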
**On the cloud server:**

```
#!/bin/sh
cd /var/www/databases
mysqldump -u root --password=mypassword thanksaj > thanksaj.sql
mysqldump -u root --password=mypassword literaryworksbyaj > literaryworksbyaj.sql
mysqldump -u root --password=mypassword builtbyart > builtbyart.sql
rsync -chavzP --stats /var/www/* [email protected]:/var/www/
```
**On the local server:**
```
#!/bin/sh
cd /var/www/databases/
mysql -u root --password=mypassword thanksaj < thanksaj.sql
mysql -u root --password=mypassword literaryworksbyaj < literaryworksbyaj.sql
mysql -u root --password=mypassword builtbyart < builtbyart.sql
```

So I export the MySQL databases to /var/www/databases as .sql files and then rsync them to the local server. The local server imports said files into its local databases. I use cron to schedule all this.
**Local server:**

```
0 6,18 * * * /home/aj/scripts/aj-import-wordpress-dbs >> /srv/samba/share/import_wordpress_dbs.log 2>&1
```

**Cloud server:**

```
0 5,17 * * * /home/user/scripts/aj-sync-wordpress >> /var/log/aj-logs/sync_wordpress.log 2>&1
```

Then I use Unitrends to back up the local server.
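One thing worth adding to the export script above is basic error handling, so a failed dump doesn't get rsynced over a good copy on the other side. A sketch under that assumption (paths and database names are the ones from my scripts; the `set -eu` abort-on-failure behavior is the only addition):

```
#!/bin/sh
# Exit on the first failed command, and treat unset variables as errors
set -eu

cd /var/www/databases

# Dump each database; with set -e, a failed mysqldump aborts the script
# before the rsync below can push a broken dump to the other server
for db in thanksaj literaryworksbyaj builtbyart; do
    mysqldump -u root --password=mypassword "$db" > "$db.sql"
done

# Only reached if every dump succeeded
rsync -chavzP --stats /var/www/* [email protected]:/var/www/
```

Note that a failed dump can still truncate the local .sql file, but because the script exits before rsync runs, the copy on the receiving server stays intact.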
We'll see how this works!
Thanks,
A.J.