Patching systems - how should you do this?
-
@scottalanmiller said in Patching systems - how should you do this?:
@stacksofplates said in Patching systems - how should you do this?:
@scottalanmiller said in Patching systems - how should you do this?:
@Dashrender said in Patching systems - how should you do this?:
@scottalanmiller said in Patching systems - how should you do this?:
@Dashrender said in Patching systems - how should you do this?:
Does this then imply that you either
1) have to have shared storage to live-migrate VMs between hosts for patches, or
2) expect downtime on VMs while a host is updated?
or 3) have an HA application that doesn't have a dependency at that level, like an AD DC.
Awesome - exactly what I was looking for.
With web servers, for example, this would be behind the load balancing layer. Just remove a server from the LB, patch it, and add it back in.
Or don't patch and just spin up a new one with the data store somewhere else and kill the old one.
Yes, this is the DevOps model for this.
So say you have 5 web servers running. Would you mount the data store from an NFS export or would you run something like Gluster, GFS2, etc across each physical server that the web servers are on?
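To make the drain-patch-re-enable cycle above concrete, here is a minimal sketch assuming HAProxy with its admin socket enabled, a backend named be_web, and SSH access to each node; all host and backend names are illustrative, not from the thread.

```
#!/bin/bash
# Rolling patch of web servers behind HAProxy, one node at a time.
# Assumes the stats socket is configured with "level admin" and that
# the backend/server names below match your HAProxy config.
set -euo pipefail

SOCK=/var/run/haproxy.sock
BACKEND=be_web

for node in web1 web2 web3 web4 web5; do
    # Stop sending new connections to this node.
    echo "disable server ${BACKEND}/${node}" | socat stdio "$SOCK"

    # Let in-flight requests drain, then patch and reboot.
    sleep 30
    ssh "$node" 'sudo yum -y update && sudo systemctl reboot' || true

    # Give it time to go down, then wait for it to come back
    # before re-enabling it in the load balancer.
    sleep 20
    until ssh -o ConnectTimeout=5 "$node" true 2>/dev/null; do sleep 10; done
    echo "enable server ${BACKEND}/${node}" | socat stdio "$SOCK"
done
```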
-
@stacksofplates said in Patching systems - how should you do this?:
So say you have 5 web servers running. Would you mount the data store from an NFS export or would you run something like Gluster, GFS2, etc across each physical server that the web servers are on?
That would depend on your goals. Personally, I'd use a Gluster setup, or rsync between the servers...
Without Gluster or rsync, you're still dead in the water when your NFS server reboots for updates.
With rsync, you run into issues if your website(s) support file uploads.
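For the rsync option, a minimal sketch of a one-way push from a designated primary node; the paths and hostnames are illustrative. The comment shows why file uploads are the weak point.

```
#!/bin/bash
# Push the docroot from a "primary" web node to its peers with rsync.
set -euo pipefail

SRC=/var/www/html/

for peer in web2 web3 web4 web5; do
    # --delete makes the peers exact mirrors of the primary. That is
    # also the file-upload problem: anything a user uploaded directly
    # to a peer gets removed on the next push.
    rsync -az --delete "$SRC" "${peer}:${SRC}"
done
```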
-
@dafyre said in Patching systems - how should you do this?:
That would depend on your goals. Personally, I'd use a Gluster setup, or rsync between the servers...
Without Gluster or rsync, you're still dead in the water when your NFS server reboots for updates.
With rsync, you run into issues if your website(s) support file uploads.
Well I meant an NFS export from some clustered system. Like a Gluster cluster or an Isilon.
-
@stacksofplates said in Patching systems - how should you do this?:
Well I meant an NFS export from some clustered system. Like a Gluster cluster or an Isilon.
Okay, yeah. In that case, if your NFS server is redundant or fault tolerant (whatever you want to call it), then you're in good shape.
@scottalanmiller -- how would you build a fault tolerant NFS server for something like this? Two Linux systems + DRBD?
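On the two-Linux-systems-plus-DRBD question, a rough sketch of the Pacemaker side using pcs; the resource names, IP, and mount point are illustrative, and the DRBD resource r0 is assumed to already be configured and synced. Treat the exact agent parameters as a starting point, not gospel.

```
# Two-node HA NFS sketch: DRBD replicates the disk, Pacemaker decides
# which node is Primary and owns the floating IP that clients mount.

# Promotable DRBD resource (one Primary, one Secondary).
pcs resource create drbd_nfs ocf:linbit:drbd drbd_resource=r0 \
    promotable promoted-max=1 clone-max=2 notify=true

# Filesystem on the DRBD device, the NFS daemon, and a floating IP.
pcs resource create fs_nfs ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/srv/nfs fstype=xfs
pcs resource create nfsd ocf:heartbeat:nfsserver \
    nfs_shared_infodir=/srv/nfs/nfsinfo
pcs resource create nfs_vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.50 cidr_netmask=24

# Group them, pin the group to whichever node DRBD promotes, and
# order the promotion before the group starts.
pcs resource group add g_nfs fs_nfs nfsd nfs_vip
pcs constraint colocation add g_nfs with Promoted drbd_nfs-clone INFINITY
pcs constraint order promote drbd_nfs-clone then start g_nfs
```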
-
@dafyre said in Patching systems - how should you do this?:
Okay, yeah. In that case, if your NFS server is redundant or fault tolerant (whatever you want to call it), then you're in good shape.
@scottalanmiller -- how would you build a fault tolerant NFS server for something like this? Two Linux systems + DRBD?
I would probably use file-based rather than block-based replication. With Gluster or GFS2 you can have three nodes, so if you take one physical machine down for an update you don't have to worry about one of the others going down.
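A sketch of the three-node replicated Gluster volume described here; the hostnames and brick paths are made up.

```
# Run from gfs1, after the gluster daemons are up on all three nodes.
gluster peer probe gfs2
gluster peer probe gfs3

# replica 3: every file exists on all three nodes, so one node can be
# patched and rebooted without the volume losing quorum.
gluster volume create webdata replica 3 \
    gfs1:/bricks/webdata gfs2:/bricks/webdata gfs3:/bricks/webdata
gluster volume start webdata

# On each web server: the native FUSE client learns all the bricks at
# mount time, so it rides through a single Gluster node rebooting.
mount -t glusterfs gfs1:/webdata /var/www/html
```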
-
@stacksofplates said in Patching systems - how should you do this?:
I would probably use file-based rather than block-based replication. With Gluster or GFS2 you can have three nodes, so if you take one physical machine down for an update you don't have to worry about one of the others going down.
Makes sense.
-
@stacksofplates said in Patching systems - how should you do this?:
So say you have 5 web servers running. Would you mount the data store from an NFS export or would you run something like Gluster, GFS2, etc across each physical server that the web servers are on?
It depends. In a lot of cases you would deploy a local image via Ansible or Chef and have it deployed to each node at build time. If you use NFS or something like it, you introduce a new dependency.
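A bash stand-in for the build-time deploy idea, illustrating the point rather than any particular Ansible or Chef run; the artifact and host names are made up.

```
#!/bin/bash
# Bake the site content onto each node at deploy time instead of
# mounting shared storage; no NFS/Gluster dependency at runtime.
set -euo pipefail

ARTIFACT=site-release.tar.gz

for node in web1 web2 web3 web4 web5; do
    scp "$ARTIFACT" "${node}:/tmp/"
    ssh "$node" "sudo tar -xzf /tmp/${ARTIFACT} -C /var/www/html \
        && sudo systemctl reload nginx"
done
```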
-
@dafyre said in Patching systems - how should you do this?:
@scottalanmiller -- how would you build a fault tolerant NFS server for something like this? Two Linux systems + DRBD?
You could do a two-node setup this way. But for web servers with static files, why not just keep the files local to increase speed, simplify things, and reduce complexity?
-
@scottalanmiller said in Patching systems - how should you do this?:
You could do a two-node setup this way. But for web servers with static files, why not just keep the files local to increase speed, simplify things, and reduce complexity?
For systems that are static, sure. But what about something like WordPress, where files actually can be uploaded?
[I realize that may not be the world's greatest example, lol]
-
@dafyre said in Patching systems - how should you do this?:
For systems that are static, sure. But what about something like WordPress, where files actually can be uploaded?
[I realize that may not be the world's greatest example, lol]
You would store those centrally, but not the main files. Often you would have dedicated image storage in a case where you were going to scale the web tier out across multiple nodes, not storing or serving from the application server. So it's typically tackled in a completely different way, either through a CDN that you buy or one that you build yourself.
Just look at ML: getting images to the CDN is a top priority from the very beginning.
-
@scottalanmiller said in Patching systems - how should you do this?:
You would store those centrally, but not the main files. Often you would have dedicated image storage in a case where you were going to scale the web tier out across multiple nodes, not storing or serving from the application server. So it's typically tackled in a completely different way, either through a CDN that you buy or one that you build yourself.
Just look at ML: getting images to the CDN is a top priority from the very beginning.
I wasn't thinking about images, but I get the idea. I was thinking more along the lines of user-submitted uploads... but those could be sent into a database somewhere.
-
@dafyre said in Patching systems - how should you do this?:
I wasn't thinking about images, but I get the idea. I was thinking more along the lines of user-submitted uploads... but those could be sent into a database somewhere.
Those would be identical to images. Image, PDF, Word Doc... it's all the same to a CDN.
-
Common CDNs that you might look at for this kind of thing are Amazon S3, Rackspace Cloud Files, Backblaze B2, etc.
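As a closing sketch, pushing user uploads off the web nodes to one of those services, using Amazon S3 as the example; the bucket name and paths are illustrative, and B2 or Cloud Files work the same way through their own CLIs.

```
#!/bin/bash
# Mirror local uploads to object storage and serve them from there (or
# a CDN in front of it), so web nodes stay stateless and replaceable.
set -euo pipefail

UPLOADS=/var/www/html/wp-content/uploads
BUCKET=s3://example-site-uploads

aws s3 sync "$UPLOADS" "$BUCKET/uploads/"
```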