Patch Fast
-
Six years ago I wrote an article on the importance of, and approach to, patching in small environments. Back then, the threats of ransomware and zero-day attacks were much, much smaller than they are today. The events of the last week have raised the stakes considerably, and fast patching is more important than ever.
Six years ago it was already considered critical to get systems patched quickly to avoid security exposure. If that concern was merely on the radar then, today it is enormous. The world has changed: high-speed breach risks are larger than they have ever been, and we now know far more about what kinds of attacks are out there and how effective they can be.
Fear, especially fear of our chosen vendor partners, often leads to patch avoidance - a dangerous reaction. There is a natural tendency to fear the patching process more than we fear ignoring patches, because patching means pulling the trigger on the event that might create a problem - much like the emotional reaction we have to rebooting servers that have been running for months or years. Humans like to ignore risk while things feel like they are "running fine", but that just cranks up the actual risk for when something finally does break.
The faster we patch - which implies higher frequency with smaller changes - and the more often we reboot, the smaller the potential impact and the easier any problem is to fix. Patching and reboots must happen eventually. The longer we avoid them, the scarier they seem, because they genuinely become scarier.
We often feel that through testing we can verify that patches will be safe to apply. Or we hope that this sounds reasonable to management - I mean really, who doesn't like testing? But it is rather like being on the Starship Enterprise with phasers being fired at you right now, and Commander Data has proposed that by modulating the shields you might be able to block the phaser attack. Do you "just do it"? Or do you commission a study that will take a few days or weeks and might be pushed aside when something else pressing comes along? Of course you modulate the shields that very second, because every second the threat of destruction is very real and the shield change is your hope of deflecting it for the moment. Patches are much like that: they might be a mistake, but the bigger mistake is delaying them.
In a perfect world, of course, we would test patches in our environment. We would have teams of testers working around the clock, testing every patch the instant it arrives in giant environments that exactly mirror production. And in the biggest, most serious IT shops, this is exactly what they do. But short of that level of investment, the risk of holding patches back while we test them ourselves is just too high.
In today's world we can snapshot and roll back patches so easily that the threats from bad patches are normally trivial. And it is not like the vendors have not already tested the patches. These are not beta releases; they have already been tested in environments much larger and more demanding than our own.
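To make that concrete, the whole safety net can be a few lines of scripting. A rough sketch of the pattern on a KVM/libvirt host might look like the following - the guest name, SSH access, and use of dnf are placeholders for whatever your environment actually uses:

```python
#!/usr/bin/env python3
# Sketch only: snapshot a VM, patch it, revert if the patch run fails.
# Assumes a KVM/libvirt host, a qcow2-backed guest called "web01" reachable
# over SSH with key auth, and dnf inside the guest - all illustrative names.
import subprocess
import sys

VM = "web01"        # hypothetical guest
SNAP = "pre-patch"  # snapshot label

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode

# 1. Take a snapshot before touching anything.
if run(["virsh", "snapshot-create-as", VM, SNAP, "before patching"]) != 0:
    sys.exit("snapshot failed, refusing to patch")

# 2. Apply patches inside the guest.
patched_ok = run(["ssh", f"root@{VM}", "dnf -y upgrade"]) == 0

# 3. Roll back on failure, or drop the snapshot once we are happy.
if not patched_ok:
    run(["virsh", "snapshot-revert", VM, SNAP])
    sys.exit("patching failed, reverted to the pre-patch snapshot")
run(["virsh", "snapshot-delete", VM, SNAP])
print("patched cleanly, snapshot removed")
```

The same three steps - snapshot, patch, revert or delete - apply whatever the hypervisor is; only the tooling changes.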
At some point we simply must accept the reality that we have to depend on and trust our vendors more than we distrust them. The emotional response of simply waving off security patches because "the vendor gets it wrong too often" isn't reasonable - and if it were, it should make us question why we depend on a vendor we trust so little. That is a situation that must be fixed.
But we depend on our vendors for security fixes; we have to. If we don't work with them as a team, then divided we fall. Malware vendors prey on businesses that don't trust their own vendors, and they have become very successful at it.
-
@scottalanmiller said in Patch Fast:
In today's world we can snapshot and roll back patches so easily that the threats from bad patches are normally trivial. And it is not like the vendors have not already tested the patches. These are not beta releases; they have already been tested in environments much larger and more demanding than our own.
And yet still, mistakes can happen. Two of our vendors here have had to recall patches because they caused more problems than they fixed (can't fuss at Microsoft... this time)... and they were released for days before the recalls happened.
But as you say, this is the reason we should have snapshots and backups to recover from said mistakes and bad patches. There is no real reason for businesses of any size not to be able to back up (at bare minimum) and/or snapshot their systems before running patches.
-
@dafyre said in Patch Fast:
@scottalanmiller said in Patch Fast:
In today's world we can snapshot and roll back patches so easily that the threats from bad patches are normally trivial. And it is not like the vendors have not already tested the patches. These are not beta releases; they have already been tested in environments much larger and more demanding than our own.
And yet still, mistakes can happen. Two of our vendors here have had to recall patches because they caused more problems than they fixed (can't fuss at Microsoft... this time)... and they were released for days before the recalls happened.
But as you say, this is the reason we should have snapshots and backups to recover from said mistakes and bad patches. There is no real reason for businesses of any size not to be able to back up (at bare minimum) and/or snapshot their systems before running patches.
Yes, exactly - the days of painful patching are behind us. Patching always carries risk, but it is planned risk with good mitigation. The risks of not patching, meanwhile, continue to grow at quite a pace.
-
@dafyre said in Patch Fast:
There is no real reason for businesses of any size not to be able to back up (at bare minimum) and/or snapshot their systems before running patches.
Who here snapshots their systems before patching their Microsoft servers? Scott says it's so easy to snapshot and roll back, so perhaps I'm missing a trick here? I can see that it's easy if you're manually installing patches, but who does that?
The other problem is that you may not realise that a patch has broken something for a couple of days, and by then it's likely to be too late to satisfactorily restore from backup.
-
@Carnival-Boy said in Patch Fast:
@dafyre said in Patch Fast:
There is no real reason for businesses of any size not to be able to back up (at bare minimum) and/or snapshot their systems before running patches.
Who here snapshots their systems before patching their Microsoft servers? Scott says it's so easy to snapshot and roll back, so perhaps I'm missing a trick here? I can see that it's easy if you're manually installing patches, but who does that?
The other problem is that you may not realise that a patch has broken something for a couple of days, and by then it's likely to be too late to satisfactorily restore from backup.
We schedule our snapshots here (VMware) to run an hour before our patch time... and we do the patches manually.
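For anyone curious what a scheduled pre-patch snapshot like that can look like, here is a minimal pyVmomi sketch that could be run from cron an hour before the window - the vCenter name, service account, and VM list are made up for the example:

```python
#!/usr/bin/env python3
# Sketch: take a "pre-patch" snapshot of a list of VMs via vCenter.
# Hostname, credentials and VM names are placeholders, not a real environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

VCENTER = "vcenter.example.local"   # hypothetical vCenter
VMS_TO_SNAP = {"app01", "sql01"}    # hypothetical servers in tonight's patch group

ctx = ssl._create_unverified_context()  # lab shortcut; use proper certs in production
si = SmartConnect(host=VCENTER, user="svc-patching", pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name in VMS_TO_SNAP:
            # memory=False keeps the snapshot small; quiesce asks VMware Tools
            # to flush guest I/O before the snapshot is taken.
            task = vm.CreateSnapshot_Task(
                name="pre-patch",
                description="automatic snapshot before patch window",
                memory=False, quiesce=True)
            WaitForTask(task)
            print(f"snapshotted {vm.name}")
finally:
    Disconnect(si)
```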
-
Tell me more. How often do you patch? Does the same person do it? When do you do it, Sundays? How do you check that server applications aren't getting broken?
I need to get more organised and am looking for best practice.
-
@Carnival-Boy said in Patch Fast:
Tell me more. How often do you patch? Does the same person do it? When do you do it, Sundays? How do you check that server applications aren't getting broken?
I need to get more organised and am looking for best practice.
I don't know about "best practices" but what we do here...
Every SysAdmin has a list of systems they are responsible for, so the systems we are responsible for are also the ones we patch. We have a daily maintenance window from 6am to 7am for patches, software upgrades, and the like.
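One small trick for keeping automated jobs honest about a window like that is a guard in front of whatever applies the updates - a sketch only, with the 6-7am window and dnf as stand-ins for whatever you actually run:

```python
#!/usr/bin/env python3
# Sketch: refuse to patch outside the maintenance window.
# The 06:00-07:00 window and the dnf command are illustrative.
import datetime
import subprocess
import sys

now = datetime.datetime.now().time()
if not (datetime.time(6, 0) <= now < datetime.time(7, 0)):
    sys.exit("outside the maintenance window, not patching")

# Inside the window: hand off to the platform's updater.
subprocess.run(["dnf", "-y", "upgrade"], check=True)
```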
-
That's ok at a larger organisation, but trickier at a smaller one where there's only one or two IT staff, or they use an MSP. Having a maintenance window during the week is nice though.
-
@Carnival-Boy said in Patch Fast:
Tell me more. How often do you patch? Does the same person do it? When do you do it, Sundays? How do you check that server applications aren't getting broken?
I need to get more organised and am looking for best practice.
We patch every six hours with a randomizer to keep patching from pounding our WAN. So each server has a few hours of randomization but updates four times a day. We don't snapshot before patching because we run primarily Linux and the risk is effectively zero: patches are better tested, the patch footprint is smaller, the patching events are smaller (four times a day rather than once a week), and patch rollbacks are trivial.
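The randomizer part is simple enough to sketch for anyone who wants to copy the idea - Python and dnf below are stand-ins for whatever scheduler and package manager you actually use, kicked off by cron or a systemd timer four times a day:

```python
#!/usr/bin/env python3
# Sketch: sleep a random offset so servers sharing a WAN link do not all hit
# the repositories at once, then apply updates. The two-hour spread and the
# use of dnf are illustrative only.
import random
import subprocess
import time

MAX_DELAY = 2 * 60 * 60  # spread each run across up to two hours

delay = random.randint(0, MAX_DELAY)
print(f"sleeping {delay} seconds before patching")
time.sleep(delay)

# Apply all available updates non-interactively.
subprocess.run(["dnf", "-y", "upgrade"], check=True)
```

(dnf-automatic has a random_sleep option that does much the same thing, which is probably the easier place to start.)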
-
@Carnival-Boy said in Patch Fast:
That's ok at a larger organisation, but trickier at a smaller one where there's only one or two IT staff, or they use an MSP. Having a maintenance window during the week is nice though.
If you use an MSP it would be simple. Just tell your MSP what patch process you want.
-
@Carnival-Boy Patches are applied with yum-cron or dnf-automatic. Snapshots are taken before any system changes, and after testing is completed, but not before or after patching.
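For reference, a dnf-automatic setup along those lines only takes a few lines of config - the values below are illustrative, so check the file your distro actually ships:

```ini
# /etc/dnf/automatic.conf (illustrative values)
[commands]
upgrade_type = default      # or "security" to limit to security errata
random_sleep = 300          # stagger start times a little
download_updates = yes
apply_updates = yes

[emitters]
emit_via = motd             # or stdio / email, depending on how you want reports
```

Then it is just a matter of enabling the timer, something like systemctl enable --now dnf-automatic.timer (timer names vary a little between versions).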
-
-
Can't edit the last link due to wifi issues. But here is the real link...
http://www.sccmog.com/sccm-powercli-auto-snapshot-before-patching-task-sequence-script/
-
Never used this but take a look...