When is a Patch Not a Patch?
In my "day job" at Tenable, we think about vulnerability management a lot; it is what we do. We also think about patching and patch management a lot, even though that is not what we do. (I often wish companies that sell patching and patch management systems were similarly honest about their core competencies, but that's a rant for another day- it is not quite floor wax and dessert topping territory, but patch and vulnerability management are two related things I do not want coming out of a single can, no matter how shiny or tasty they claim to be).
Back to the topic, patching... and not patching. Patch Tuesday has driven many into a myopic patch mentality; sometimes that works well, sometimes it works well enough, and sometimes it leads to stupidity. (Tangent number two: I was always a fan of Shavlik. I don't know what VMware was thinking when they acquired and nearly ruined them, but thankfully Shavlik has survived, escaped, and will hopefully recover fully). But patching isn't always the answer; when a vulnerability is found there should be a logical process for dealing with it, and while "slap a patch on that bad boy" is often a great answer, and frequently the easiest answer, it is not the only answer.
Let's say you've found a vulnerability (or more likely thousands) in your environment, where do you start to deal with it? There are a handful of questions you need to answer before acting. In no particular order:
Is it real? I wrote a post on positives and negatives, true and false, some time ago- check out Are you Positive? for thoughts on the topic. The bottom line is that you need confidence in your findings. Acting on bad info is rarely a good idea unless you are a politician.
Are the "vulnerable" systems exposed? We don't always think about online "exposure" the way we should. We generally understand threats that come to us, whether in the form of physical threats to our homes and offices, or services exposed to the Internet. In the physical world, we generally only think of going to threatening places in "high-risk" environments, such as high-crime areas or potentially dangerous places such as mountain trails or beaches known for undertow. The problem with that is that the entire Internet is pretty sketchy, not just the "high-crime" areas. Legitimate sites are compromised, DNS is hijacked, bad things happen all over- so venturing out is always a little risky. Any system receiving email or accessing the Internet has some exposure. Where it gets trickier is with indirect exposures- systems which are exposed via pivot or relay. This often means systems which are not directly exposed to the Internet, but which are exposed to Internet-accessing systems. This sort of attack path analysis can be challenging, but it does add context to our efforts at understanding exposures and mitigating vulnerabilities. (Forgive me for not addressing air-gapped systems here, but you will note I am not addressing unicorns, either).
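To make the pivot/relay idea concrete, here is a minimal sketch of that kind of attack path analysis- a walk over a host-reachability graph starting from Internet-facing systems. The host names and the dictionary-of-lists structure are my own illustration, not the output or API of any real product:

```python
from collections import deque

def exposed_hosts(reachable, internet_facing):
    """Return every host exposed directly or via pivot/relay.

    reachable: dict mapping a host to the hosts it can reach.
    internet_facing: set of hosts with direct Internet exposure.
    """
    exposed = set(internet_facing)
    queue = deque(internet_facing)
    while queue:
        host = queue.popleft()
        for neighbor in reachable.get(host, []):
            if neighbor not in exposed:
                exposed.add(neighbor)   # reachable via pivot
                queue.append(neighbor)
    return exposed

# Toy network: the web server faces the Internet and can reach the
# app server, which can reach the database; HR is segmented away.
network = {
    "web": ["app"],
    "app": ["db"],
    "hr": [],
}
print(exposed_hosts(network, {"web"}))  # web, app, and db are exposed
```

Even on a toy graph like this, the point stands: "db" never touches the Internet directly, but it still has exposure that a purely perimeter-focused view would miss.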
Do we care?
Should we care?
If so, how much?
Do the vulnerabilities really expose anything important?
How much exposure are you comfortable with?
What risks are posed by potential exploit of the vulnerability?
What risks are posed by the patch or mitigation?
Does the cost of mitigating the vulnerability make sense? Spending a dollar to protect a dime is probably not the best use of limited resources.
Are there known exploits in the wild for the vulnerability? There may be unknown exploits, but ignore known Bad Things(TM) at your own risk.
Is a patch the best answer? Maybe you should just uninstall or disable the application or service. If you don't need it, kill it. Maybe there are other mitigations like network segmentation or other ACLs, configuration settings, permissions restrictions, or tools like Microsoft's EMET which can reduce or eliminate the exposure. This requires an understanding of the implications of each mitigation- sometimes it is easiest to "just patch", but patching is not without risks.
Can you recover quickly from whatever mitigation you deploy? Sometimes unwinding a bad patch is as simple as logging into your patch or systems management server and removing the patch. Sometimes it involves re-imaging thousands of systems. If faced with the latter, how would you handle it (besides updating your resume)?
I'm sure you can think of more, but this list should start or re-start a conversation I hope you've already had several times.
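One way to restart that conversation is to sketch the questions above as a crude triage function. This is purely illustrative- the thresholds and return values are made up, and real triage is far messier- but it shows the ordering: confirm the finding, check exposure, weigh cost against value, then let in-the-wild exploitation drive urgency:

```python
def triage(confirmed, exposed, asset_value, mitigation_cost, exploit_in_wild):
    """Illustrative triage of a single finding; all thresholds are invented."""
    if not confirmed:
        return "verify"          # don't act on bad info
    if not exposed:
        return "monitor"         # no attack path today; revisit later
    if mitigation_cost > asset_value:
        return "accept"          # spending a dollar to protect a dime
    if exploit_in_wild:
        return "mitigate now"    # patch, disable, segment- whatever is fastest
    return "schedule patch"

# A confirmed, exposed finding with a known exploit on a valuable asset:
print(triage(True, True, 1000, 50, True))  # -> mitigate now
```

Note that "mitigate now" deliberately does not say "patch now"- as discussed above, uninstalling, segmenting, or reconfiguring may beat a patch, depending on the risks each option carries.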
I can't write about patching without addressing a little problem I thought was pretty much behind us, at least for Microsoft: bad patches. For years I have advocated rapid patching of Microsoft systems since they have done an outstanding job of QA on their updates. Back in the days when I was an admin in the trenches I patched fast, with a 72-hour patch target for desktops and laptops, and a 10-day target for most servers. Obviously, some testing is needed, and a lot of testing is needed for critical systems- but you have to decide if the risk of deploying a patch outweighs the risk of not patching, and how other possible mitigations might change the risk. This has been made a little trickier by the past year's string of "less than perfect" patches coming from Redmond. I chatted about this topic with Pat Gray on a recent episode of his outstanding Risky Business podcast. Microsoft updates are the largest software distribution system in the world, and the quality of the patches is still generally very good. "Generally very good" might be good enough to push patches to a lot of systems in a rolling deployment after a short test cycle, but it is probably not good enough to skip thorough testing before deploying to critical systems.
In the immortal words of Spock: "Patch well and prosper".
Or something like that.
Published by HT Syndication with permission from The CTO Forum.
Copyright HT Media Ltd. Provided by Syndigate.info, an Albawaba.com company