Eight ways your patch management policy is broken

Mistakes get in the way of effective risk mitigation; here's how to fix them

3 October 2019

Failing to appropriately patch software and devices has been a top reason organisations are compromised for three decades. In some years, a single unpatched application, such as Sun Java, was responsible for 90% of all cybersecurity incidents. Unpatched software clearly needs to be mitigated effectively.

So, it is surprising to see that most organisations do not do patch management effectively, even though they think they do. Here are some of the common ways patch management policy is broken.

1. Not patching the right things

The number one patching problem is not patching the highest risk applications first. You will find hundreds to thousands of things that need patching in almost any environment, but a handful of software program types are by far attacked the most. Those need to be patched first, best and quickest.

On client workstations, the following four types of software are attacked the most:

  • Internet browser add-ins
  • Internet browsers
  • Operating systems
  • Productivity applications (e.g. Office applications)

On servers, the following types of software are attacked the most:

  • Web server software
  • Database server software
  • Operating system
  • Remote server management software

These classes of software make up less than 5% of all software vulnerabilities, yet they account for most successful attacks. More importantly, unless there is an active exploit in the wild, you do not have to worry much about a given vulnerability. Decades of data show that unless public exploit code exists "in the wild", it is unlikely the vulnerability will be exploited. Only about 2% of all publicly announced vulnerabilities end up being exploited in the wild.

Solution: Patch the software most likely to be exploited first, best and quickest.
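As a rough illustration of this prioritisation, here is a minimal Python sketch that ranks vulnerability findings so that known-exploited vulnerabilities in the most-attacked software classes come first. The field names (`software_class`, `exploit_in_wild`) and class labels are assumptions, not a real scanner's schema:

```python
# Hypothetical sketch: rank findings so known-exploited, high-risk
# software classes get patched first. Field names are assumptions.

# Software classes the article identifies as most attacked.
HIGH_RISK_CLASSES = {
    "browser", "browser_addin", "os", "productivity",
    "web_server", "database_server", "remote_mgmt",
}

def patch_priority(finding):
    """Lower number = patch sooner."""
    exploited = finding.get("exploit_in_wild", False)  # e.g. from a KEV-style feed
    high_risk = finding.get("software_class") in HIGH_RISK_CLASSES
    if exploited and high_risk:
        return 0
    if exploited:
        return 1
    if high_risk:
        return 2
    return 3

findings = [
    {"host": "ws-14", "software_class": "browser_addin", "exploit_in_wild": True},
    {"host": "db-02", "software_class": "database_server", "exploit_in_wild": False},
    {"host": "hr-01", "software_class": "misc", "exploit_in_wild": False},
]
for f in sorted(findings, key=patch_priority):
    print(f["host"], f["software_class"])  # ws-14 first: exploited and high-risk
```

The point of the sketch is the ordering, not the schema: exploitation-in-the-wild trumps everything else, then the most-attacked software classes.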

2. Too focused on patch rate

I have rarely visited a customer site (and I have visited hundreds) that did not tell me that they have some incredible patching rate, like 99%. I have never visited a customer site that had a single device fully patched and I have never scanned a device that did not contain a critical vulnerability. Why the big disconnect?

What a “99% patch rate” usually means is that they are patching 99% of Microsoft applications on most of their devices, and even that is rarely true. If I check to see if they have any vulnerable remote management software or vulnerable versions of internet browser add-in programs, the answer is usually yes. Sometimes I will find five different versions of the same program and none of them are correctly patched.

More importantly, the 1% that is not patched represents the highest risk vulnerabilities. Does saying you have a 99% patch rate mean anything for overall security risk if you have nearly a 0% patch rate on the stuff that is most likely to be exploited? No, and yet that scenario accurately describes what I see in most environments.

Solution: Do not worry about reporting overall patching success rates. Tell me how well you patch the vulnerabilities most likely to be exploited.
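One way to make that reporting concrete is to compute the patch rate only over findings with a public in-the-wild exploit. This is an illustrative sketch with assumed field names, not a real reporting tool:

```python
# Illustrative sketch (field names are assumptions): report the patch
# rate only for vulnerabilities most likely to be exploited.

def exploited_patch_rate(findings):
    """Patch rate restricted to findings with a public in-the-wild exploit."""
    critical = [f for f in findings if f.get("exploit_in_wild")]
    if not critical:
        return 1.0
    patched = sum(1 for f in critical if f.get("patched"))
    return patched / len(critical)

inventory = [
    {"patched": True,  "exploit_in_wild": False},  # low-risk, patched
    {"patched": True,  "exploit_in_wild": False},
    {"patched": False, "exploit_in_wild": True},   # the unpatched item that matters
    {"patched": True,  "exploit_in_wild": True},
]
print(f"{exploited_patch_rate(inventory):.0%}")  # 50%, despite a 75% overall rate
```

A 75% overall rate hides the fact that half of the actually dangerous vulnerabilities are unpatched, which is exactly the disconnect described above.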

3. Not patching fast enough

All compliance guides say to patch critical vulnerabilities in a timely manner, whatever that means. What it should mean is that you patch them within a couple of days and at most a week. I understand the need for many people to wait a day or three to see if a just-released patch has a serious bug in it, but I run across organisations with written policies to patch within a month. That is crazy.

In a day when the latest patches are used to create wormable exploits within minutes of the patch's release, you cannot wait a month to patch a critical component, especially one of the most attacked components. If you use "inline patches" to stop threats trying to exploit those unpatched vulnerabilities, the signatures should be deployed immediately.

Solution: Patch the components most likely to be attacked within a week.
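A policy like that is easy to enforce mechanically. This sketch flags overdue patches under an assumed two-tier SLA (a week for the most-attacked components, a month for everything else); the timelines and field names are illustrative, not prescriptive:

```python
# Sketch of an SLA check, assuming a 7-day window for the most-attacked
# components and 30 days otherwise (tiers and fields are illustrative).

from datetime import date, timedelta

SLA = {"most_attacked": timedelta(days=7), "other": timedelta(days=30)}

def overdue(finding, today=None):
    """True if the patch has been available longer than its SLA window."""
    today = today or date.today()
    tier = "most_attacked" if finding["high_risk"] else "other"
    return today - finding["patch_released"] > SLA[tier]

f = {"high_risk": True, "patch_released": date(2019, 9, 20)}
print(overdue(f, today=date(2019, 10, 3)))  # True: 13 days > 7-day SLA
```

Running a check like this daily and escalating every `True` is one way to turn "timely manner" into something measurable.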

4. Not clear who’s responsible for patching

It is the rare organisation where one person or team is responsible for all patching. Usually, one person or team is responsible for patching a large part of the environment, but someone else is responsible for patching devices, another for application servers, another for database servers, and so on.

I rarely find an organisation that is not missing lots of patches across lots of their computers. When I ask why this is happening, they start pointing fingers. "I'm in charge of user workstations, but not the servers," or "I'm not allowed to touch such-and-such servers," or the "DNS administrators have decided not to patch that right now because it breaks yada-yada." The excuses fly as fast as the finger pointing. The only problem is that you have got lots of unpatched things with no one taking responsibility for patching them.

Solution: Make one person/department solely responsible for all patching.

5. Patches not tested before deploying

Yes, patching will break some things. That is no reason not to patch quickly, but anyone who has rolled out a patch only to have it crash the device it was deployed on is forever burned. No one gets a raise for crashing a server, even if it was due to installing a security patch. So, test. And by "test" I mean do something…anything.

The conventional wisdom is that all patches should be tested across broad swathes of different types of devices and configurations before deployment. Only after thorough and complete testing are patches allowed to be deployed. That is great, if you actually do it.

Most companies deploy patches without a single bit of testing. That is just setting yourself up for a critical failure on the day you need it least. Instead of making patch testing a binary thing (i.e. either do or don't do it), do at least some testing before a wide-scale roll-out.

Define ahead of time which of your non-critical servers, user workstations and devices will be your full-time guinea pigs and then use them when it comes time to roll out the patches. Roll out the patches to your production test servers and users the day or two after they come out. Wait a day or two to see if they cause any problems, and if not, then deploy more widely but not everything else at once. Do production deployments in multi-day waves, but quick enough that you get everything deployed in a week. Start small and then spread out.

Again, do not make testing a binary choice. If you cannot do it complete and right, at least do some testing. Have a good plan to back out of the patches in case one of them causes big problems.

Solution: Test patches before doing wide-scale production deployments and have a back-out plan in case a patch causes problems.
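The staged roll-out described above can be sketched in a few lines. This is an illustrative wave planner, not a real deployment tool; the host names, wave size, and one-wave-per-day schedule are all assumptions:

```python
# Illustrative staged roll-out: designated guinea-pig machines first,
# then the rest in widening daily waves, everything done inside a week.

def rollout_waves(hosts, guinea_pigs, wave_size):
    """Return deployment waves: the test group first, then the rest in chunks."""
    waves = [[h for h in hosts if h in guinea_pigs]]
    rest = [h for h in hosts if h not in guinea_pigs]
    for i in range(0, len(rest), wave_size):
        waves.append(rest[i:i + wave_size])
    return waves

hosts = [f"srv-{n:02d}" for n in range(1, 9)]
for day, wave in enumerate(rollout_waves(hosts, {"srv-01", "srv-02"}, wave_size=3)):
    print(f"day {day}: {wave}")  # wave 0 is the guinea-pig group
```

In practice you would pause between waves to watch for breakage and keep your back-out plan ready, but the shape is the same: start small, then spread out.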

6. Patch management team has no authority

Every good patch management leader I talk to complains about having all the responsibility (if something successfully attacks something they have not patched yet) and none of the authority to force stakeholders of devices to do patching properly. For instance, when unpatched Sun/Oracle Java was responsible for 90% of all successful web exploits, most patch managers told me they could not patch it because doing so broke too many legitimate programs. That, paired with the fact that Java was also the most popularly installed program after the operating system, led hackers to target it the most.

It is not acceptable to do nothing when you find an unpatched critical software program with public exploit code in the wild. You can do a lot of things, but doing nothing is not one of them. Anytime I have heard of a program breaking because of a new patch, it has almost always been because the programmer did something they were not supposed to. Make sure your developers (or your vendor's developers) are not being lazy and causing you patch management issues.

If you cannot patch a program, consider doing the following:

  • Removing it if not needed
  • Removing the unpatched device from the network, or strongly isolating it, if possible
  • Using software to block any threats that could exploit the unpatched vulnerability

Solution: Take alternative actions to mitigate risk when you are prevented from deploying a patch.
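The fallback choice among those mitigations can be written down as a simple decision rule. This is a hedged sketch mirroring the bullets above; the field names and the preference order are illustrative:

```python
# Hedged sketch of the fallback decision when a patch cannot be deployed.
# Mitigation names mirror the bullets above; the logic is illustrative.

def mitigation(program):
    """Pick a compensating control for an unpatchable program."""
    if not program["needed"]:
        return "remove"                    # unneeded software is pure risk
    if program["can_isolate"]:
        return "isolate from network"
    return "deploy blocking signature"     # e.g. an IPS "inline patch"

print(mitigation({"needed": False, "can_isolate": False}))  # remove
```

The important property is that every branch ends in an action; "do nothing" is never a reachable outcome.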

7. Vulnerabilities patched once and forgotten

Patching is not an install-and-forget-it problem. Patch management is not about buying a product that claims to patch everything every time perfectly. That patch management product does not exist. Patch management is about effective risk management and keeping a pulse on what is and is not being exploited in the wild.

Solution: Put a sophisticated risk manager in charge of your patch management program.

8. Patch managers’ incentives misaligned

Lastly, most patch management leaders, if they are specifically incentivised on how well they do patch management at all, are ranked on what percentage of all software programs they patch in a timely manner. I can tell you the answer. It is 99%. It is always 99%, and that 99% says nothing about your true risk management profile.

Instead, incentivise patch managers by how well and quickly they patch the most attacked programs. If the number of unpatched software programs exploited in your environment goes down versus a previous time period, and no critical attacks have taken place because of unpatched software, that should be considered success. I want to salute that patch management leader, because everything else is just lying with statistics.

Solution: Make sure a patch manager’s incentives are aligned with true risk reduction and not an arbitrary overall patching percentage.

Patch management is all about risk management. By following these recommendations, you can decrease cybersecurity risk by patching the right stuff better and faster.

IDG News Service
