The “Zero Day” Conundrum

By Dave Messett on May 19, 2020

In my last blog I talked about how we should define “zero day” and the many misuses which, in my view, muddy the waters, making it ever more difficult to address the actual problem. In case you missed that one, you can read it here, or you can simply accept the premise that zero-day threats are very rare.

Either way, I absolutely accept that we shouldn’t stop worrying about zero-day threats completely, but I do think we should put them into context and focus our resources accordingly. The VAST majority of malware we face is not zero day; it exploits vulnerabilities that are already known and for which patches are available to render them ineffective.

Focusing your resources accordingly will, of course, depend on how many resources you have and how skilled they are, so you should adapt the list below to fit your situation, but as a general rule here’s how I would prioritize things:

Application of new patches is a key part of your defense.

There are times (and places) where patching cannot be implemented immediately, but if you can patch you should do so – and do so as fast as possible.

Don’t overlook the importance of user education.

A significant proportion of malware is delivered via phishing attacks and better training would reduce the number of times these links are activated, resulting in fewer attacks that your other defenses need to identify and block (and of course fewer they can potentially miss).

Defense in depth is the only sensible approach.

As malware creators become more adept at tricking users and technology alike, putting all your eggs in one basket looks increasingly outdated. An endpoint security solution that leverages multiple methods of protection has a higher chance of being effective.

      • The fastest and least resource-intensive method of catching malware is via signatures. It may sound a bit ‘last century’, and it’s true that signatures can only catch malware that has already been seen elsewhere (you may even be thinking about that 725,000 number I mentioned in my previous blog), but keep in mind that even if this method won’t catch the very newest malware, it’s still the best way to identify the other 925 million malware files in the database. Filtering out the known bad files without overtaxing your PC leaves more CPU cycles for other tasks – namely, the machine learning methods in the next two points and, even more importantly, running the applications you actually turned the PC on for!
      • Once you have filtered out a large proportion of the known bad malware, you need to think about how to protect against the small percentage for which signatures do not exist – and that brings us back to the previously mentioned machine learning. To understand more about machine learning in general, try reading this blog written by one of my colleagues. For the purposes of this blog, and in the context of anti-malware, machine learning comes in two fundamental flavors: pre-execution machine learning and post-execution machine learning (or machine-learning-assisted behavioral analysis). The pre-execution flavor does exactly what it says on the tin: it examines a file before allowing it to execute and tests it against a malware model. If it deems the file statistically likely to be malware then it blocks it – critically, before it is allowed to execute and therefore before it can do any damage whatsoever. Post-execution machine learning requires the file to execute; instead of examining the static file for indications of maliciousness, it watches the actual behavior of the resulting process(es). Once this trace data is available it can be compared against a different type of model, and again an estimation can be made as to whether the file is likely to be malicious. Both variants have value and complement each other, so just as you should be looking at both signature and non-signature-based detection, you should be looking at both pre- and post-execution machine learning.
      • The scenario outlined above, where post-execution scanning is required (because the pre-execution scan has not identified something as malicious and allows it to execute), is a neat segue into the value of application containment. This concept, delivered by McAfee back in 2017, is designed to minimize the damage a malicious process can do even when the endpoint defenses have been deceived. This is of particular importance given the current prevalence of crypto-malware: even if behavioral analysis does identify malware running on your system, it may already have deleted restore points, encrypted data and overwritten source files. Dynamic Application Containment checks whether a file has a reputation; if it has none (it is not known to be either good or bad) and the pre-execution scan suggests it is safe, the file is allowed to execute but is contained at the same time. This means that should it subsequently turn out to be malicious, it will have been prevented from, for example, overwriting user data, modifying the registry, accessing network shares and a whole host of other things we wouldn’t want the bad guys to do.
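
To make the layering concrete, here is a minimal, purely illustrative sketch of how such a pipeline might order its decisions: signatures first (cheapest), then a pre-execution score, then contained execution for files with no reputation. The hash set, the scoring function and the 0.8 threshold are all invented for this sketch and are not McAfee’s actual implementation; a real static model would score features such as imports, entropy and strings.

```python
import hashlib

# Invented known-bad set (this happens to be the SHA-256 of an empty file).
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def signature_check(file_bytes: bytes) -> bool:
    """Fast path: block anything whose hash matches a known-bad signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

def pre_execution_score(file_bytes: bytes) -> float:
    """Stand-in for a static ML model; returns P(malicious) in [0, 1]."""
    return 0.1  # placeholder score for the sketch

def dispatch(file_bytes: bytes, reputation: str) -> str:
    if signature_check(file_bytes):
        return "block"            # known bad: cheapest check wins
    if pre_execution_score(file_bytes) > 0.8:
        return "block"            # statically predicted malicious
    if reputation == "unknown":
        return "run-contained"    # no reputation: contain while it runs
    return "run"                  # known good
```

The ordering is the point: each layer only has to handle what the cheaper layers in front of it let through, which is why signatures remain worth keeping even in an ML-heavy product.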

Application Control

  • In some environments you can also consider application control as a way of ensuring that malicious code cannot take advantage of any vulnerabilities that may exist. This is a whitelist-type approach that allows the IT department to predefine a set of applications and code that is allowed to execute, and simply blocks everything else. Since malicious code won’t be on the whitelist it can never run – so why isn’t this the default position of every enterprise worldwide? Answer: because it tends to result in the IT department being overrun with irate users trying to get the job done and discovering that only a limited set of applications will work. One person prefers a different web browser, but they’re blocked from using it. Another has a personal device and wants to load the relevant software to connect it, but they can’t. A third wants to use their great new presentation clicker, only to find it needs its own software – and that’s not allowed. The list goes on, and the users get more irate.
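
The default-deny logic itself is trivial, which is part of the appeal; here is a hypothetical sketch with an invented allow-list (a real deployment would key on signed file hashes or publisher certificates, not file names):

```python
# Invented allow-list for illustration only.
ALLOWED_APPS = {"winword.exe", "excel.exe", "chrome.exe"}

def may_execute(executable_name: str) -> bool:
    """Default-deny: only pre-approved applications are allowed to run."""
    return executable_name.lower() in ALLOWED_APPS
```

Everything not on the list is refused, which is exactly why the irate-user problem above is operational rather than technical.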

Endpoint Protection

  • Some endpoint security solutions are better than others, but it is worth keeping in mind that there has never been, nor will there ever be, one that is 100% perfect. No vendor can guarantee that malware or hackers will never find their way into your environment, and that is why there is a burgeoning market for Endpoint Detection and Response (EDR) solutions. These solutions are designed to continually analyze and interrogate your infrastructure to detect low-level malicious activity that goes unidentified by other defense systems, and then enable you to react to those threats, isolate infected systems, remediate the damage and restore normal service as quickly as possible.
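
A flavor of what “interrogating the infrastructure” means in practice: EDR tooling typically runs detection rules over process telemetry. The sketch below is a hypothetical, simplified rule (the event format and app lists are invented, not any vendor’s schema) that flags a classic phishing follow-on pattern, an Office application spawning a shell:

```python
# Invented names for illustration; real telemetry is far richer.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SUSPICIOUS_CHILDREN = {"powershell.exe", "cmd.exe", "wscript.exe"}

def find_suspicious(events: list[dict]) -> list[dict]:
    """Return process-creation events where an Office app spawned a shell."""
    return [
        e for e in events
        if e["parent"].lower() in OFFICE_PARENTS
        and e["child"].lower() in SUSPICIOUS_CHILDREN
    ]
```

Each hit is a starting point for the response side of EDR: investigate, isolate the host if needed, and remediate.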

In summary, zero-day threats are a pretty long way down the list of things I think companies should be focusing on. That’s not to say they should be ignored, but there are easier things to fix that are likely to have a greater positive impact. My advice would be to hold off until patching is fully under control, your users know how to act as your first line of defense and you have a good-quality anti-malware solution (leveraging both signatures and machine learning) installed. Only then should you be turning your attention to the dangers of the ‘zero day’!

About the Author

Dave Messett

Dave is Head of Product and Solutions Marketing, EMEA at McAfee. A seasoned IT professional with over 20 years of experience across technical and marketing positions, he is highly sought after for his insights on the world of cybersecurity and the measures companies can take to defeat the bad guys. He works with analysts, partners ...


Categories: Enterprise
