
EDR Experts Insiders Edition III: How we Choose our Partners

Boris Vasilev • 14 June 2024

In this blog post we'll explore how we choose our partners.

In our first two posts we explored why we exist and what problems we aim to solve, and we spoke about our branding. That was a cool way to say "Hi, this is us, this is what we do". This post, however, will get a bit more technical.

Introduction

We'll assume that you aren't a tech expert, and we'll start with a quick intro to cyber-security.

Securing information starts with one basic paradigm - the CIA triad. These are the confidentiality, integrity and availability of information.

Confidentiality refers to protecting information from unauthorized access.

Integrity means data are trustworthy, complete, and have not been accidentally altered or modified by an unauthorized user.

Availability means data are accessible when you need them.

Cyber-attacks aim to compromise one or more key elements of this triad and must be blocked -- the earlier in their lifecycle, the better.


What are the most common types of threats?

Whilst security threats take many shapes and forms, a few are highly prominent:

  1. Phishing
  2. Malware
  3. Targeted attacks that essentially deliver one of the above


Who is affected by these threats?

Whilst targeted attacks (we have described them in greater detail here) target businesses, phishing and malware remain a large problem for everyone with internet access.


So how are these dealt with?

Malware detection, prevention and blocking

The field of malware detection and blocking is highly innovative. As attackers work round-the-clock to evade defences, security software vendors constantly invent new detection methods to tackle the challenge.

There are many technologies readily available. For example, if we open Symantec STAR (Security Technologies and Research), we can get a basic idea of many of them.

No matter the vendor, malware-blocking technologies can be divided into two major groups:

  • Pre-execution -- this includes signatures, generic signatures, heuristics, and potentially YARA rules, static analysis and reputation-based (hash-based) detections. Pre-execution analysis also features dynamic emulators that create a "computer inside the computer", where instructions of interest are executed and their safety is determined.
  • Post-execution -- this entails behavioural blocking (for example, Adobe Reader is not allowed to create executables in C:\Windows\System32, as it doesn't need to) and behavioural analysis (recording, classifying and, if necessary, undoing actions). Less frequently nowadays, it may include Host Intrusion Prevention Systems (a set of rules that may trigger a warning or a block).

Now for the less prominent methods:

  • Traffic blocking/control methods that aim to stop dodgy websites from loading. This can prevent malware from ever being downloaded or, if it has already been executed, block information exfiltration and secondary payloads (additional malware being deployed).
  • Memory analysis methods -- these look at the memory, where malware will have revealed its true intentions.
  • Cloud sandboxing/detonation -- this method aims to trigger the true behaviour of an object in question far away, on a virtual system, before a real system can be harmed. This is one of the most expensive approaches and we'll outline it in great detail further on in this post.

Pros and challenges of each

Pre-execution:

Pros of signatures:

Signatures are code snippets that either a malware analyst or an automated system has generated.

  • Provide quick detection with the least performance impact
  • Accurate, with few false positives
  • May provide a threat name, which can help system admins determine the right course of action
  • Work offline
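To make the idea concrete, here is a minimal sketch of a byte-signature scan. The signature database below is invented for illustration -- these are not real malware patterns:

```python
# Naive byte-signature scan: report which known signatures occur in a
# file's bytes. The signature database is invented for illustration.
SIGNATURE_DB = {
    b"\xde\xad\xbe\xef\x90\x90": "Demo.Trojan.A",  # hypothetical pattern
    b"EVIL_MARKER_1337": "Demo.Worm.B",            # hypothetical pattern
}

def scan(data: bytes) -> list[str]:
    """Return the threat names whose signature bytes appear in `data`."""
    return [name for sig, name in SIGNATURE_DB.items() if sig in data]

sample = b"MZ\x90\x00...EVIL_MARKER_1337...rest of file"
print(scan(sample))  # a match also yields the threat name, useful to admins
```

Note how trivially this is evaded: changing a single byte inside the pattern defeats the exact-match lookup, which is exactly the "simple code refactoring" weakness listed below.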


Challenges of signatures:

  • A rather reactive approach that will not help in the event of 0-days (new malware)
  • The database keeps growing; upon updating, there is high disk activity, as the whole database is copied (should the vendor need to revert in case of an issue)
  • Attackers can easily evade signatures by performing very simple code refactoring
  • Useless on obfuscated malware, such as scripts


Pros of generic signatures

Generic signatures are code snippets that have been found across several or many variants of the same malware. An automated system or a malware analyst has determined that attackers are unlikely to change a certain portion of the code, and that portion is used to identify the malware.

  • Higher success rate than standard signatures
  • Smaller footprint, so the database grows less


Cons of generic signatures:

  • Higher performance impact compared to standard signatures
  • More false positives compared to standard signatures
  • An even more reactive approach than standard signatures, as more than one variant of a particular threat must be observed first
  • Useless on obfuscated malware, such as scripts
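To illustrate the trade-off (the byte pattern is invented), a generic signature can be written as fixed bytes plus wildcards, so one entry matches many variants of the same family -- at the cost of a slower sliding-window scan:

```python
# Generic signature: fixed bytes plus wildcards (None) for the parts
# attackers tend to change between variants. Pattern is invented.
GENERIC_SIG = [0x6A, None, 0x68, None, None, 0xFF, 0x15]  # None = any byte

def matches_at(data: bytes, pos: int, sig: list) -> bool:
    if pos + len(sig) > len(data):
        return False
    return all(s is None or data[pos + i] == s for i, s in enumerate(sig))

def generic_scan(data: bytes, sig=GENERIC_SIG) -> bool:
    # Sliding-window scan: costlier than an exact substring search,
    # which is the extra performance impact noted above.
    return any(matches_at(data, i, sig) for i in range(len(data)))

variant_a = bytes([0x6A, 0x01, 0x68, 0xAA, 0xBB, 0xFF, 0x15])
variant_b = bytes([0x6A, 0x40, 0x68, 0x01, 0x02, 0xFF, 0x15])
print(generic_scan(variant_a), generic_scan(variant_b))  # both variants match
```

The looser the pattern, the more variants it catches -- and the greater the risk of matching innocent code, hence the higher false-positive rate.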



Pros of static analysis:

Now, static analysis is a complicated subject. An operating system is like a Lego set, containing various APIs, functions and methods that software developers can use instead of programming everything from scratch. For example, should a developer need to create a file, they would not need to instruct the hard disk to do so, as Windows already provides the CreateFileA function (fileapi.h). The developer simply has to import it, call it and provide the needed parameters (where the file will be written, etc.). Once software has been compiled (created), the executable file contains certain sections, structures and characteristics.


Static analysis attempts to extract the imports, structural and other characteristics of a file. The extracted information is then fed to machine learning, including deep learning algorithms, often trained on millions or even billions of safe and malicious files. There are many resources on static analysis; we like this SentinelOne blog post.


  • Static analysis is a much more proactive approach than signatures; it can identify even threats that haven't yet been created
  • The performance impact on the device is smaller compared to signatures and heuristics
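A tiny sketch of the feature-extraction step. The features and "suspicious imports" list are simplified for illustration (a real engine parses the PE header itself and extracts hundreds of features for a trained model; here the import list is passed in to keep the sketch self-contained):

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, 0.0-8.0 bits per byte.
    High entropy often indicates packed or encrypted content."""
    if not data:
        return 0.0
    counts, n = Counter(data), len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def extract_features(data: bytes, imports: list[str]) -> dict:
    # Illustrative API names often associated with process injection.
    suspicious = {"VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"}
    return {
        "entropy": byte_entropy(data),
        "size": len(data),
        "suspicious_imports": len(suspicious & set(imports)),
    }

feats = extract_features(b"\x00" * 64, ["CreateFileA", "WriteProcessMemory"])
print(feats)  # this vector would then be fed to a trained classifier
```

The classifier, not shown here, is where the millions of training samples come in; the extraction step merely turns a file into numbers the model can score.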


Cons of static analysis:

  • Inefficient when dealing with packers, as the extractor engine will most likely be unable to read the imports
  • Oftentimes, only executables and documents are covered (sometimes not even documents), which means other methods must be used to block malicious documents and scripts
  • Most of the time, it is cloud-assisted, so an internet connection is needed for static analysis to operate


Pros of Dynamic analysis and Heuristics (partial and local emulation):

Dynamic analysis refers to running portions of a file in a virtual, local environment, where the file (almost) reveals its true shape. Various methods are then used to classify the actions happening inside the virtual machine.

  • Very efficient in dealing with packers, as the final payload may be unpacked and scanned
  • Very efficient in dealing with all forms of obfuscated malware, including scripts
  • Very efficient in detecting many new threats, especially new mutations of what's already known


Cons of Dynamic analysis and Heuristics (partial and local emulation):

  • The emulator doesn't have all day -- it only has milliseconds and very limited resources to take a decision. This opens the door to evasion by attackers with a relatively unimpressive skill set.
  • Additional false-positive mitigation, such as cloud verification, is recommended.


Post-execution protections

Once a file is executed and a process has been launched, post-execution protections outright block certain actions which could only be associated with malware. They also analyse all actions performed by the process (created/deleted/modified files, services and registry entries, as well as memory operations). Actions are tracked, correlated and classified using a set of rules, as well as machine learning.
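The "outright block" part can be sketched as a tiny deny-rule table, in the spirit of the Adobe Reader example earlier. The process names, actions and paths below are illustrative, not any vendor's rule set:

```python
# Minimal behavioural-blocking sketch: deny rules keyed on
# (process name, action, target path prefix). Rules are illustrative.
DENY_RULES = [
    ("AcroRd32.exe", "create_executable", "C:\\Windows\\System32"),
    ("winword.exe",  "spawn_process",     "C:\\Windows\\System32\\wscript.exe"),
]

def decide(process: str, action: str, target: str) -> str:
    for proc, act, prefix in DENY_RULES:
        if process.lower() == proc.lower() and action == act \
                and target.startswith(prefix):
            return "block"   # a document reader has no business doing this
    return "allow"           # everything else goes on to behavioural analysis

print(decide("AcroRd32.exe", "create_executable",
             "C:\\Windows\\System32\\evil.exe"))
```

Real products layer the rule engine with machine-learning classification and action rollback; the hard part is choosing rules tight enough to stop malware yet loose enough not to break legitimate software.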


Post-execution protections are more difficult to evade (not impossible) than all pre-execution protections and this is a major pro.

However, actions are already happening. By the time the system reacts, irreversible damage may already have occurred -- passwords and cookies snatched from the browser and sent to attackers, for example. Some ransomware samples encrypt data exceptionally fast, and a security solution may not manage to create a copy of everything; not all encrypted files may be reversible.

Another con is that behavioural analysis relies, to a large extent, on "hooks" (modules attached to every process that allow behaviour to be reported). Attackers can unhook their processes and thus evade behavioural monitoring.

Lastly, there is a performance challenge: in order for behavioural monitoring not to put excessive strain on the device, certain executables (for example, Microsoft-signed ones), actions and system calls may need to be whitelisted. Attackers with more impressive skill sets would, through trial and error, discover that and use it to their benefit.

Memory analysis

Memory analysis refers to checking the code in RAM (random access memory) for threats using various techniques. All evasion layers will have been peeled away, which leaves the analyser with a "naked" form of the file in question.
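A toy example of why memory scanning sees through obfuscation (the marker string and XOR key are invented): on disk the payload carries no recognisable pattern, but once it decodes itself in memory the marker is exposed:

```python
MARKER = b"DEMO_PAYLOAD"   # invented detection string
KEY = 0x5A                 # invented single-byte XOR key

def xor(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

on_disk = xor(MARKER, KEY)      # obfuscated form as stored on disk
assert MARKER not in on_disk    # a plain byte signature misses it

in_memory = xor(on_disk, KEY)   # the malware decodes itself before running
print(MARKER in in_memory)      # a memory scan sees the "naked" payload
```

Real in-memory threats use far stronger packing than a one-byte XOR, but the principle is the same: whatever the disguise, the code must be plain enough to execute, and that is the form memory analysis inspects.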

This method has a much higher success rate than all other technologies but is very performance-expensive and, hence, not widely deployed.

Cloud sandboxing (detonation)

A file in question is uploaded to a server and run in a virtual container. A multitude of engines perform pre-execution, post-execution, memory and traffic analysis. These engines complete deep and resource-intensive analysis that could take days, even months, locally -- in a matter of a few minutes. Additional care is taken so that attackers cannot "smell" the virtual environment. Sandboxes cover a broad range of files, including maldocs and scripts.


Sandboxes can detect malware that is simply impossible to detect locally, as well as additional dangers, such as documents containing phishing-like text or dangerous links. Sandboxes are a true weapon against sophisticated 0-day attacks. Unfortunately, they are not offered by many vendors. Cons include waiting a few minutes for a file's emulation to complete and not being able to handle password-protected archives or files above a certain size. Some of these cons can be mitigated. For example, we can block the download of files where emulation has failed due to a password or exceeded size.
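That mitigation amounts to a fail-closed download policy, which can be sketched as follows. The verdict strings and size limit are illustrative, not any vendor's API:

```python
# Fail-closed download policy around a sandbox verdict. Verdict strings
# and the size limit are illustrative, not any vendor's API.
MAX_SANDBOX_SIZE = 100 * 1024 * 1024  # assumed 100 MB sandbox limit

def download_decision(verdict, size: int, password_protected: bool) -> str:
    if password_protected or size > MAX_SANDBOX_SIZE:
        return "block"   # emulation impossible: fail closed rather than open
    if verdict is None:
        return "hold"    # still detonating: make the download wait
    return "block" if verdict == "malicious" else "allow"

print(download_decision("clean", 1024, False))  # small, analysed, clean file
print(download_decision(None, 1024, False))     # verdict not yet available
print(download_decision("clean", 1024, True))   # password-protected: blocked
```

The key design choice is failing closed: a file the sandbox could not examine is treated as dangerous, trading a little user friction for not letting unanalysed content through.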

So, what do we look for in our partners?

As we saw above, there is no single perfect method. Our partners must utilize all available methods, including the most expensive and sophisticated approach -- cloud sandboxing. This way, our customers can be confident that the solutions offered on our website are fully equipped to deal with sophisticated threats of today and tomorrow.
