We'll assume that you aren't a tech expert, and we'll start with a quick intro to cyber-security.
Securing information starts with one basic paradigm: the CIA triad, which stands for the confidentiality, integrity and availability of information.
Confidentiality refers to protecting information from unauthorized access.
Integrity means data are trustworthy, complete, and have not been accidentally altered or deliberately modified by an unauthorized user.
Availability means data are accessible when you need them.
Cyber-attacks aim to compromise one or more elements of this triad and must be blocked as early in their lifecycle as possible.
Whilst security threats take many shapes and forms, a few are highly prominent:
Whilst targeted attacks (we have described them in greater detail here) focus on businesses, phishing and malware remain a major problem for everyone with internet access.
The field of malware detection and blocking is highly innovative. As attackers work round-the-clock to evade defences, security software vendors constantly invent new detection methods to tackle the challenge.
Many technologies are readily available. For example, if we open Symantec STAR (Security Technologies and Research), we can get a basic idea of several of them.
Signatures are code snippets, extracted from known malicious files, that either a malware analyst or an automated system has generated.
Generic signatures are code snippets that have been found across a few or many variants of the same malware family. An automated system or a malware analyst has determined that attackers are unlikely to change certain portions of the code, and those portions are used to identify the malware.
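To make the idea concrete, below is a minimal sketch of signature matching; the signature names and byte patterns are invented for illustration and do not come from any real database.

```python
# A minimal sketch of byte-signature matching; signatures are hypothetical.
SIGNATURES = {
    "Trojan.Example.A": bytes.fromhex("e80000000058"),      # invented byte pattern
    "Worm.Example.Gen": bytes.fromhex("6a40680030000068"),   # invented "generic" pattern
}

def scan_file(path: str) -> list[str]:
    """Return the names of any signatures found in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

print(scan_file("sample.exe"))  # hypothetical file name
```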
Now, static analysis is a complicated subject. An operating system is like a Lego set, containing various APIs, functions and methods that software developers can make use of instead of programming everything from scratch. For example, should a developer need to create a file, they would not need to instruct the hard disk to do so, as Windows already provides the CreateFileA function (fileapi.h). A developer simply has to import it, call it and provide the needed parameters (where the file will be written, etc.). Once software has been compiled (built), the resulting executable file contains certain sections, structures and characteristics.
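For illustration only, here is a rough sketch of calling CreateFileA from Python via ctypes on Windows; the file path is a placeholder and error handling is omitted.

```python
# Sketch: asking Windows (not the hard disk) to create a file via CreateFileA.
# Windows only; the path below is a hypothetical placeholder.
import ctypes

GENERIC_WRITE = 0x40000000
CREATE_ALWAYS = 2
FILE_ATTRIBUTE_NORMAL = 0x80
INVALID_HANDLE_VALUE = -1

kernel32 = ctypes.windll.kernel32
handle = kernel32.CreateFileA(
    b"C:\\temp\\example.txt",   # file name (placeholder)
    GENERIC_WRITE,              # desired access
    0,                          # share mode
    None,                       # security attributes
    CREATE_ALWAYS,              # creation disposition
    FILE_ATTRIBUTE_NORMAL,      # flags and attributes
    None,                       # template file
)
if handle != INVALID_HANDLE_VALUE:
    kernel32.CloseHandle(handle)
```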
Static analysis attempts to extract the imports, structural and other characteristics of a file. The extracted information is then fed to machine learning, including deep learning algorithms, often trained on millions or even billions of safe and malicious files. There are many resources on static analysis; we like this SentinelOne blog post.
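As a rough sketch of the idea (not any vendor's actual pipeline), the snippet below extracts a few features with the pefile library and feeds them to a scikit-learn classifier; the feature set, file names and training labels are purely illustrative.

```python
# A minimal sketch of static feature extraction feeding a classifier.
# Assumes the third-party libraries pefile and scikit-learn are installed;
# the features and training data are illustrative, not a real detection model.
import pefile
from sklearn.ensemble import RandomForestClassifier

SUSPICIOUS_IMPORTS = {b"VirtualAlloc", b"WriteProcessMemory", b"CreateRemoteThread"}

def extract_features(path: str) -> list[float]:
    pe = pefile.PE(path)
    imports = {
        imp.name
        for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", [])
        for imp in entry.imports
        if imp.name
    }
    return [
        len(pe.sections),                            # number of PE sections
        max(s.get_entropy() for s in pe.sections),   # highest section entropy (packing hint)
        len(imports),                                # total imported functions
        len(imports & SUSPICIOUS_IMPORTS),           # imports often abused by malware
    ]

# Hypothetical training set: feature vectors labelled 0 (clean) or 1 (malicious).
X = [extract_features(p) for p in ["clean1.exe", "clean2.exe", "mal1.exe", "mal2.exe"]]
y = [0, 0, 1, 1]
model = RandomForestClassifier().fit(X, y)
print(model.predict([extract_features("unknown.exe")]))
```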
Dynamic analysis refers to running a file (or portions of it) in a virtual, local environment, where the file reveals its true behaviour (almost). Various methods are then used to classify the actions happening inside the virtual machine.
Once a file is executed and a process is launched, post-execution protections outright block certain actions that could only be associated with malware. They also analyse all actions performed by the process (created/deleted/modified files, services and registry entries, as well as memory operations). Actions are tracked, correlated and classified using a set of rules, as well as machine learning.
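A heavily simplified sketch of rule-based behaviour correlation might look like the following; the event names, scores and threshold are invented for illustration, and real products combine far more signals.

```python
# A simplified sketch of rule-based behaviour correlation over a stream of
# already-collected process events; scores and event names are hypothetical.
from collections import defaultdict

RULES = {
    "registry_run_key_added": 40,    # persistence
    "browser_cookie_file_read": 30,  # credential/cookie theft
    "mass_file_encryption": 80,      # ransomware-like behaviour
    "outbound_connection": 10,
}
BLOCK_THRESHOLD = 70

def classify(events):
    """events: iterable of (pid, action) tuples observed after execution."""
    scores = defaultdict(int)
    for pid, action in events:
        scores[pid] += RULES.get(action, 0)
        if scores[pid] >= BLOCK_THRESHOLD:
            yield pid  # processes that should be blocked/terminated

observed = [(1234, "outbound_connection"), (1234, "browser_cookie_file_read"),
            (1234, "registry_run_key_added")]
print(list(classify(observed)))  # -> [1234]
```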
Post-execution protections are more difficult (though not impossible) to evade than pre-execution protections, and this is a major advantage.
However, by the time the system reacts, the actions have already happened and irreversible damage may have been done: passwords and cookies snatched from the browser may already have been sent to attackers. Some ransomware samples encrypt data exceptionally fast, and a security solution may not have managed to create a copy of everything, so not all encrypted files may be recoverable.
Another con is that behavioural analysis to a large extent relies on "hooks": modules attached to every process that allow its behaviour to be reported. Attackers can unhook their processes and thus evade behavioural monitoring.
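The toy snippet below only mimics the hook concept in Python, wrapping a function so every call is reported before the real work happens; real hooks patch native APIs inside each process.

```python
# Toy illustration of a "hook": report the action, then forward to the real call.
import functools, os

def hooked(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"[monitor] {func.__name__} called with {args}")  # report the action
        return func(*args, **kwargs)                            # forward to the real API
    return wrapper

os.remove = hooked(os.remove)  # from now on, file deletions are reported first
```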
Lastly, there is a performance challenge: in order for behavioural monitoring not to put excessive strain on the device, certain executables (for example, Microsoft-signed ones), actions and system calls may need to be whitelisted. Attackers with more impressive skill sets would discover this through trial and error and use it to their benefit.
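A tiny sketch of why such allowlisting creates a gap, assuming a hypothetical signer-based allowlist:

```python
# Hypothetical signer-based allowlist: events from trusted signers are skipped
# entirely, which is exactly the gap a skilled attacker would look for.
TRUSTED_SIGNERS = {"Microsoft Windows", "Microsoft Corporation"}

def should_monitor(process: dict) -> bool:
    # process: dict with a "signer" field taken from its code-signing certificate
    return process.get("signer") not in TRUSTED_SIGNERS

print(should_monitor({"name": "calc.exe", "signer": "Microsoft Windows"}))  # False: not monitored
```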
Memory analysis refers to checking the code in RAM (random access memory) for threats using various techniques. All evasion layers are peeled away, which leaves the analyser with a "naked" form of the file in question.
This method has a much higher success rate than the other technologies, but it is very expensive performance-wise and hence not widely deployed.
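For a flavour of the idea, here is a minimal Linux-only sketch that scans a process's readable memory regions for a byte pattern; the PID and pattern are illustrative, and real memory analysers are far more sophisticated.

```python
# Minimal sketch: scan another process's readable memory for a byte pattern.
# Linux only, requires appropriate privileges; pattern and PID are illustrative.
def scan_process_memory(pid: int, pattern: bytes) -> bool:
    with open(f"/proc/{pid}/maps") as maps, open(f"/proc/{pid}/mem", "rb") as mem:
        for line in maps:
            addr_range, perms = line.split()[:2]
            if "r" not in perms:
                continue                          # skip unreadable regions
            start, end = (int(x, 16) for x in addr_range.split("-"))
            try:
                mem.seek(start)
                if pattern in mem.read(end - start):
                    return True                   # pattern found in this region
            except OSError:
                continue                          # some regions cannot be read
    return False

print(scan_process_memory(1234, b"suspicious-marker"))
```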
With sandboxing, a file in question is uploaded to a server and run in a virtual container. A multitude of engines perform pre-execution, post-execution, memory and traffic analysis. These engines complete, in a matter of minutes, deep and resource-intensive analysis that could take days or even months locally. Additional care is taken so that attackers cannot "smell" the virtual environment. Sandboxes cover a broad range of file types, including maldocs and scripts.
Sandboxes can detect malware that is simply impossible to detect locally, as well as additional dangers, such as documents containing phishing-like text or dangerous links. Sandboxes are a true weapon against sophisticated 0-day attacks. Unfortunately, they are not offered by many vendors. Cons include waiting a few minutes for a file emulation to complete and not being able to handle password-protected archives or files above a certain size. Some of these cons can be mitigated; for example, we can block the download of files where emulation has failed due to a password or exceeded size limit.
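The client-side logic could look roughly like the sketch below; the sandbox endpoint, response fields and size limit are hypothetical and do not describe any specific vendor's API.

```python
# Sketch: send a file to a (hypothetical) sandbox service, wait for a verdict,
# and block the download when emulation fails or the verdict is not clean.
import os, time, requests

SANDBOX_URL = "https://sandbox.example.com/api/v1"   # hypothetical service
MAX_SIZE = 50 * 1024 * 1024                          # assumed 50 MB emulation limit

def sandbox_verdict(path: str) -> str:
    if os.path.getsize(path) > MAX_SIZE:
        return "block"                               # too large to emulate: block by policy
    with open(path, "rb") as f:
        task = requests.post(f"{SANDBOX_URL}/submit", files={"file": f}).json()
    while True:
        report = requests.get(f"{SANDBOX_URL}/report/{task['id']}").json()
        if report["status"] == "finished":
            # block both malicious verdicts and failed emulations (e.g. password-protected)
            return "allow" if report["verdict"] == "clean" else "block"
        time.sleep(10)                               # emulation usually takes a few minutes

print(sandbox_verdict("downloaded_file.docx"))       # hypothetical file name
```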