14/08/2018

DeepLocker: Interesting, but not yet a threat

At the Black Hat conference in Las Vegas, malware researchers from IBM presented DeepLocker, a piece of malware that opens up new possibilities for shielding malicious files against detection by using artificial intelligence. With most AI methods (e.g. Neural Networks, Support Vector Machines, Random Forests) it is difficult to understand how they reach their decisions and which actions those decisions trigger. This is a problem for AI researchers. It is also a problem for malware analysts, because the program logic can no longer be recovered by analysing the code. The talk shows that the arsenal of analysis techniques needs to be expanded. It also demonstrates that AI-based criteria such as face recognition or speaker verification are now available to attackers for identifying the right target system. This makes the contribution [PDF] of the IBM researchers very valuable.
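The core trick described in the IBM write-up is that the payload key is not stored anywhere in the file: it is derived from the target attributes that the AI model recognizes. The following Python sketch illustrates the principle only. The function names and the SHA-256/Fernet construction are our own illustrative choices, not IBM's implementation, and a real face embedding would have to be quantized into a stable byte string first.

    import base64
    import hashlib
    from typing import Optional

    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    def key_from_attributes(target_attributes: bytes) -> bytes:
        # Hypothetical helper: hash the (quantized) model output into a
        # 32-byte value and encode it as a Fernet key. The key itself never
        # appears in the binary; only the hash construction does.
        digest = hashlib.sha256(target_attributes).digest()
        return base64.urlsafe_b64encode(digest)

    def try_unlock(encrypted_payload: bytes, candidate: bytes) -> Optional[bytes]:
        # Decryption succeeds only when the model output produced on the
        # victim machine matches the attributes of the intended target.
        try:
            return Fernet(key_from_attributes(candidate)).decrypt(encrypted_payload)
        except Exception:
            return None  # wrong target: the payload stays opaque

An analyst who only has the file sees the model and the hash construction, but neither the trigger condition nor the key, which is what makes static analysis of such samples so unattractive.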

On the other hand, the impact of these innovations must also be put into perspective. Malware evading analysis is not a new phenomenon by any stretch: attackers have been doing it for more than 30 years, and the arsenal of obfuscation and self-protection techniques is extensive. The detection technologies of IT security solutions have improved over the years in response.

"The signatures of the virus scanner are usually based on the code of the malicious routines. The use of AI procedures may lead to problems. However, it is possible to recognize the AI-based procedures and to create signatures for them. Modern security solutions also increasingly rely on behavior-based detection methods that would easily recognize Deeplocker," says Ralf Benzmüller, Executive Speaker of G DATA-Security Labs. Therefore, security solutions such as G DATA Total Security can also protect against such novel threats.

WannaCry with face recognition

Specifically, IBM demonstrated a variant of the WannaCry ransomware, which paralyzed corporate networks worldwide last year. In this example the ransomware only activated if face recognition code embedded in the malware identified a specific person, for example the CEO of a company. "DeepLocker does not change the behaviour of the file on the system. Even if we cannot fully understand the decision processes inside the malware file as to whether or not a computer should be infected, our behavior-based detection would have detected and prevented the infection with WannaCry or any other ransomware," says Benzmüller.

The G DATA Behaviour Blocker checks whether certain suspicious actions occur on the computer. In the case of ransomware, for example, the software can detect when a process deletes a large number of shadow copies, which could otherwise be used to recover the affected data. At the latest when a process suddenly starts encrypting large amounts of data without prior user input, the software aborts the process or, in case of doubt, asks the user whether they actually want to encrypt data at that moment.
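To make the two heuristics concrete, here is a minimal Python sketch. It is not G DATA's implementation; the command patterns, the entropy threshold and the burst size are illustrative assumptions.

    import math
    from collections import Counter
    from typing import List

    # Command lines typically seen when ransomware wipes shadow copies.
    SUSPICIOUS_COMMANDS = (
        "vssadmin delete shadows",
        "wmic shadowcopy delete",
    )

    def is_shadow_copy_wipe(cmdline: str) -> bool:
        cmd = " ".join(cmdline.lower().split())
        return any(pattern in cmd for pattern in SUSPICIOUS_COMMANDS)

    def shannon_entropy(data: bytes) -> float:
        # Encrypted data approaches 8 bits per byte; documents score far lower.
        if not data:
            return 0.0
        counts = Counter(data)
        total = len(data)
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def looks_like_mass_encryption(rewritten: List[bytes],
                                   min_files: int = 20,
                                   threshold: float = 7.5) -> bool:
        # Flag a process that rewrote many files, all with near-random content.
        return (len(rewritten) >= min_files and
                all(shannon_entropy(buf) > threshold for buf in rewritten))

Because these checks look at what a process does rather than what its code looks like, they are unaffected by how cleverly the trigger logic is hidden.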

Obfuscation of malware is not fundamentally new

In the future, it will be necessary to monitor closely whether malware authors use artificial intelligence methods to conceal their malware's activity. Established techniques already achieve the same result, for example the many packers whose output virus protection programs cannot easily analyze. Another strain of obfuscation is custom scripting languages, as the recently discovered Dosfuscation sample shows. It remains to be seen whether malware authors will actually adopt the new procedures.
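Packed executables are a case in point: one long-established indicator is abnormally high entropy in an executable's sections, since compressed or encrypted code scores close to the theoretical maximum. A hedged sketch using the third-party pefile module; the 7.2 threshold is an illustrative value, not a vendor-grade rule.

    import pefile  # third-party: pip install pefile

    def probably_packed(path: str, threshold: float = 7.2) -> bool:
        # Packed or encrypted sections score close to the 8.0 maximum;
        # ordinary code and data sections usually stay well below it.
        pe = pefile.PE(path)
        return any(section.get_entropy() > threshold for section in pe.sections)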

It is laudable that IBM demonstrates such novel techniques at a security conference like Black Hat. However, the procedure shown does not pose a fundamental problem for the security industry.

"We are closely monitoring the situation and are well equipped," says Benzmüller. "DeepLocker and similar AI malware still uses files and libraries we can detect." Indicators for detection could be the calling of certain functions from machine learning libraries, or processes still to be defined with unique indicators that are started from a suspicious file. "Should such malware actually appear in the wild, we can also detect it with customized signatures or new behaviour-based rules. In the G DATA SecurityLabs we have been using Machine Learning and Artificial Intelligence for years to detect harmful files. But you don't need a new AI engine to fend off AI-based attacks."