Big Code: Learning from open source
In our Machine Learning (ML) for secure code series the mantra has always been the same: to figure out how to leverage the power of ML to detect security vulnerabilities in source code, regardless of the technique, be it deep learning, graph mining, natural language processing, or anomaly detection.
In this article we present a new player in the field: DeepCode, a system that has exactly this purpose, combining ML with data flow analysis, namely in the form of taint analysis.
Taint analysis comes in dynamic and static flavors, and can be performed at the source or binary level, but either way the goal is the same. Start by looking at where input enters the program under the user's control, e.g. a search field in a web app. These are named sources in this context. Then, follow the thread to where that data gets used by the system in a security-critical fashion, e.g. using it to query a database, to continue with the previous example. These points are called sinks.
In a secure application, the data should be sanitized or validated somewhere along the way; these checks are called sanitizers in the taint analysis context. However, it is frequent to see that no such check happens, and thus vulnerabilities arise.
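The source/sink/sanitizer idea can be sketched in a few lines of Python (a toy example; the table and function names are illustrative):

```python
import sqlite3

def make_db():
    """Build a tiny in-memory database to query against."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (name TEXT)")
    conn.execute("INSERT INTO products VALUES ('widget')")
    return conn

def search_unsafe(conn, user_query):
    # user_query is the *source* (user-controlled); conn.execute is the *sink*.
    # The tainted value flows into the SQL string with no sanitizer in between,
    # so an input like "' OR '1'='1" dumps the whole table: SQL injection.
    sql = "SELECT name FROM products WHERE name = '" + user_query + "'"
    return conn.execute(sql).fetchall()

def search_safe(conn, user_query):
    # A parameterized query acts as the *sanitizer*: the tainted value is
    # bound as data and never reaches the SQL parser as code.
    return conn.execute(
        "SELECT name FROM products WHERE name = ?", (user_query,)
    ).fetchall()
```

A taint analyzer flags exactly the first path: a source reaching a sink with no sanitizer on the way.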
DeepCode aims to iron out the wrinkles in these taint analysis tools by learning from the vast amounts of freely available, high-quality code that lives in open repositories such as GitHub, a situation dubbed "Big Code". The tool is easy to use and free for open-source projects, and it also learns from the user's code, from the suggestions it makes, and from the user's feedback (whether suggestions are accepted, how issues get fixed, etc.).
Another problem with taint analysis is that sources, sinks and sanitizers need to be specified by hand, which is obviously impractical for large scale projects. This is another area where ML helps DeepCode, but, of course, the secret sauce is not available for further peeking.
DeepCode has been called a "Grammarly for code". It claims to achieve 90% precision, to understand the intent behind the code, and to find twice as many issues as other tools, critical issues at that, such as XSS, SQL injection, and path traversal, which typical static analysis tools miss. It also claims to be easy to use, requiring no configuration at all.
The tool is certainly friendly. You need only point it to your repository and give the appropriate permissions, and then it will show you a dashboard with all the issues found. Here is one for Eclipse Che Cloud IDE:
Here we can see three instances of a possible path traversal vulnerability. In the full dashboard, we can also see reports of an insecure HTTPS channel, a Server Side Request Forgery (SSRF), a Cross Site Scripting (XSS) vulnerability, and a header that leaks technical information (X-Powered-By). And those are only the issues tagged as "security". There are also API misuse issues, e.g. calling Thread.run() instead of Thread.start(), general bugs or defects, and now they even include results from lint tools, which deal with formatting and presentation issues. Oh, yes, and every issue comes with a possible fix you might implement right away.
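The Thread.run() misuse reported there is a Java idiom, but Python's threading module mirrors it exactly; a small illustration:

```python
import threading

results = []

def worker():
    # Record which thread actually executed this function.
    results.append(threading.current_thread().name)

# Misuse: calling run() directly executes worker() synchronously in the
# *calling* thread; no new thread is ever spawned.
t1 = threading.Thread(target=worker, name="t1")
t1.run()

# Correct: start() spawns a new OS thread, which then invokes run().
t2 = threading.Thread(target=worker, name="t2")
t2.start()
t2.join()
```

After running this, the first entry in `results` is the main thread's name, not "t1", which is exactly the bug such a checker catches.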
Quite nice, from the point of view of contributing a new vulnerability report to a project, with no false positives. However, when the aim is to find all vulnerabilities, one cannot help but ask: is that all? Are these all the security vulnerabilities in a project with more than 300,000 lines of code?
I forked two deliberately vulnerable applications, DSVW and DVNA, on GitHub, signed up for a DeepCode account, and let it run. For DSVW, a single Python file under 100 lines of code, but still riddled with vulnerabilities, DeepCode reports zero issues. Perhaps it does not work as well on such tiny projects.
This is, to say the least, disappointing, since DSVW has no fewer than 26 different kinds of vulnerabilities, as per its README. In Writeups, three of those have been manually explored and exploited.
Maybe it’s a problem with having so few lines of code, maybe it’s a Python thing, so let’s try the other one: DVNA, built with NodeJS with the specific purpose of demonstrating the OWASP Top 10 vulnerabilities.
This time around, DeepCode found 9 issues. Three of those come from ESLint, so let us consider the other 6. Two are API misuses, which basically amount to "use arrow functions instead of regular functions". Four are security vulnerabilities, and pretty serious ones at that:
Code injection via the eval function in the calculator module. Not the same one as in the authors' security guide, and not yet reported in Writeups; this should be researched further.
Technical information leakage via the X-Powered-By header, as in Che.
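DVNA's calculator is NodeJS, but the eval flaw it exemplifies translates directly to Python; a hedged sketch (function names are illustrative, not DVNA's):

```python
import ast
import operator

def calculate_unsafe(expression: str):
    # eval() executes arbitrary code, not just arithmetic: an input like
    # "__import__('os').system('...')" would run as-is. This is the sink.
    return eval(expression)

# Safer alternative: parse the expression and allow only arithmetic nodes.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate_safe(expression: str):
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("only arithmetic is allowed")
    return _eval(ast.parse(expression, mode="eval"))
```

Any node that is not a number or a basic binary operation is rejected, so attempts to call functions or import modules fail before anything executes.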
So, altogether, 3 noteworthy security vulnerabilities in a NodeJS application with more than 7,500 lines of code. In Writeups, at least 29 different vulnerabilities have been reported in DVNA. You can see a report on manual testing vs the LGTM code-as-data tool there, too, where it is quite clear that that tool misses most of the vulnerabilities as well.
OK, now for a more realistic test, let's run DeepCode on some of our own repos: Integrates, our platform for vulnerability centralization and management, and Asserts, our vulnerability automation framework. Both are open-source, written in Python, and actively developed. As before, the vast majority of issues found by DeepCode are of the lint and API usage kind.
In Integrates we see a possible command injection in the spreadsheet report generation function. However, this input is not controllable by the user, so this does not pose a real threat at the moment:
However, the suggestion to sanitize the input passed to subprocess.call() is not a bad one. Who knows whether Integrates will later have user-configurable passwords for reports? Or what if a different vulnerability enables an attacker to change this parameter?
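We do not reproduce the actual Integrates code here; the following is an illustrative Python sketch (function names and the echo command are stand-ins) of why that suggestion matters on POSIX systems:

```python
import subprocess

def run_unsafe(user_arg: str) -> str:
    # shell=True hands the whole string to /bin/sh, so metacharacters in
    # user_arg (";", "|", "$(...)") are interpreted as shell syntax:
    # command injection if user_arg is attacker-controlled.
    return subprocess.run("echo " + user_arg, shell=True,
                          capture_output=True, text=True).stdout

def run_safe(user_arg: str) -> str:
    # An argument list bypasses the shell entirely: user_arg reaches the
    # program as one literal argument, metacharacters and all.
    return subprocess.run(["echo", user_arg],
                          capture_output=True, text=True).stdout
```

With the input `"x; echo INJECTED"`, the first version executes the injected command while the second merely echoes the literal string.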
The other security issue is in the PDF report generation, this time identified as Path traversal. Again, probably difficult to exploit, but should be sanitized anyway.
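A typical path traversal sanitizer normalizes the path and then verifies it still lives under the intended base directory; an illustrative sketch (the directory and names are hypothetical, not Integrates code):

```python
import os

REPORTS_DIR = "/var/app/reports"  # hypothetical base directory

def resolve_report(name: str) -> str:
    # Join and normalize first, so sequences like "../" are collapsed,
    # then check the result is still inside the base directory.
    path = os.path.normpath(os.path.join(REPORTS_DIR, name))
    if os.path.commonpath([path, REPORTS_DIR]) != REPORTS_DIR:
        # "../../etc/passwd" or an absolute path lands outside the base.
        raise ValueError("path traversal attempt: " + name)
    return path
```

Without the `commonpath` check, a request for `"../../etc/passwd"` would happily resolve to a file outside the reports directory.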
In Asserts, however, the 15 issues found by DeepCode are less worrisome, for two reasons:
Asserts is not a client-server application, but an API that runs locally.
Most of the 15 issues are instances of SSRF flagged when Asserts makes HTTP requests via Requests, generally to clients' ToEs (Targets of Evaluation), just as one would in a browser.
Of course, all the issues detected by DeepCode will be taken care of.
Once again, this confirms the other mantra we have held in this Machine Learning (ML) series, and elsewhere on the site: automated tools, even ML-powered ones, may repeat and scale work that a human could not do in a thousand years, but they do not yet have the malice and creativity we humans bring to finding critical and interesting security vulnerabilities.
V. Raychev. 2018. DeepCode releases the first practical anomaly bug detector.
V. Chibotaru. 2019. Meet the tool that automatically infers security vulnerabilities in Python code. Hackernoon.