Cover photo: Git, by Yancy Min on Unsplash.

Big Code

Learning from open source
DeepCode is a new player in the field of machine learning for vulnerability discovery. It has a lot of potential to find bugs in your code by learning from the abundant sources of high-quality code available on GitHub. Let's see how it works and whether it delivers.

In our Machine Learning (ML) for secure code series the mantra has always been the same: to figure out how to leverage the power of ML to detect security vulnerabilities in source code, regardless of the technique, be it deep learning, graph mining, natural language processing, or anomaly detection.

In this article we present a new player in the field: DeepCode, a system that has exactly this purpose, combining ML with data flow analysis, namely in the form of taint analysis.

Taint analysis can come in dynamic and static flavors, and be performed at the source and binary levels, but either way the goal is the same. Start by looking at where input comes from and is controlled by the user, e.g. a web app field to perform a search. These are named sources in this context. Then, continue to follow the thread to where it gets used by the system in a security-critical fashion, e.g. using that info to query a database, to continue with the previous example. These points are called sinks.

Taint analysis diagram
Figure 1. Taint analysis diagram via Coseinc.

Along the way, in a secure application, data should face a serious dose of input sanitization or validation. The routines that do this are called sanitizers in the taint analysis context. Frequently, however, data faces no such checks, and thus vulnerabilities arise.
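In code terms, a source-to-sink flow and its sanitizer might look like this minimal Python sketch (illustrative only, not taken from any tool's rule set):

```python
import sqlite3

def search(conn, term):
    # Source: `term` is attacker-controlled input, e.g. a web search field.
    # Vulnerable sink would be string interpolation into the SQL query:
    #   "SELECT name FROM products WHERE name LIKE '%" + term + "%'"
    # Sanitizer: a parameterized query neutralizes the taint before the sink.
    query = "SELECT name FROM products WHERE name LIKE ?"
    return conn.execute(query, ("%" + term + "%",)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT)")
conn.execute("INSERT INTO products VALUES ('widget')")

print(search(conn, "wid"))          # [('widget',)]
# A classic injection payload is now just a literal search pattern:
print(search(conn, "' OR '1'='1"))  # []
```

A taint analyzer models exactly this: it flags any path from `term` to `execute` that does not pass through a recognized sanitizer.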

Traditional taint analysis tools, however, usually present high false positive rates, as is the case with Bandit and Pyt (see some criticism here).

DeepCode aims to iron out the wrinkles in these taint analysis tools by learning from the vast amounts of freely available, high-quality code that lives in open repositories such as GitHub, a situation dubbed "Big Code". The tool is easy to use and free for open-source projects, and it has the added advantage of also learning from the user's code, from the suggestions made by the tool, and from the user's feedback (whether they accept a suggestion, how they fix issues, etc.).

Another problem with taint analysis is that sources, sinks and sanitizers need to be specified by hand, which is obviously impractical for large-scale projects. This is another area where ML helps DeepCode, but, of course, the secret sauce is not available for further peeking.

DeepCode has been called "Grammarly for code". It claims to achieve 90% precision, to understand the intent behind the code, and to find twice as many issues as other tools, and critical issues at that, such as XSS, SQL injection and path traversal, which typical static analysis tools miss. It also claims to be easy to use, requiring no configuration at all.

The tool is certainly friendly. You need only point it to your repository and give the appropriate permissions, and then it will show you a dashboard with all the issues found. Here is one for Eclipse Che Cloud IDE:

Dashboard for Eclipse Che
Figure 2. Security issues dashboard for Eclipse Che, adapted from DeepCode demo.

Here we can see three instances of a possible path traversal vulnerability. In the full dashboard, we can also see that they report an insecure HTTPS channel, a Server-Side Request Forgery (SSRF), a Cross-Site Scripting (XSS) vulnerability, and a header that leaks technical information (X-Powered-By). And those are only the issues tagged as "security". There are also API misuse issues, e.g. calling one of a Thread's methods directly instead of Thread.start(), general bugs or defects, and now they even throw in lint tool results, which deal with formatting and presentation issues. Oh, yes, and every issue comes with a possible fix you might implement right away.

Quite nice, from the point of view of contributing a new vulnerability report to a project, with no false positives. However, when the aim is to find all vulnerabilities, one cannot help but ask: is that all? Are these all the security vulnerabilities in a project with more than 300,000 lines of code?

Let us try something different. Let us take a couple of the many Vulnerable by Design (VbD) applications we use for training purposes in our Writeups, and see how many vulnerabilities come up when running DeepCode on them. By the way, they currently support JavaScript, TypeScript and Java, besides the original Python. Since most VbD apps are built with PHP, that leaves us with two apps to try: the Damn Vulnerable NodeJS Application (DVNA) and Damn Small Vulnerable Web (DSVW).

I forked both of these on GitHub, signed up for a DeepCode account, and let it run. For DSVW, which is a single Python file under 100 lines of code, yet riddled with vulnerabilities, DeepCode reports zero issues. Perhaps it does not work as well on such tiny projects.

Dashboard for DSVW
Figure 3. Zero issues in DSVW.

This is, to say the least, disappointing, since DSVW has no fewer than 26 different kinds of vulnerabilities, as per its README. In Writeups, three of those have been manually explored and exploited.

Maybe it’s a problem with having so few lines of code, or maybe it’s a Python thing, so let’s try the other one: DVNA, built with NodeJS with the specific purpose of demonstrating the OWASP Top 10 vulnerabilities.

This time around, DeepCode found 9 issues. Of those, 3 come from ESLint, so let us consider the other 6: 2 are API misuses, which basically amount to "use arrow functions instead of regular functions", and 4 are security vulnerabilities, pretty serious ones at that:

  • Code injection via the eval function in the calculator module. Not the same one as in the authors' security guide, and not yet reported in Writeups. This should be researched further.

  • SQL injection. As per security guide and Writeups.

  • Open Redirect. Also in the security guide and Writeups.

  • Technical information leakage via the X-Powered-By header, as in Che.
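The eval issue in DVNA is JavaScript, but the same bug class reads almost identically in Python. Here is a minimal sketch (not DVNA's actual code) of the vulnerable pattern and of the standard library's safer alternative, ast.literal_eval, which only accepts literals (a real calculator would need a proper expression parser):

```python
import ast

def calc_unsafe(expr):
    # Vulnerable: eval() executes arbitrary Python, so an attacker-supplied
    # expression like "__import__('os').system('id')" runs with app privileges.
    return eval(expr)

def calc_safe(expr):
    # Safer: ast.literal_eval only accepts Python literals and raises
    # ValueError on anything with calls, names, or attribute access.
    return ast.literal_eval(expr)

print(calc_unsafe("2 + 3 * 4"))  # 14

try:
    calc_safe("__import__('os').system('id')")
except ValueError:
    print("payload rejected")
```

This is exactly the kind of source-to-sink flow (request parameter into eval) that DeepCode's taint analysis is designed to spot.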

So, altogether, 3 noteworthy security vulnerabilities in a NodeJS application with more than 7,500 lines of code. In Writeups, at least 29 different vulnerabilities have been reported in DVNA. You can also find there a report on manual testing vs the LGTM code-as-data tool, where it is quite clear that that tool misses most of the vulnerabilities as well.

OK, now for a more realistic test, let’s try running DeepCode on some of our own repos: Integrates, our platform for vulnerability centralization and management, and Asserts, our vulnerability automation framework. Both are open source, written in Python, and actively developed. As before, the vast majority of issues found by DeepCode are of the lint and API-usage kind.

Integrates Dashboard
Figure 4. Integrates Dashboard

In Integrates we see a possible command injection in the spreadsheet report generation function. However, this input is not controllable by the user, so this does not pose a real threat at the moment:

Command Injection in Integrates?

However, the suggestion to sanitize the input is not a bad one. Who knows whether Integrates will later have user-configurable passwords for reports? And what if a different vulnerability enables an attacker to change this parameter?
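The name of the suggested sanitizer is elided above, but the usual Python pattern is shlex.quote before the value reaches a shell, or, better, avoiding the shell entirely. A hedged sketch, assuming a hypothetical subprocess-based report command (echo stands in for the real invocation):

```python
import shlex
import subprocess

def export_report(password):
    # Hypothetical report command; Integrates' real invocation differs.
    # shlex.quote neutralizes shell metacharacters in the tainted argument:
    quoted = shlex.quote(password)
    subprocess.run("echo " + quoted, shell=True, check=True,
                   capture_output=True)
    # Better still: pass an argument list with shell=False (the default),
    # so no shell ever parses the tainted value.
    result = subprocess.run(["echo", password], capture_output=True, text=True)
    return result.stdout

# Metacharacters arrive at the command as literal text, not shell syntax:
print(export_report("hunter2; rm -rf /"))  # hunter2; rm -rf /
```

The argument-list form is the more robust of the two, since it removes the shell (and the injection surface) from the picture altogether.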

The other security issue is in the PDF report generation, this time identified as path traversal. Again, it is probably difficult to exploit, but it should be sanitized anyway.
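A generic path traversal check in Python (a sketch, not Integrates' actual code) resolves the candidate path and verifies it stays under the intended base directory:

```python
import os

def safe_join(base_dir, user_path):
    # Resolve both paths, following any "..", ".", and symlinks, then make
    # sure the target is still inside base_dir; otherwise the caller is
    # attempting traversal, e.g. "../../etc/passwd".
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([base, target]) != base:
        raise ValueError("path traversal attempt: %r" % user_path)
    return target

print(safe_join("/tmp/reports", "report.pdf"))
try:
    safe_join("/tmp/reports", "../../etc/passwd")
except ValueError as exc:
    print(exc)
```

Resolving before checking is the important part: a naive prefix check on the raw string is easily bypassed with "..", duplicate slashes, or symlinks.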

Asserts Dashboard
Figure 5. Asserts Dashboard

In Asserts, however, the 15 issues found by DeepCode are less worrisome, for two reasons:

  • Asserts is not a client-server application, but an API that runs locally.

  • Most of the 15 issues are instances of SSRF, flagged when Asserts makes HTTP requests via Requests, generally to clients' ToEs, as one would in a browser.
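For illustration, the shape that trips an SSRF detector is a variable URL flowing into an HTTP client, as Asserts does with requests.get. A small sketch of that pattern with a scheme allowlist on top (the function and URL are hypothetical):

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def check_target(url):
    # An SSRF detector flags a variable (potentially attacker-controlled)
    # URL reaching an HTTP client, e.g. requests.get(url). In Asserts the
    # URL is supplied by the operator testing their own ToE, so the finding
    # is benign, but a cheap allowlist still blocks surprises such as
    # file:// or gopher:// targets.
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        raise ValueError("refusing non-HTTP target: %r" % url)
    return parsed.hostname

print(check_target("https://client-toe.example.com/login"))
```

This is also why taint-based SSRF findings need human triage: the tool cannot know who controls the URL.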

Of course, all the issues detected by DeepCode will be taken care of.

Once again, this confirms the other mantra we have held in this Machine Learning (ML) series, and elsewhere on the site: automated tools, even ML-powered ones, while they may have the potential to do what a human could not do in a thousand years in terms of repetition and scalability, do not yet have the malice and creativity that we humans use to find critical and interesting security vulnerabilities.


  1. V. Raychev. 2018. DeepCode releases the first practical anomaly bug detector.

  2. V. Chibotaru. 2019. Meet the tool that automatically infers security vulnerabilities in Python code. Hackernoon.


Rafael Ballestas


with an itch for CS

