Can Code Be Translated?

From code to words

Now that we have a better understanding of how natural language and code embeddings work, let us take a look at a work by the same authors of code2vec, entitled code2seq: Generating Sequences from Structured Representations of Code [1]. What sequences? you might ask. Sequences of natural language, which can serve different applications depending on the training data. In the original paper, they propose some applications:

  • Code summarization, i.e., explaining in a few words what a snippet of code does, although not necessarily in articulate language.

  • Code captioning, which is pretty much the same, only properly written.

  • Even automatic code documentation; in particular, generating Javadoc documentation given a Java method.

A picture says more than a thousand words:

Sample prediction and generated AST, via the demo site.

Notice that the AST says even less about what this snippet does than the code itself, in my opinion. And yet code2seq sort of manages to understand the intent of this function, which is to generate a prime number for an RSA key. The predicted summary for this method is "generate prime number." Not too shabby.

So, how does it work? Again, as in code2vec, they use randomly sampled AST paths from one leaf token to another as the initial representation of code:

Paths in an AST. From [1].
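To make this concrete, here is a minimal sketch of how such leaf-to-leaf paths could be extracted. It uses Python's built-in ast module on Python code purely for illustration (the actual code2seq extractors target languages like Java and C#, and the paper additionally records the up/down direction of each step and splits tokens into subtokens); none of the function names below belong to the real extractor.

```python
# Minimal sketch: extract leaf-to-leaf AST paths, in the spirit of code2seq.
# Illustration only: uses Python's ast module; the real extractors parse Java/C#.
import ast
import itertools

def leaves_with_paths(node, path=()):
    """Yield (root-to-leaf path of node-type labels, leaf token) pairs."""
    label = type(node).__name__
    # Treat identifiers, literals, and argument names as terminal tokens.
    if isinstance(node, ast.Name):
        yield path + (label,), node.id
        return
    if isinstance(node, ast.Constant):
        yield path + (label,), str(node.value)
        return
    if isinstance(node, ast.arg):
        yield path + (label,), node.arg
        return
    children = list(ast.iter_child_nodes(node))
    if not children:
        yield path + (label,), label
        return
    for child in children:
        yield from leaves_with_paths(child, path + (label,))

def leaf_to_leaf_paths(code, max_paths=5):
    """Join pairs of root-to-leaf paths at their (approximate) common ancestor."""
    leaves = list(leaves_with_paths(ast.parse(code)))
    paths = []
    for (p1, tok1), (p2, tok2) in itertools.combinations(leaves, 2):
        # Length of the shared prefix approximates the depth of the common ancestor.
        common = next((i for i, (a, b) in enumerate(zip(p1, p2)) if a != b),
                      min(len(p1), len(p2)))
        up = list(reversed(p1[common:]))   # climb from the first leaf
        down = list(p2[common - 1:])       # pass the ancestor, descend to the second leaf
        paths.append((tok1, up + down, tok2))
    return paths[:max_paths]

for left, path, right in leaf_to_leaf_paths("def is_even(n):\n    return n % 2 == 0"):
    print(left, path, right)
```

Each printed triple is one path context: two leaf tokens joined by the syntactic route between them, which is what the model consumes instead of raw text.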

This representation, according to them, is a fairly standard representation of code for machine learning purposes and has a few advantages, namely:

  • It does not require semantic knowledge.

  • It works across programming languages.

  • It does not require hard-coding human knowledge into features.

However, as with code2vec, one requires a specific extractor (essentially a tool to parse the code and extract the AST in a specific format understandable by code2*) for each language one intends to analyze. One key difference with code2vec is the use of the long short-term memory (LSTM) neural network architecture, which encodes each of these AST paths as a sequence of nodes. Otherwise, the architecture is pretty similar:

code2seq architecture. From [1].
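To give a feel for that path-encoding step, here is a minimal sketch of how a single path could be turned into a vector with a bidirectional LSTM. It is written in PyTorch with toy dimensions and vocabularies of my choosing; the paper also embeds leaf tokens as sums of subtoken embeddings, which is simplified away here.

```python
# Minimal sketch: encode one AST path with a bidirectional LSTM (code2seq-like).
# Toy dimensions; not the paper's hyperparameters or exact formulation.
import torch
import torch.nn as nn

NODE_VOCAB, TOKEN_VOCAB, DIM = 64, 128, 32

node_emb = nn.Embedding(NODE_VOCAB, DIM)     # embeddings for AST node types
token_emb = nn.Embedding(TOKEN_VOCAB, DIM)   # embeddings for leaf tokens
path_lstm = nn.LSTM(DIM, DIM, bidirectional=True, batch_first=True)
combine = nn.Linear(4 * DIM, DIM)            # merge both leaf tokens with the path

def encode_path(left_token, node_ids, right_token):
    """Turn one leaf-to-leaf path into a single context vector."""
    nodes = node_emb(torch.tensor([node_ids]))       # (1, path_len, DIM)
    _, (h_n, _) = path_lstm(nodes)                   # final hidden states, (2, 1, DIM)
    path_vec = torch.cat([h_n[0], h_n[1]], dim=-1)   # both directions, (1, 2*DIM)
    left = token_emb(torch.tensor([left_token]))     # (1, DIM)
    right = token_emb(torch.tensor([right_token]))   # (1, DIM)
    return torch.tanh(combine(torch.cat([left, path_vec, right], dim=-1)))

# Example: leaf tokens 3 and 7 joined by a path of five node types.
context = encode_path(3, [5, 9, 2, 9, 5], 7)
print(context.shape)  # torch.Size([1, 32])
```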

As with code2vec, the main secret sauce lies in the attention mechanism and in the encoding and decoding layers, which somewhat resemble the inner workings of an autoencoder, a concept we met earlier and which serves as a stepping stone toward understanding the vector representation of code and other objects.
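In essence, at each step of generating the output sentence, the decoder scores every encoded path and averages them according to those scores. The sketch below shows that general idea with simple dot-product attention; it is an illustration, not code2seq's exact attention formula.

```python
# Minimal sketch: dot-product attention over encoded path vectors.
import torch
import torch.nn.functional as F

def attend(decoder_state, path_vectors):
    """decoder_state: (DIM,); path_vectors: (num_paths, DIM)."""
    scores = path_vectors @ decoder_state    # one relevance score per path
    weights = F.softmax(scores, dim=0)       # attention distribution (sums to 1)
    context = weights @ path_vectors         # weighted average of the paths
    return context, weights

paths = torch.randn(200, 32)   # e.g., 200 sampled paths, each encoded as a 32-d vector
state = torch.randn(32)        # current decoder hidden state
context, weights = attend(state, paths)
print(context.shape)           # torch.Size([32])
```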


Another interesting under-the-hood idea of code2seq is to take after seq2seq models, which are widely used for natural language translation with neural networks (neural machine translation). The idea is to connect two separate neural networks: one encodes the source language and the other decodes into the target language. This already suggests an intermediate representation, a 'universal language' of sorts, that only these kinds of networks understand. Again, this is a bit reminiscent of the autoencoder example and most likely stemmed from that seminal idea.
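A bare-bones version of that encoder-decoder connection looks like the sketch below: one recurrent network compresses the source sequence into a hidden state, which then becomes the initial state of the network that generates the target sequence. All sizes and names here are toy assumptions; production systems (and code2seq) add attention and other machinery on top.

```python
# Minimal sketch: a seq2seq encoder-decoder pair connected by a hidden state.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, DIM = 1000, 1000, 64

class TinySeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, DIM)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, DIM)
        self.encoder = nn.GRU(DIM, DIM, batch_first=True)
        self.decoder = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, TGT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        _, hidden = self.encoder(self.src_emb(src_ids))   # the "universal language" vector
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), hidden)
        return self.out(dec_out)                          # next-word logits per target position

model = TinySeq2Seq()
logits = model(torch.randint(0, SRC_VOCAB, (2, 12)),   # two source sequences of length 12
               torch.randint(0, TGT_VOCAB, (2, 6)))    # two target prefixes of length 6
print(logits.shape)  # torch.Size([2, 6, 1000])
```

Changing what goes into src_ids and tgt_ids (sentence pairs in two languages, or code representations paired with summaries) is what turns the same machinery into a translator or into something like code2seq.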

seq2seq diagram, via d2l.ai.

Needless to say, this kind of translator network achieves better results than deterministic methods and is in fact used in production translators nowadays. Not just that: these networks can be used not only for translation but also, e.g., for chatbots, simply by changing the training data: instead of giving pairs of sentences in different languages, just match questions with their answers, or sentences that naturally follow one another.

And, as we see here, with careful adjustment, the idea can be applied even to more structured languages, such as programming languages. The results are better than the current benchmarks, including the authors' own previous work, code2vec:

code2seq results. From [1].

The image on the left refers to the results of the summarization task on Java source code. The different methods are compared using the F1 score (see the discussion in our last article for details, but keep in mind this score balances how much is actually found against how much escapes). The one on the right does the same for the C# captioning application, this time comparing bilingual evaluation understudy (BLEU) scores, which are specific to machine translation. Clearly, for both tasks, code2seq beats the current state of the art.
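For reference, the F1 score over the predicted words of a summary can be computed roughly as in the small sketch below (a generic illustration of precision, recall, and their harmonic mean, not the authors' exact evaluation script).

```python
# Minimal sketch: F1 over predicted vs. reference words of a summary.
from collections import Counter

def f1(predicted, reference):
    """predicted, reference: lists of words, e.g. ['generate', 'prime', 'number']."""
    overlap = sum((Counter(predicted) & Counter(reference)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)   # how much of what was said is right
    recall = overlap / len(reference)      # how much of the truth was found
    return 2 * precision * recall / (precision + recall)

print(f1(["generate", "prime"], ["generate", "prime", "number"]))  # 0.8
```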

As for using it for our purposes and testing its accuracy, code2seq provides pretty much the same interface as code2vec, which you can check out in our last article, so we might expect the same ease of use. Only further experiments with the embeddings produced by this and code2vec will let us decide which one to go with for our classifier.

While code summarization and captioning are the only two applications researched by the authors (documentation generation is only proposed), this might have applications beyond that. One idea off the top of my head: while our code classifier is supposed to only give the probability of a file or function containing a vulnerability, it could also produce a list of the possible specific types of vulnerabilities. To reuse the example above, imagine that instead of predicting the words "generate prime number," it would predict "buffer overflow," assuming the function contained such a vulnerability, and perhaps other kinds of vulnerabilities with lower probabilities, such as "lack of input validation." That is an interesting direction to research, i.e., being more specific in the predictions, one that has come up a lot during the talks, and one that we will certainly keep in mind.
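As a purely hypothetical sketch of that idea, one could keep a code2seq-like encoder and, instead of decoding the words of a name, put a small classification head on top of the aggregated code vector to score vulnerability types. The label set, sizes, and layer below are invented for illustration; this is not an existing model.

```python
# Hypothetical sketch: score vulnerability types from an aggregated code vector.
# Labels, sizes, and head are illustrative assumptions, not an existing model.
import torch
import torch.nn as nn

VULN_LABELS = ["buffer overflow", "lack of input validation", "sql injection", "none"]
head = nn.Linear(32, len(VULN_LABELS))      # sits on top of a 32-d code vector

code_vector = torch.randn(1, 32)            # would come from a code2seq-like encoder
probs = torch.sigmoid(head(code_vector))[0] # independent probability per label
for label, p in sorted(zip(VULN_LABELS, probs.tolist()), key=lambda x: -x[1]):
    print(f"{label}: {p:.2f}")
```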

Overall, code2seq is an innovative way of looking at the relationship between code and natural language, bringing into the game sophisticated techniques from the field of neural machine translation and exploiting the rich syntax of code in the form of its AST, which, as we have seen throughout this series, is one of the simplest and most successful ways of representing code features. Stay tuned for more of this.

References

  1. U. Alon, S. Brody, O. Levy, and E. Yahav. code2seq: Generating Sequences from Structured Representations of Code. ICLR 2019.
