Felipe Ruiz
Attackers can indirectly instruct AI for malicious aims
We show how LLM-integrated apps can suffer indirect prompt injection attacks through different methods, putting their users' security at risk.
Felipe Ruiz
NIST sheds light on the classification of attacks on AI
Here is an overview of a recent NIST report on adversarial machine learning that can help us better understand attacks against and from AI systems.
Julian Arango
A chat with Daniel Correa
We had the pleasure of chatting with Daniel Correa, a security expert who shared his views on current threats, human factors in cybersecurity, and technology.
Rafael Ballestas
With symbolic execution
Here's a reflection on the need to represent code before feeding it into neural-network-based encoders such as code2vec, word2vec, and code2seq.
Rafael Ballestas
From code to words
Here we talk about code2seq, which differs from code2vec in adapting neural machine translation techniques to map a snippet of code to a sequence of words.
Rafael Ballestas
Vector representations of code
Here is a tutorial on using code2vec to predict method names, determine the model's accuracy, and export the corresponding vector embeddings.
Rafael Ballestas
Vector representations of code
Here we discuss code2vec's relation to word2vec and autoencoders to better grasp how feasible it is to represent code as vectors, which is our main interest.
Rafael Ballestas
Distributed representations of natural language
This post is an overview of word2vec, a method for obtaining vectors that represent natural language in a way that is suitable for machine learning algorithms.
Rafael Ballestas
Prioritize code auditing via ML
This post is a high-level review of our previous discussion concerning machine learning techniques applied to vulnerability discovery and exploitation.