Machine Learning in an Adversarial Environment: Part 1

The applications of machine learning have spread into many fields, and security is no exception: flagging anomalous behavioural patterns, classifying files as malicious, and so on. As Mitchell notes, machine learning is most useful where a solution is too complex to define by hand, or where the environment is in so much flux that it evolves dynamically. Security is often thought of as a chess game in which the players not only try to out-think and undermine each other, but must do so while anticipating the other’s moves. Currently deployed systems that make use of machine learning include spam filters, intrusion detection systems, virus detectors and so on. In these systems the algorithm used (Markov models, ANNs, SVMs, etc.) is retrained on new data to accommodate the latest patterns and trends. The problem arises because adversaries are also aware that this retraining takes place. Adversarial machine learning is the study of how to protect these algorithms from an adversary who may use knowledge of how they work to compromise the system.

The basic problem with this approach is that the learning cannot be static: if the algorithm is not updated with the latest trends and patterns, the whole point of the security system is lost. But this dynamic environment also provides opportunities to cause mayhem, e.g. by poisoning the learner’s classifications in a targeted manner [1]: crafting input data so that the learner infers features that are not actually malicious (increasing false positives), or biasing the learnt patterns so that other inputs are misclassified. The types of attacks and their taxonomy are described in the research paper [2], “Can Machine Learning be Secure?” by Marco Barreno et al. The relevant properties of an attack, as explained in [2], are given below:

  • Influence of the Attack
    • Causative: Causative attacks alter the training process through influence over the training data.
    • Exploratory: Exploratory attacks do not alter the training process; instead they use other techniques, such as probing the learner or offline analysis, to discover information (a sketch of such a probe follows this list).
  • Specificity of the Attack
    • Targeted: Specificity is a continuous spectrum. At the targeted end, the attack focuses on a single event or a small set of events, for example the misclassification of one specific family of malware. The aim is to get that particular malware through.
    • Indiscriminate: At the indiscriminate end, the adversary has a more flexible goal involving a very general class of events, such as the misclassification of any family of malware. The aim is to compromise the system first; the specific malware is crafted only after the misclassification attack has succeeded.
  • Security Violation of the Attack
    • Integrity: An integrity attack results in the misclassification of a malicious event as benign (a false negative).
    • Availability: An availability attack is a broader class of attack than an integrity attack. An availability attack results in so many classification errors, both false negatives and false positives, that the system becomes effectively unusable.
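
To make the causative/exploratory distinction concrete, here is a minimal sketch of an exploratory probe, assuming a toy scikit-learn detector trained on synthetic two-feature data (all names, features and data here are made up for illustration). The adversary never touches the training set; it only queries the deployed model with perturbed versions of a malicious sample, keeping changes that lower the detector’s confidence, until one slips through.

```python
# Hedged sketch of an exploratory attack: the training data is never touched;
# the adversary only queries the deployed model (an evasion probe).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Toy detector: two numeric features per sample, label 1 = malicious, 0 = benign.
X = np.vstack([rng.normal([0, 0], 0.5, (200, 2)),
               rng.normal([2, 2], 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
detector = RandomForestClassifier(random_state=0).fit(X, y)

# Start from a sample the detector flags as malicious and probe it with small
# perturbations; each query leaks a little information about the boundary.
sample = np.array([2.0, 2.0])
score = detector.predict_proba(sample.reshape(1, -1))[0, 1]
for step in range(200):
    if detector.predict(sample.reshape(1, -1))[0] == 0:
        print(f"evaded after {step} probes: {sample}")
        break
    candidate = sample + rng.normal(0, 0.1, size=2)
    cand_score = detector.predict_proba(candidate.reshape(1, -1))[0, 1]
    # Keep the perturbation only if the detector now looks less confident
    # that the sample is malicious.
    if cand_score <= score:
        sample, score = candidate, cand_score
else:
    print("no evasion found within the probe budget")
```

In a real attack the perturbations would have to preserve the malicious payload, which this toy feature-space walk does not model.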

By these definitions, an attack that poisons the training data so that a malware classifier labels a piece of malware as benign is a causative integrity attack, as the sketch below illustrates.
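
The sketch below is a hedged illustration of that causative integrity attack under the same toy assumptions (synthetic two-feature data, a simple logistic regression standing in for a real detector): the adversary plants mislabelled points in the training set so that, after retraining, the model learns the targeted malware family as benign.

```python
# Hedged sketch of a causative integrity attack: mislabelled points injected
# into the training set push a target region of feature space towards benign.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two features per sample, label 1 = malicious, 0 = benign.
benign = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
malicious = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))
X_clean = np.vstack([benign, malicious])
y_clean = np.array([0] * 200 + [1] * 200)

# The attacker crafts points that resemble the malware family it wants
# misclassified but labels them benign (e.g. via a feedback channel the
# defender trusts during retraining).
poison = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(100, 2))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.concatenate([y_clean, np.zeros(100, dtype=int)])

# Samples from the malware family the attacker wants to get through.
target_malware = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(50, 2))

for name, X, y in [("clean", X_clean, y_clean),
                   ("poisoned", X_poisoned, y_poisoned)]:
    clf = LogisticRegression().fit(X, y)
    false_negatives = np.mean(clf.predict(target_malware) == 0)
    print(f"{name} model: {false_negatives:.0%} of target malware classified benign")
```

Running the two fits side by side shows the false-negative rate on the targeted malware rising once the poisoned data is included, which is exactly the integrity violation described above.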

[1] Ling Huang et al., “Adversarial Machine Learning”.

[2] Marco Barreno et al., “Can Machine Learning be Secure?”.
