CCS2011: Enemy of the Good

From Soma-notes

Revision as of 08:51, 4 April 2011

Title

The Enemy of the Good: Re-evaluating Research Directions in Intrusion Detection

Abstract

Research in intrusion detection is in decline---less and less work in the field is being published in competitive venues. Here we argue that a key reason for this decline is a misunderstanding of the significance and nature of false positive rates. False positives---legitimate behavior that is misclassified as potentially malicious---have a huge impact on the viability of any intrusion detection method in the real world. A survey of the literature, however, shows that false positive rates have remained persistently high in published reports. In this paper we argue that this persistence is due to the nature of the data sources used by intrusion detection systems. In support of this position, we present the requirements for viable intrusion detection systems, relate those requirements to the accuracy required of detection methods, and then show that existing data sources cannot be modeled with that accuracy. To address these observations, we argue that research in intrusion detection must move away from the pure study of detection methods and towards the study of deployable detection/response mechanisms that directly accommodate relatively high false positive rates.

Introduction

As a research field, intrusion detection should be booming. Attackers are mounting increasingly sophisticated attacks that allow them to compromise systems and exfiltrate data from major corporations and governments. At the same time, regular users are threatened daily with everything from drive-by downloads to phishing attacks to cross-site scripting exploits. In this environment where attackers are routinely circumventing existing defenses, we clearly need better ways to detect security violations. Intrusion detection should be an obvious approach to addressing this problem. Despite the clear need, however, there are signs everywhere that the field is in decline. General security venues publish few papers in intrusion detection, and even specialized venues such as RAID are publishing fewer and fewer intrusion detection papers.

It is not hard to see why interest in the field is fading: existing intrusion detection systems are just not that useful, and research systems don't seem to improve on this situation. Mainstream signature-based IDSs require extensive tuning to deliver reasonable false alarm rates, and even then they are extremely limited in their ability to detect novel attacks. Research systems, particularly ones based on machine learning methods, are often computationally expensive and/or can be circumvented by sophisticated attackers. Worse, these systems, too, have high false alarm rates, and those rates are not so easily tuned. Given this dismal situation, is it any wonder that researchers would look for greener pastures?

In this paper we aim to reframe the intrusion detection problem such that the limitations of past work can be seen not as failures, but merely as incomplete portions of a larger whole. To this end, we focus on the problem of false alarms. Expensive detection methods can be addressed by better algorithms and data collection methods. Limited attack coverage or susceptibility to evasive attackers can be addressed by using additional defense mechanisms. High false alarm rates, however, are crippling: they turn potentially useful detectors into mechanisms that system administrators and users will turn off or simply ignore. Simply put, intrusion detection is not a viable defense strategy unless the false alarm problem can be adequately solved.

Much past work in intrusion detection has glossed over the false positive problem by arguing that false positives can be arbitrarily reduced through appropriate engineering, e.g., by correlating alerts with the output of other sensors and detectors. While we agree that there are a variety of ways false positives can be reduced in theory, it is remarkable that in published work virtually nobody is able to get false alarm rates low enough to make intrusion detection viable, even with the use of false alarm mitigation strategies.

The false positive problem is the central problem of intrusion detection, and it is a problem that arises not from the limitations of algorithms but from the nature of the problem itself. In particular, at the high data rates that most intrusion detection systems deal with, "rare events" (events of low probability) happen at a surprisingly large absolute frequency---they can sometimes occur thousands of times a day. And, when they do happen, they can generate alarms in any kind of IDS, whether it be a signature-, specification-, or anomaly-based system. The challenge, then, is not to model these rare events (that is impossible) but instead to develop intrusion detection and response systems that perform well even in the face of rare, unmodeled events.
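To make the scale of the problem concrete, here is a back-of-the-envelope sketch in Python; the event volume, false positive rate, attack prevalence, and detection rate are illustrative assumptions, not measurements from any particular deployment.

# Back-of-the-envelope illustration of the base-rate effect described above.
# All numbers are assumptions chosen for illustration, not measurements.
events_per_day = 10_000_000   # assumed volume of monitored events (e.g., packets)
fp_rate = 0.001               # assumed false positive rate: 0.1% of benign events
attack_prevalence = 1e-5      # assumed fraction of events that are actually malicious
detection_rate = 0.9          # assumed true positive rate of the detector

benign = events_per_day * (1 - attack_prevalence)
malicious = events_per_day * attack_prevalence

false_alarms = benign * fp_rate           # ~10,000 false alarms per day
true_alarms = malicious * detection_rate  # ~90 true alarms per day

# Fraction of raised alarms that correspond to real attacks.
precision = true_alarms / (true_alarms + false_alarms)

print(f"False alarms/day: {false_alarms:,.0f}")
print(f"True alarms/day:  {true_alarms:,.0f}")
print(f"Alarm precision:  {precision:.1%}")   # roughly 0.9%

Even at a 0.1% false positive rate, the rate the Stolfo quote below describes as potentially "untenable", false alarms outnumber true alarms by roughly two orders of magnitude under these assumptions, so almost every alarm an analyst sees is a false one.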

To explain why the direction of IDS research needs to change, we start with a set of requirements for a viable (deployable) intrusion detection system. Next, in Section 3, we explore machine learning methods and examine under what circumstances they can perform with the accuracy required for traditional framings of the intrusion detection problem. In Section 4 we then examine the nature of the standard data sources used by IDSs and show that they are fundamentally not amenable to the kind of accurate modeling required to give sufficiently low false positive rates. Section 5 surveys the literature and shows that past work in IDS is consistent with our conclusions. We explore the implications of our observations on the field in Section 6. Section 7 concludes.

Intrusion Detection Requirements

State of the Art in Machine Learning

Colin's section

Characteristics of IDS Data

Luc's section

The False Alarm Problem

Stolfo's comment from his paper "Anomalous Payload-Based Network Intrusion Detection":

"The False Positive rate of Anomaly Detection systems are typically regarded as an inhibitor to their wide spread use. In this work, for example, 0.1% FP rate means that about 1 per thousand packets are flagged as anomalous. Such a rate might be considered untenable, rendering anomaly detection systems unusable. This argument is not quite correct. We shall not argue that the False Negative rate of signature-based misuse detection systems causes far more problems than a false alarm. Rather, we make the assertion that it may be better to generate more anomaly detector alerts (and consequently possibly more false alerts) to provide more evidence to correlate with other sensors to better detect a true attack. Those anomaly detector alerts that have no other confirmatory evidence of an attack from another sensor might be ignored. Those that correlate with other anomalous events would tend to strengthen the likelihood that a security event has indeed occurred hence generating very interesting alarms. This means that one should not view an anomaly detection system as a singular monolithic detector, but a component in a correlated set of detectors, including possibly misuse detectors"

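The correlation strategy described in the quote can be illustrated with a minimal Python sketch; the Alert record, the sensor names, and the 300-second window below are illustrative assumptions, not details of Stolfo's system. An anomaly alert is escalated only if some other sensor has raised an alert about the same host within the window.

# Sketch of cross-sensor alert correlation, assuming hypothetical alert records.
from dataclasses import dataclass

@dataclass
class Alert:
    sensor: str   # e.g., "payload_anomaly", "signature", "netflow" (illustrative names)
    host: str     # host the alert refers to
    time: float   # timestamp in seconds

def escalate(anomaly_alerts, other_alerts, window=300.0):
    """Keep only anomaly alerts corroborated by another sensor within `window` seconds."""
    return [a for a in anomaly_alerts
            if any(o.host == a.host and abs(o.time - a.time) <= window
                   for o in other_alerts)]

# Example: one corroborated and one uncorroborated anomaly alert.
anomalies = [Alert("payload_anomaly", "10.0.0.5", 1000.0),
             Alert("payload_anomaly", "10.0.0.9", 5000.0)]
others = [Alert("signature", "10.0.0.5", 1100.0)]
print(escalate(anomalies, others))   # only the 10.0.0.5 alert is escalated

As argued in the introduction, however, published results suggest that such mitigation strategies have not by themselves brought false alarm rates low enough to make intrusion detection viable.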
Other Critiques of IDS

Discuss past work criticizing IDS research

Potential Solutions

Discussion

Synthetic versus real data issue; attack distribution issue

Conclusion

References