CCS2011: Enemy of the Good

Title

The Enemy of the Good: Re-evaluating Research Directions in Intrusion Detection

Abstract

Research in intrusion detection is in decline: less and less work in the field is being published in competitive venues. Here we argue that a key reason for this decline is a misunderstanding of the significance and nature of false positive rates. False positives, legitimate behavior misclassified as potentially malicious, have a huge impact on the viability of any intrusion detection method in the real world. A survey of the literature, however, shows that reported false positive rates have remained persistently high. In this paper we argue that this persistence is due to the nature of the data sources used by intrusion detection systems. In support of this position, we present the requirements for viable intrusion detection systems, correlate those requirements with the requirements of accurate detection methods, and then show that existing data sources cannot be modeled with the necessary accuracy. To address these observations, we argue that research in intrusion detection must move away from the pure study of detection methods and towards the study of deployable detection/response mechanisms that directly accommodate relatively high false positive rates.

Introduction

As a research field, intrusion detection should be booming. Attackers are mounting increasingly sophisticated attacks that allow them to compromise systems and exfiltrate data from major corporations and governments. At the same time, regular users are threatened daily with everything from drive-by downloads to phishing attacks to cross-site scripting exploits. In an environment where attackers routinely circumvent existing defenses, we clearly need better ways to detect security violations, and intrusion detection should be an obvious approach to addressing this problem. Despite the clear need, however, there are signs everywhere that the field is in decline. General security venues publish few papers in intrusion detection, and even specialized venues such as RAID are publishing fewer and fewer intrusion detection papers.

It is not hard to see why interest in the field is fading: existing intrusion detection systems are just not that useful, and research systems do not seem to improve on this situation. Mainstream signature-based IDSs require extensive tuning to deliver reasonable false alarm rates, and even then they are extremely limited in their ability to detect novel attacks. Research systems, particularly those based on machine learning methods, are often computationally expensive, easily circumvented by sophisticated attackers, or both. Worse, these systems also suffer from high false alarm rates, and their rates cannot be tuned so easily.
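To see why persistently high false alarm rates are so damaging in practice, consider a back-of-the-envelope calculation. The event volume, base rate, detection rate, and false positive rate below are illustrative assumptions rather than measurements, but they capture the underlying base-rate effect: because legitimate events vastly outnumber attacks, even a detector with a 1% false positive rate buries the handful of true alerts under thousands of false ones. A minimal Python sketch:

# Illustrative (assumed) numbers showing how a "low" false positive rate
# still overwhelms analysts when real attacks are rare.

events_per_day = 1_000_000    # audit events or packets per day (assumption)
base_rate      = 0.00001      # fraction of events that are malicious (assumption)
detection_rate = 0.9          # detector's true positive rate (assumption)
fp_rate        = 0.01         # detector's false positive rate (assumption)

attacks   = events_per_day * base_rate            # ~10 malicious events
benign    = events_per_day - attacks              # ~999,990 benign events
true_pos  = detection_rate * attacks              # ~9 genuine alerts
false_pos = fp_rate * benign                      # ~10,000 false alerts

precision = true_pos / (true_pos + false_pos)     # P(real attack | alarm)

print(f"alerts per day: {true_pos + false_pos:.0f}")
print(f"fraction of alerts that are real: {precision:.4f}")

Under these generous assumptions, fewer than one alert in a thousand corresponds to a real intrusion, so even modest-looking false positive rates determine whether a system is usable at all.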

While research into intrusion detection started in the late 1970s, the area became popular in the mid-1990s as the World Wide Web took off. Security researchers knew that the increasing use of the Internet would bring with it

Intrusion Detection Requirements

State of the Art in Machine Learning

Colin's section

Characteristics of IDS Data

Luc's section

The False Alarm Problem

(need better title)

Mohamed's section

Other Critiques of IDS

Discuss past work on criticizing IDS research

Potential Solutions

Discussion

synthetic versus real data issue
attack distribution issue

Conclusion

References