SystemsSec 2016W Lecture 20

From Soma-notes
Revision as of 05:52, 27 March 2016 by Michelberg (talk | contribs)

Topics and Readings

Notes

DoS (Denial of Service) Attacks

Introduction

A DoS attack occurs when a host floods a target system (or any of its resources) with illegitimate traffic, preventing legitimate requests from being processed. The flood can target network queues, or even a server's CPU or RAM, though the readings concentrate primarily on the networking side.

  • An example of a memory-based DoS attack involves Zip Bombs. A Zip Bomb is a large amount of highly repetitive data that compresses down to a very small size. When a server opens it (for malware-detection purposes, say), decompression quickly exhausts the system's available RAM and potentially even its swap space, grinding the system to a halt.
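The compression-ratio principle behind a zip bomb can be seen with a few lines of Python. This is a harmless sketch (it only compresses, never decompresses a hostile archive); the payload size is arbitrary:

```python
import zlib

# A highly repetitive payload compresses extremely well, which is the
# principle behind a zip bomb: a few kilobytes on the wire can expand
# to many megabytes in RAM when decompressed.
payload = b"\x00" * 10_000_000          # 10 MB of zeros
compressed = zlib.compress(payload, level=9)

ratio = len(payload) / len(compressed)
print(f"{len(compressed)} compressed bytes -> {len(payload)} bytes "
      f"(ratio ~{ratio:.0f}:1)")
```

Real zip bombs push this much further by nesting archives inside archives.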

A network-based DoS attack exploits a weakness in how a computer network manages traffic. Many systems have no way to distinguish attack-style requests from legitimate ones, so attack traffic is treated as legitimate and is not filtered out.

  • An example is the SYN flood attack, where the first part of a TCP handshake (the SYN message) is sent to the server and the attacker simply ignores the SYN-ACK response. The server waits for the attacker's ACK, holding resources open for a set amount of time. Once the server's resources run out, it can no longer accept new SYN requests from legitimate users.
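The resource exhaustion described above can be modeled with a toy simulation. This is not real TCP, just a sketch of a fixed-size backlog of half-open connections (the backlog size and client names are made up):

```python
# Toy model of a server's SYN backlog (illustrative, not real TCP).
# Each incoming SYN reserves a half-open slot until the handshake
# completes; a flood of SYNs that never ACK fills the backlog and
# locks out legitimate clients.

BACKLOG_SIZE = 128

class SynBacklog:
    def __init__(self, size):
        self.size = size
        self.half_open = set()

    def on_syn(self, client):
        if len(self.half_open) >= self.size:
            return False          # backlog full: SYN dropped
        self.half_open.add(client)
        return True               # server sends SYN-ACK, waits for ACK

    def on_ack(self, client):
        self.half_open.discard(client)  # handshake done, slot freed

backlog = SynBacklog(BACKLOG_SIZE)

# Attacker sends spoofed SYNs and never ACKs...
for i in range(BACKLOG_SIZE):
    backlog.on_syn(f"spoofed-{i}")

# ...so a legitimate client's SYN is now dropped.
print(backlog.on_syn("legit-user"))   # False
```

Real kernels mitigate this with timeouts and SYN cookies, but the basic pressure on the backlog is the same.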

The resulting effects are pretty simple: you can't use the network as intended. The server is receiving too many packets, and yours can't get through.

DoS attacks can be on both public and private networks, though private attacks (e.g. preventing one specific n00b from accessing a lobby server after a CS:GO match didn't go your way) are far less of a concern than, say, a multinational banking server going down.

DDoS (Distributed Denial of Service) Attacks

Because no single computer is powerful enough to flood any sensibly provisioned network nowadays, an effective DoS attack spreads the barrage across many computers instead of one. The combined traffic can overwhelm a target network's resources even when each individual attacking host (which need not belong to the attacker, in the case of public servers or compromised systems) has limited bandwidth of its own. This is called a Distributed Denial of Service attack, due to its distributed nature.

Amplification Attack

An amplification attack is an attack propagated over public networks. It is made possible by small requests that elicit substantially larger responses from a server.

  • An example discussed in class is the NTP (Network Time Protocol) server, which responds to a single UDP "monlist" packet with a list of the last 600 IP addresses that connected [article]. By spoofing the source address on many of these requests, the combined responses can be aimed at a particular network as a DDoS attack.
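The economics of amplification come down to simple arithmetic: response bytes divided by request bytes. The numbers below are illustrative assumptions, not measured values for any real NTP server:

```python
# Back-of-the-envelope amplification factor (illustrative numbers):
# a small spoofed request elicits a much larger reply, all of which
# lands on the victim whose address was forged.

request_bytes = 64                 # assumed size of one small UDP query
response_packets = 100             # assumed: reply split over many packets
bytes_per_packet = 440             # assumed per-packet payload size

response_bytes = response_packets * bytes_per_packet
amplification = response_bytes / request_bytes
print(f"{request_bytes} B in -> {response_bytes} B out "
      f"(~{amplification:.0f}x amplification)")
```

With factors in the hundreds, even a modest botnet's outbound bandwidth becomes a very large flood at the victim.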

Preventing a DoS Attack

To minimize or prevent the effects of an attempted DoS attack, a server needs a way to differentiate legitimate traffic from illegitimate traffic, preferably before the spurious packets propagate through the network. One approach is to route the major points in a network through powerful, high-bandwidth machines that process incoming traffic before it reaches the less powerful routers and switches, which would be hit hardest by large-scale traffic floods. This can present some problems, however:

A traffic analogy:

  • Many cars drive along a highway. They typically travel at high speeds, utilizing and filling up the 4 lanes available.
  • When construction cuts the number of lanes down to one, every car must merge into one lane at the threshold. This slows traffic considerably.
  • A preventative measure is to redirect the traffic around these narrow roads.
    • Like with roads, a network's topology cannot typically reconfigure itself on the fly. Roads cannot be built and destroyed whenever traffic gets slow.
  • Another method for preventing traffic is to just vaporize the cars.

Going back to networking, the last point above is much easier with packets: just drop them. Legitimate traffic is typically of a certain type, so the odd ones can be dropped before they reach the inner network. Attack traffic can still be faked to look legitimate, but this filters out the less sophisticated attacks. For the packets that remain, however, figuring out which ones need to be dropped can be resource-intensive, even for powerful computers. Deep packet inspection or authentication works, but that takes time.
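The "drop the odd ones first" idea can be sketched as a cheap edge filter that runs before any expensive deep inspection. The packet representation, protocols, and ports here are hypothetical:

```python
# Sketch of a coarse edge filter: drop packet types the inner network
# never expects, before any expensive deep inspection. The dict-based
# packet representation and allowed sets are made up for illustration.

ALLOWED_PROTOCOLS = {"tcp", "udp"}
ALLOWED_PORTS = {80, 443}

def fast_filter(packet):
    """Cheap checks first: protocol, then destination port."""
    if packet["proto"] not in ALLOWED_PROTOCOLS:
        return "drop"
    if packet["dst_port"] not in ALLOWED_PORTS:
        return "drop"
    return "pass"   # survivors may still get deeper (slower) scanning

print(fast_filter({"proto": "tcp", "dst_port": 443}))   # pass
print(fast_filter({"proto": "icmp", "dst_port": 0}))    # drop
```

The point of the ordering is cost: set membership is nearly free, so the expensive analysis only ever sees traffic that already looks plausible.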

SDN (Software Defined Networking) and Bohatei

At its core, networking means routing packets to their destinations, using routing tables and other networking rules. In high-speed routers these rules are baked into specialized, very fast silicon, making table lookups highly optimized and efficient. Unfortunately, that also makes the hardware proprietary and expensive. SDN (Software Defined Networking) removes some of these limitations by replacing the expensive silicon with flexible software running on general-purpose computers.

  • Some of the operations are therefore going to be slower. This is a given, and likely a fair trade.
  • This software is much more flexible and scalable than other physical DDoS protection deployments (expensive machines added to a network designed to handle DDoS-related attacks).
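What "replacing silicon with software" means concretely is that forwarding decisions like longest-prefix match become ordinary code. A minimal sketch, with hypothetical networks and next hops:

```python
import ipaddress

# Minimal software routing table: longest-prefix match over a list of
# (network, next_hop) rules. This is the kind of lookup that dedicated
# router silicon optimizes; SDN does it in ordinary code instead.
# All addresses and hop names here are made up.

ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "core-router"),
    (ipaddress.ip_network("10.1.0.0/16"), "edge-switch-1"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    # Longest prefix wins, exactly as in a hardware forwarding table.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))    # edge-switch-1
print(next_hop("8.8.8.8"))     # default-gateway
```

This linear scan is obviously slower than a hardware TCAM lookup, which is exactly the trade-off noted above, but the rules can be rewritten on the fly.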

The paper discusses planning for attacks (what attacks you could get, not what attacks you will get):

  • During certain attack scenarios, (e.g. SYN flood), it will route that specific type of traffic (i.e. SYN packets) using a different set of routing rules and network topologies.
    • The new (temporary) topology is not optimal because it is case-specific. There will be some degradation of service (SYN packets are handled more slowly), but the rest of the network should be able to cope, and once the attack is finished the network recovers by reverting to the more optimal topologies and routing tables.
    • Going back to the traffic analogy, it would be like changing or redefining roads on the fly (changing directions of one-way streets... sort of like the Champlain Bridge between Ottawa and Gatineau).

A caveat for the above, related to the paper:

  • It is assuming that an entire network is running on SDN, which is not the case at present.
    • The paper does not cover hybrid solutions.
    • If a corporation is looking for DDoS protection services, it would likely be inclined to just add a specialized machine to handle that traffic instead of migrating the entire network over to SDN.
  • All classes of attacks need to be defined in advance for Bohatei and other software to operate properly.
  • The paper doesn't take into consideration friendly DDoS attacks (e.g. accidental DDoS when a low-traffic public webserver hosts content that suddenly goes viral).

Side Note

Attribution - The internet was not designed with metering in mind, so how do you know who is responsible for a DDoS attack? Attack packets typically carry spoofed source addresses and are sent from compromised machines, so it is very difficult to legitimately trace them back to their true sources. DDoS attacks are very noisy attack methods, with packets scattered all over.

Pinning

CA (Certification Authority) Certificates & PKI (Public Key Infrastructure)

A modern web browser ships with a built-in set of CA (certification authority) certificates. When you visit a website over SSL/TLS, the browser verifies the certificate chain presented by the server against those trusted CAs. If the certificate doesn't validate (e.g. it is unauthenticated or expired), the connection is dropped. TLS itself is reliable, but once keys are changed, connections that expected the old keys are refused.

  • Websites typically use only a certain set of certificates. For example, Facebook supposedly always goes with Verisign certificates (though upon verification it appears to be issued by DigiCert).
  • Wildcard certificates are certificates that can be used for any number of subdomains (e.g. *.facebook.com).
    • Wildcard certificates are typically less secure than individual (more specific) certificates.
    • More specific certificates can be specified (i.e. pinned) for a stricter subset of those subdomains, however. This is called Certificate Pinning.

Certificate Pinning

Pinning is the specification of which certificates you explicitly trust. It is a way to make verification more trustworthy (at the cost of speed).

Facebook example:

Verisign → FB1 → FB2
  • Verisign is the root certificate, and it is verified. In practice a pin stores a fingerprint (a hash of the public key, used to represent the longer key itself).
  • Each certificate down the chain is more specific: FB2 is a more specific CA certificate than FB1. If you "pin" a certificate further down the chain, you declare that you trust that less generic certificate. More specific certificates are more secure.
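A pin check boils down to hashing the presented public key and comparing it against a stored fingerprint. A minimal sketch; the key bytes are fake placeholders, not real certificate material:

```python
import hashlib

# Sketch of a pin check: the app stores a hash (fingerprint) of the
# public key it expects, and compares it against the key presented
# at connection time. Key bytes here are fake placeholders.

PINNED_FINGERPRINT = hashlib.sha256(b"FB2-public-key-bytes").hexdigest()

def connection_allowed(presented_key_bytes):
    fingerprint = hashlib.sha256(presented_key_bytes).hexdigest()
    return fingerprint == PINNED_FINGERPRINT

print(connection_allowed(b"FB2-public-key-bytes"))   # True: pinned key
print(connection_allowed(b"some-other-key-bytes"))   # False: rejected
```

This is also why the hard-coding problem below bites: if the server rotates to a new key, the stored fingerprint no longer matches and every connection fails until the app ships an update.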

Some Problems with Pinning

As a web developer, you cannot change which CA certificates a user's browser will accept. As an app developer, you can, though you may be leaving certificate pinning to third-party libraries and services.

  • If you're using a third party library to handle certain networking connections however, there is a possibility that the library might be opening connections that you (the developer) know very little about.
    • An example of this? Ad networks. They talk to many servers, sometimes without even the developer's knowledge.

Certificates can instead be hard-coded, explicitly stating which CA certificates your app deems acceptable. This can easily cut connections from 50 servers down to the 1 you actually need, but certificates sometimes change. When that happens before the app is updated, subsequent network connections are refused, and the app breaks.

Organizations also aren't disciplined in how they use their certificates, and even when they are, there is no standard between organizations. Facebook could be using upwards of 1,000 certificates; so could Google. Or they could each be using one.