Distributed OS: Winter 2011
Readings
January 13, 2011: CCR (two papers)
January 18, 2011: OceanStore and Pond
Internet Governance
Problems to Solve
- Attack computers with almost no consequences
- DDoS
- botnets
- capture and analyze private traffic
- distribute malware
- tampering with traffic
- Unauthorized access to data and resources
- Impersonate computers, individuals, applications
- Fraud, theft
- regulate behavior
Design Principles
- subjects of governance: programs and computers
- bind programs and computers to humans & human organizations, but recognize binding is imperfect
- recognize that "bad" behavior is always possible. "good" behavior is enforced through incentives and sanctions.
- rules will change. Even rules for rule changes will change. Need a "living document" governing how rules are chosen and enforced.
Scenarios
1: Stopping DDoS
Group members: Seyyed, Andrew Schoenrock, Thomas McMahon, Lester Mundt, AbdelRahman
- Have the machine routing the packets (e.g., the sender's ISP) detect suspicious packets. If packets are signed, the suspicious ones can be blocked and the sender put on a blacklist.
- Stopping DDoS against files, services, programs, etc
- Have file replication built into the system (similar to OceanStore) so that files are always available from different servers
- If files are not replicated, we could have a tiered messaging system (with OS messages at the top level) so that servers can prioritize incoming traffic. If a given server is overloaded, it could send a distress signal to its neighbours and distribute what it has to them. The system should have a built-in mechanism to re-balance the overall load after something like this happens. A DDoS attack would then make the service more available, not less.
- I like this idea of having service failover
- Expanding on the idea of file replication and sending distress signals to neighbours, I could envision a group of servers that learn to help each other out, lending processing and storage when they are underutilized. They would form a sort of collective, club, or gang. Members who didn't contribute (i.e., were always fully utilized) would eventually be identified and banned. It is these other computers that a targeted server would rely on for help in this situation. However cool this is, it isn't really a solution, because the attackers could use the same strategy to recruit additional help in their attack.
- Stopping DDoS against specific machines
- I don't think that this should be specifically addressed. I think measures introduced to guard against this will ultimately negatively impact the overall system in terms of performance.
- I don't like the idea of sacrificing the one for the many though.
- Stopping DDoS
- Many DDoS attacks exploit anonymity: these services serve anyone who requests them, and the attack simply generates enough traffic that the computer behind the service can no longer cope. If we remove anonymity and only serve 'known' parties, the spurious requests would be ignored. So we need to 'know' who our friends are.
- This of course requires a form of unspoofable authentication unlike IP.
- Stopping DDoS
- I'm not sure what 'stopping' means here; I don't think we can stop DDoS given the way things are currently run, we can only block it. From what I know, most software that counters DDoS does so by blocking, or even by complete shutdown, as with McColo.
- Stopping DDoS
- One method is the same approach used against plain DoS: rate-limit each source and reject subsequent requests that exceed a threshold rate (see the rate-limiter sketch after this list).
- How we could stop DDoS is to have each connection to the internet assigned to a particular identity. This identity would be used to verify who is attempting connections. The reason DDoS works is that, currently, IP addresses can be spoofed: the only way to verify an identity is to request a response, and by then the damage is done. With a verified identity, connection attempts can be checked while they are being routed, so that a request may never even reach the destination host.
Basically, we need an encryption/signature system using keys so that, as packets are routed, the identity of each packet's sender can be verified. Ideally the verification would be cheap, so as to avoid noticeable latency. Because the identity is verified, spoofed packets would be dropped during routing; and if verified identities are still mounting a DDoS attack, the attack can be traced back to the attackers. (A per-hop verification sketch follows this list.)
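A minimal sketch (in Python) of the signed-packet / verified-identity idea above. It assumes the verifying router can look up a key for the claimed sender in a hypothetical SENDER_KEYS registry, and uses a shared-secret HMAC for brevity; a real deployment would use asymmetric signatures and a key-distribution infrastructure. All names and values here are illustrative.

    # Sketch: a router checks a per-packet authentication tag before forwarding.
    import hmac
    import hashlib

    SENDER_KEYS = {"host-a": b"secret-key-for-host-a"}   # hypothetical key registry
    BLACKLIST = set()

    def verify_and_route(sender_id, payload, tag):
        """Return True if the packet should be forwarded, False if dropped."""
        if sender_id in BLACKLIST:
            return False
        key = SENDER_KEYS.get(sender_id)
        if key is None:
            return False                        # unknown sender: drop
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            BLACKLIST.add(sender_id)            # spoofed or tampered packet: blacklist the sender
            return False
        return True

Dropping spoofed traffic at the routing step, rather than at the victim, is what keeps the attack from consuming the destination's resources.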
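A sketch of the rate-limiting idea from the list above: a token bucket per source, so requests beyond the allowed rate from any single source are rejected. RATE and BURST are arbitrary illustrative parameters, not recommendations.

    # Sketch: per-source token-bucket rate limiting.
    import time
    from collections import defaultdict

    RATE = 10.0     # requests (tokens) replenished per second, per source
    BURST = 20.0    # maximum bucket size

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow_request(source):
        b = buckets[source]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True
        return False    # over the rate: reject (or escalate to a blacklist)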
2: Stopping phishing
Group members: Waheed Ahmed, Nicolas Lessard, Raghad Al-Awwad
- A way of automatically checking the signature of a message to make sure it really is from a trusted source (see the verification sketch after this list).
- e.g.: "Nation of Banks, did your member TD send me a message to reset my password?"
- There should be filters that check where the message is coming from. If the message is coming from an unknown source, it should be blocked.
- Don't use the links in an email to get to any web page if you suspect the message might not be authentic.
- Avoid filling out forms in email messages that ask for personal financial information. Phishers can make exact copies of the forms you would find on a financial institution's site.
- Make it so a machine needs to be authorized to use your information: a machine that you don't own can't use your information to do anything, regardless of whether it has it or not.
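A sketch of the automatic signature check suggested above, assuming the third-party pyca/cryptography package and a hypothetical REGISTRY of member institutions' Ed25519 public keys published by the federation (the "Nation of Banks"). This is one plausible shape for the check, not a prescribed design.

    # Sketch: verify that a message really was signed by the institution it claims to be from.
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    REGISTRY = {}   # hypothetical: claimed sender name -> Ed25519PublicKey published by the federation

    def message_is_authentic(claimed_sender, body, signature):
        pub = REGISTRY.get(claimed_sender)
        if pub is None:
            return False            # not a known member: treat as suspicious
        try:
            pub.verify(signature, body)
            return True
        except InvalidSignature:
            return False

A mail filter could run this check before the message ever reaches the user, so links and forms in unverifiable messages never get the chance to be clicked.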
3: Limiting the spread of malware
Group members: keith, Andrew Luczak, David Barrera, Trevor Gelowsky, Scott Lyons
- (KM) Heterogeneous systems - it is much easier to write code to attack a single type of system
- (KM) Individualized security policies
- (AL) A baseline security level would help prevent malware spreading to/from a system with "individual non-security"
- (KM) Identify all programs through digital signatures
- (KM) Peer rating system for programs; customize security policies based on peer ratings (see the sketch after this list)
- (SL) Need some way to keep rating system from being "gamed"
- (AL) Maybe a program gets flagged if it experiences a rapid approval increase?
- (AL) Need to protect against benign programs with good ratings being updated into malware
- (KM) System level forensics on program execution and resource/file modification
- (KM) Customizable user and program blacklists
- (SL) Sandboxing with breach management - know what files have been modified by a process
- (SL) Trending - what does the application spend most of its time doing?
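A sketch combining the digital-signature identification and peer-rating ideas above: identify a program by the SHA-256 digest of its binary, and flag it for review if its approval count rises suspiciously fast (one crude guard against the rating system being "gamed", per the comments above). The RATINGS store and the thresholds are made up for illustration.

    # Sketch: identify programs by content hash and flag suspicious rating spikes.
    import hashlib

    RATINGS = {}          # digest -> list of (timestamp, approval_count) samples
    FLAG_WINDOW = 3600    # seconds
    FLAG_DELTA = 1000     # approvals gained within the window that triggers review

    def program_digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def needs_review(digest):
        samples = RATINGS.get(digest, [])
        if len(samples) < 2:
            return False
        t0, a0 = samples[0]
        t1, a1 = samples[-1]
        return (t1 - t0) <= FLAG_WINDOW and (a1 - a0) >= FLAG_DELTA

Hashing the binary also helps with the "benign program updated into malware" case: an update changes the digest, so the new version starts with no reputation.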
4: Bandwidth hogs
Group members: Mike Preston, Fahim Rahman, Michael Du Plessis, Matthew Chou, Ahmad Yafawi
- limit bandwidth for each user (see the per-user limiter sketch at the end of this section)
- if user has significant bandwidth demands for a certain period of time
- add them to a watch list
- monitor their behaviour
- divert communication to other hosts that can satisfy requests.
- if there are no other hosts that can satisfy the request, then distribute data to other idle and capable hosts. Load is now reduced on the one link.
- QoS
- bandwidth management/scheduling (similar to OS scheduling)
- utilizing a round robin schedule to allow for periodic increases in bandwidth per user
- priority system that allows for more critical operations being done by a user to take precedence over others
- have the bandwidth separated evenly across all users and allow for users to donate their bandwidth amount for others to use, but can revoke it at any time
- Tiered Bandwidth Distribution
- The main idea is that you get more bandwidth to your machine the more you give back to the community. It's similar to some trackers and darknet programs that won't increase your download speed unless you contribute X amount of bytes back to your peers (see the tier-assignment sketch at the end of this section).
- Tier 1: basic privileges, i.e. all machines have minimal bandwidth.
- Tier n: we define some requirements to be met, then increase bandwidth accordingly.
- Drop a tier if a machine doesn't maintain the specified requirements of that tier.
- Advantage: monitoring bandwidth on the network is cheap, while implementing what is stated above is not.
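A sketch of the per-user limit and watch-list ideas above: track each user's bytes over a sliding window, and put users who stay over the cap for several consecutive observations on a watch list for closer monitoring or diversion. WINDOW, CAP_BYTES, and STRIKES_TO_WATCH are illustrative values only.

    # Sketch: per-user bandwidth accounting with a watch list.
    import time
    from collections import defaultdict, deque

    WINDOW = 60.0            # seconds of history to keep per user
    CAP_BYTES = 50_000_000   # allowed bytes per window
    STRIKES_TO_WATCH = 5     # consecutive over-cap observations before watch-listing

    usage = defaultdict(deque)     # user -> deque of (timestamp, nbytes)
    strikes = defaultdict(int)
    watch_list = set()

    def record_transfer(user, nbytes):
        now = time.monotonic()
        q = usage[user]
        q.append((now, nbytes))
        while q and q[0][0] < now - WINDOW:
            q.popleft()
        if sum(n for _, n in q) > CAP_BYTES:
            strikes[user] += 1
            if strikes[user] >= STRIKES_TO_WATCH:
                watch_list.add(user)   # monitor, throttle, or divert this user
        else:
            strikes[user] = 0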
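A sketch of the tiered-bandwidth idea: a machine's allowance grows with its contribution ratio (bytes served to peers versus bytes consumed), and falling below a tier's requirement drops it back down. Tier boundaries and allowances are invented numbers for illustration.

    # Sketch: map contribution ratio to a bandwidth tier.
    TIERS = [           # (minimum contribution ratio, allowance in kbit/s)
        (0.0, 256),     # Tier 1: basic privileges, minimal bandwidth
        (0.5, 1024),
        (1.0, 4096),
        (2.0, 16384),
    ]

    def allowance(bytes_served, bytes_consumed):
        ratio = bytes_served / max(bytes_consumed, 1)
        allowed = TIERS[0][1]
        for min_ratio, kbps in TIERS:
            if ratio >= min_ratio:
                allowed = kbps      # highest tier whose requirement is met
        return allowed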