DistOS 2018F 2018-10-29


Readings

Notes

Lecture Notes:

Peer-to-peer file sharing. Napster: classic Silicon Valley, a business model that makes no sense. Napster's pitch was to make all music available without actually hosting the music itself; instead, use other people's machines. Napster maintained a central directory, while the files themselves were stored on individual computers. After Napster was shut down, people still wanted to exchange music files, so the next step was to avoid having a centralized database of all the songs. DHTs are the technology for that. What is a hash table? You give it a string and it gives you something back; here, the answer to "where can I download this?" If you implement a distributed hash table, no one system controls the hash table, so to take it down you would have to shut down a large fraction of the nodes.
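A minimal sketch of that idea (the node count and hashing scheme here are assumptions for illustration, not Napster's or Tapestry's actual design): hash the key to decide which participating machine is responsible for answering "where can I download this?", so no single machine holds the whole directory.

```python
import hashlib

# Toy DHT sketch: each key is owned by the node its hash points at (illustrative only).
NODES = [f"node{i}" for i in range(8)]   # assumed 8 participating machines

def responsible_node(key):
    """Map a key (e.g. a song title) to the node that stores its download locations."""
    h = int.from_bytes(hashlib.sha1(key.encode()).digest(), "big")
    return NODES[h % len(NODES)]

# Every participant computes the same answer independently,
# so there is no central directory to shut down.
print(responsible_node("some-song.mp3"))
```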

ISP throttling came later, and the record companies poisoned the networks: you download something and ask, what the heck is that? The idea behind Tapestry is a DHT as a service. What is an overlay network? A network that sits on top of another network. We already have the Internet; isn't the Internet good enough? The point is that you want a different topology than the one you have. The underlying network is organized along geographic and organizational boundaries; an overlay redoes the topology. You send a message to your neighbours, where "neighbour" is defined by the overlay network rather than by physical links. Tor, for example, defines its own topology. Facebook, and social networks generally, are overlays too: you make a post and it is routed to your friends and neighbours, and the topology is the social graph, who connects to whom.

Tapestry is an overlay network, but does it ignore geography? No, it makes use of it: peers prefer nodes that are close by. The network consists of the systems running Tapestry, but it also distinguishes nearby nodes from more distant ones; it is network-topology aware. Why? Because at large scale there will be constant node additions and deletions. Peer-to-peer became synonymous with file sharing, but that was not the authors' goal; they were building distributed applications of some kind. Pond wanted a layer that provided messaging among its own nodes, and that layer is Tapestry. A static table of hosts will not work; Tapestry provides the layer that lets nodes find each other and send messages efficiently. Let's build an infrastructure for sending messages. Do we build apps on top of things like this today?
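A rough sketch of the digit-by-digit routing idea used in Plaxton-style overlays like Tapestry (greatly simplified; the node IDs, routing tables, and the choice of matching leading digits are illustrative, not the paper's exact scheme): each node keeps a small routing table, and every hop forwards a message to a neighbour whose ID matches the destination in one more digit, instead of every node knowing every host.

```python
# Simplified Plaxton-style routing sketch (illustrative, not Tapestry's exact algorithm).
# Node IDs are fixed-length hex strings; each hop fixes at least one more leading digit.

def shared_prefix(a, b):
    n = 0
    while n < len(a) and a[n] == b[n]:
        n += 1
    return n

def next_hop(current_id, dest_id, neighbours):
    """Pick a neighbour that matches the destination in more leading digits than we do.
    A real overlay would break ties by preferring the closest node in the underlying network."""
    best = max(neighbours, key=lambda n: shared_prefix(n, dest_id))
    return best if shared_prefix(best, dest_id) > shared_prefix(current_id, dest_id) else current_id

# Hypothetical per-node routing tables: node ID -> neighbours it knows about.
tables = {
    "8F21": ["4228", "9001"],
    "4228": ["42A2", "4377"],
    "42A2": ["42AD", "4228"],
    "42AD": [],
}

def route(src, dest):
    hop, path = src, [src]
    while hop != dest:
        nxt = next_hop(hop, dest, tables[hop])
        if nxt == hop:                      # no neighbour gets us closer: routing failure
            raise RuntimeError("no route")
        hop = nxt
        path.append(hop)
    return path

print(route("8F21", "42AD"))   # ['8F21', '4228', '42A2', '42AD']
```

Each hop narrows the search by one digit, which is why the routing state per node stays small even as the overlay grows.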

Keep an eye out for this: DHTs will keep appearing, but there is a fundamental issue with them, the LimeWire problem. They do very badly with untrusted nodes. A malicious node can mess everyone else up by feeding bad information into the network; you can stop one or two such nodes, but attackers have significant resources (like a botnet) to attack your system.

Single Tapestry node, Figure 6: it is the "OS" of the system, nothing fancy; it is like the regular Internet except that state is maintained using a distributed hash table. Aside on botnets: if the bots hard-code the IP address of their controller, that is how the botnet gets taken down, so botnets have used IRC, Google searches, and social media instead, for example comments on celebrity Instagram feeds, much as spies used to send messages through newspaper ads or numbers stations (shortwave radio). Tapestry has trust issues, because with trusted infrastructure there are better ways of doing this.

Ceph: Ceph is crazy, very complicated. Ceph is out there, people are building this, but really? What is CRUSH? Suppose you have data and need to know where to go to get the parts of a file. With an explicit allocation map you would have to send a lot of data and update metadata every time you changed the file, so they decided not to do that. These are not blocks, they are objects; what do they mean by that? Basically a variable-length chunk of storage. A file is some number of objects in some order. When you open a file, how do you know which objects it lives in; does the metadata server hand you the list of objects? No, an algorithm generates the names of the objects, and CRUSH maps those names onto storage devices.
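A minimal sketch of that idea, with made-up names and parameters rather than Ceph's real formats: the client computes a file's object names from its inode number and size, and a deterministic hash-based placement function (standing in for CRUSH) maps each object name to a set of OSDs, so no per-object placement table is ever stored or updated.

```python
import hashlib

STRIPE_SIZE = 4 * 1024 * 1024   # assumed object size, illustrative
NUM_OSDS = 16                   # assumed cluster size, illustrative
REPLICAS = 3

def object_names(ino, file_size, stripe_size=STRIPE_SIZE):
    """Generate a file's object names algorithmically (inode number plus stripe index),
    so no table of object names needs to be kept anywhere."""
    count = max(1, (file_size + stripe_size - 1) // stripe_size)
    return [f"{ino:x}.{i:08x}" for i in range(count)]

def place(object_name, num_osds=NUM_OSDS, replicas=REPLICAS):
    """CRUSH stand-in: hash the object name to a deterministic, replicated set of OSDs.
    (Real CRUSH also walks a hierarchy of failure domains; this only shows the flavour.)"""
    h = int.from_bytes(hashlib.sha1(object_name.encode()).digest(), "big")
    return [(h + i) % num_osds for i in range(replicas)]

# Any client can recompute placement on its own, without asking a metadata server:
for name in object_names(ino=0x1234, file_size=10_000_000):
    print(name, "->", place(name))
```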

Metadata: the metadata servers keep the file hierarchy in memory, and there are hot spots for metadata access; a few servers would be maxed out while part of the system sits idle. So the metadata tree is dynamically re-partitioned to respond to hot spots.
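A toy illustration of the repartitioning idea, using entirely hypothetical data structures (this is not Ceph's MDS code): count requests per directory subtree and, when one metadata server is much busier than another, hand its hottest subtree to the idle one.

```python
# Hypothetical sketch of dynamic subtree partitioning across metadata servers (MDS).
class MDS:
    def __init__(self, name):
        self.name = name
        self.subtrees = {}                 # directory subtree -> recent request count

    def load(self):
        return sum(self.subtrees.values())

def rebalance(servers, ratio=2.0):
    """If the busiest MDS carries more than `ratio` times the idlest one's load,
    migrate its hottest subtree to the idlest MDS."""
    busiest = max(servers, key=MDS.load)
    idlest = min(servers, key=MDS.load)
    if busiest.load() > ratio * max(idlest.load(), 1):
        hot = max(busiest.subtrees, key=busiest.subtrees.get)
        idlest.subtrees[hot] = busiest.subtrees.pop(hot)
        print(f"migrated {hot}: {busiest.name} -> {idlest.name}")

a, b = MDS("mds0"), MDS("mds1")
a.subtrees = {"/home": 900, "/var": 50}    # hot spot on mds0
b.subtrees = {"/tmp": 10}                  # mds1 nearly idle
rebalance([a, b])                          # migrated /home: mds0 -> mds1
```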

You can talk to OSDs in parallel, so when asking for a file, its objects are distributed among lots of nodes and you get high performance: many, many computers talking to many, many computers.
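A sketch of what that looks like from the client side, with a placeholder fetch function rather than Ceph's real client API: read every object of a file concurrently from its OSDs and stitch the results back together in order.

```python
# Hypothetical parallel-read sketch: fetch a file's objects from many OSDs at once.
from concurrent.futures import ThreadPoolExecutor

def fetch_object(name):
    """Placeholder for 'contact the OSD chosen by CRUSH and read this object'."""
    return b"<data for " + name.encode() + b">"

def read_file(object_list):
    with ThreadPoolExecutor(max_workers=len(object_list)) as pool:
        chunks = pool.map(fetch_object, object_list)   # requests to many OSDs in flight together
    return b"".join(chunks)                            # map() preserves object order

print(read_file(["1234.00000000", "1234.00000001", "1234.00000002"]))
```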

Ceph is POSIX compatible, which is impressive. POSIX compatibility is painful on writes: you need to coordinate (effectively centralizing writes), but that is slow. You can tell Ceph to be lazy about this. Take-home lesson from Ceph: with all nodes trusted, POSIX in a distributed OS can be done, but the admin overhead is enormous.

Tapestry's take-home lesson comes down to trust: there is no centralized node, so you have to trust the peers.

Compare GFS with Ceph, and Chubby (politically correct FAT storage :P) with Tapestry.


More notes

  • Ceph
    • Very fast performance
    • Sharp drop-off in performance with more than 24 OSDs, due to switch saturation.
      • Ceph sending too many messages, leading to lots of packet loss on a switch.
    • Ceph is very complicated
    • CRUSH: the placement algorithm; it computes where objects are stored rather than looking it up in a table.
    • Realized that there are hotspots for metadata access
      • Typically you statically partition the file tree; Ceph dynamically repartitions the metadata tree.
    • Can ask for objects in parallel, download a file in parallel
    • Basically POSIX compatible
      • Easy to be compatible on reads, difficult on writes.
    • Take-home lesson: you CAN do POSIX compatibility in a really big distributed file system, but the admin overhead is ridiculous.
    • Compare Ceph to the Google File System; GFS is also very scalable but not as complicated.

 

  • Tapestry
    • Limited performance
    • The first P2P system was Napster
      • Classic Silicon Valley: a business model that makes no sense
      • Use other people's machines to make the music available
      • Download music from other peers' computers; Napster maintained the central directory.
      • Files did not go through Napster; they just pointed to them.
      • Sued out of existence
        • SOLUTION: Don't have a centralized location service.
    • With a DHT, you give it a string and it tells you where you can download it.
    • Record companies poisoned torrents?? Not true imo
    • Tapestry is DHT as a service
    • Point of overlay network is you want a different topology over what you have
      • You don't want a geography-based network like the underlying Internet
    • Overlay network completely redoes topology, neighbors are defined by the overlay network.
    • Facebook, or any social media platform, is an overlay network; your posts are routed to all your friends.
    • Tapestry does not ignore underlying physical topology, still takes locality into consideration.
    • Why? Because at large scale there is constant node addition and removal.
    • Tapestry, in a sense, is exactly what this course is about
      • Let's build this infrastructure for distributed applications.
      • We do not, for the most part, build apps on top of infrastructure like this today.
    • KEEP AN EYE OUT FOR: One fundamental issue with DHTs
      • The LimeWire problem: DHTs do very badly with untrusted nodes.
    • Have to bootstrap it and find everyone.
    • How do botnets work? They use a legitimate service: Twitter, Facebook, comments on celebrity Twitter feeds.
      • Spies used to send messages through ads
    • Probably a trust issue as to why it isn't used
    • Take home lesson: Here's this thing for distributed applications, but trust issues.
    • Contrast Tapestry with Chubby