DistOS 2018F 2018-11-19

From Soma-notes

Readings

  • Anderson, "BOINC: A System for Public-Resource Computing and Storage" (Grid Computing 2004)

Notes

Lecture Nov 19

- Integrate; don't just explain the systems. Talk about the systems and the concepts together.
- Use essay form: intro, middle, conclusion.
- Use about 3 examples in depth; you could mention 8 by making references and name-dropping.

Question 1: The UNIX process model. What is a process model? It is the programming model: what a programmer sees when they run a program, what the virtual computer looks like, and how it functions. It is not simple; a lot of hardware goes into implementing it, and the virtual CPU it presents is different from the JVM or other higher-level language machines. UNIX has a specific way of abstracting things, and the UNIX process model does not scale: you cannot spread a process across many computers and still run it like a process without fiddling. NFS shows the problem: POSIX is stateful around open and close, but NFS is stateless, hence the .nfs files it creates. AFS changed the semantics of close. LOCUS added a run call so that, instead of fork, you could just start the program on another node. Distributing UNIX makes it stop looking like UNIX, so you distribute at other levels of abstraction rather than distributing a process. You cannot do UNIX across machines; you can do almost-UNIX, but not quite.
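As a concrete anchor for "what a programmer sees": a minimal sketch of the fork/exec/wait pattern at the heart of the UNIX process model (my example, not from the lecture). LOCUS's run call essentially collapsed this pair into "start this program on another node".

    import os

    # The UNIX process model in miniature: fork() clones this process,
    # exec() replaces the child's image with a new program, and wait()
    # collects the exit status. This is the "virtual computer" a UNIX
    # programmer sees -- the part that does not stretch across machines.

    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with a new program.
        os.execvp("echo", ["echo", "hello from the child"])
    else:
        # Parent: block until the child exits.
        _, status = os.waitpid(pid, 0)
        print("child exited with status", os.WEXITSTATUS(status))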

Question 2: Caching. Caching improves performance, but it is not free: if you cache, you must keep copies consistent, and synchronizing them means either sacrificing performance or adding complexity. Talk about consistency and durability alongside caching, and spend more time on the ideas than on describing the systems; lead with the ideas.

Question 3: Trusted vs. untrusted. Farsite is untrusted compared to NFS: it trusts the kernels but not user space. You always have to trust something and distrust something else, the way we distrust user input but trust the computer to run the code properly. Trust lowers the cost of sharing: everyone follows the protocol, and nodes may fail but are not malicious. Once nodes can be malicious, you cannot trust the computations they perform. Most of the systems we covered assumed trust; OceanStore and Farsite were the most untrusted. With AFS, people often misread the trust boundaries: you trust the servers but not the workstations, which can access only their users' files, and only for a limited period of time.

Question 4: Concurrent access. What happens when a file is accessed by more than one reader and writer? Do you serialize access, or do something else?
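To make the caching trade-off concrete, here is a toy write-through cache with invalidation; this is my own construction, not any system from the course. Every write must invalidate every other node's cached copy before it completes, which is exactly the synchronization cost (or complexity) described above.

    import threading

    class Node:
        def __init__(self, store):
            self.store = store   # shared backing store (a plain dict here)
            self.peers = []      # other nodes holding caches
            self.cache = {}
            self.lock = threading.Lock()

        def read(self, key):
            with self.lock:
                if key not in self.cache:       # miss: fetch from the store
                    self.cache[key] = self.store.get(key)
                return self.cache[key]

        def write(self, key, value):
            self.store[key] = value             # write through to the store
            for peer in self.peers:             # the consistency cost:
                peer.invalidate(key)            # every cached copy must go
            with self.lock:
                self.cache[key] = value

        def invalidate(self, key):
            with self.lock:
                self.cache.pop(key, None)

    store = {"x": 0}
    a, b = Node(store), Node(store)
    a.peers, b.peers = [b], [a]    # wire the two nodes together
    print(a.read("x"))             # 0, now cached at a
    b.write("x", 1)                # invalidates a's copy before finishing
    print(a.read("x"))             # 1, not a stale 0

Note how each write touches every peer: add more nodes and writes slow down, which is the "sacrifice performance or add complexity" trade-off in miniature.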

Augmentation of human intelligence, the "bicycle for the mind," demonstrated in the Mother of All Demos through really ordinary things: lists and collaboration. Did that vision inform what we were reading about? UNIX? No! UNIX is powerful, but it is not about HCI; it has lots of bad human factors. You can set up user groups, but Multics and UNIX were time-sharing systems: "resource sharing in computer networks" meant letting programmers run programs, not living our lives on computers and collaborating. It was all about resource sharing, not Engelbart's vision. What were Facebook's big fancy systems for? Photo sharing. How do we index the world, look up email, manage shopping carts? The web started out for collaboration: hypertext, sharing information, comments on web pages; the two-way linking that later became one-way is the Mother of All Demos vision. Computer scientists kept building systems that were pathetic in scale until they implemented his vision, and then they built systems that really scaled. They weren't close at first because their systems let people work in their own bubbles; working together and collaborating wasn't there. The architectures the big companies built could have been implemented earlier, in the 80s, but nobody was trying to solve that problem, so nobody needed systems that scaled that way. It is not Engelbart's vision; it is the other vision, from the first half of the course.

Question 5: accessing data from multiple readers and writers.


BOINC:

A system for sharing resources to do computations: a central server gives out work units, and the system is highly concurrent. What is the process model? Small work units handed out redundantly. Which system discussed earlier is closest in model? MapReduce: take a bunch of workstations, give each a unit of work, and when a unit is done the result comes back and gets combined; that reduce operation was often the bottleneck. What is the key difference between MapReduce and BOINC? BOINC's worker nodes are all untrusted, which is why work units are given out redundantly. MapReduce does not do that: a node might fail, but nobody assumes someone has engineered their node to return fake results. That is exactly what BOINC is designed for. What is the key feature that unifies MapReduce and BOINC? A problem that can be divided up and that requires no communication between the pieces while the calculations run, i.e., embarrassingly parallel. MapReduce can be distributed to some degree: it is a functional operation producing partial answers that can be combined together; the reduce is assumed to be centralized, but the power of the system is the distribution.
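A hedged sketch of the redundancy idea (the names and numbers are mine, not BOINC's actual API): each work unit goes to several untrusted workers, and a result is accepted only when a quorum of replies agree.

    import random
    from collections import Counter

    REPLICATION = 3   # copies of each work unit handed out
    QUORUM = 2        # matching results needed to accept one

    def untrusted_worker(unit):
        """Compute the unit; a dishonest worker may return garbage."""
        if random.random() < 0.2:        # simulate a 20% cheating rate
            return random.randint(0, 99)
        return sum(unit)                 # the honest computation

    def validate(unit):
        tallies = Counter(untrusted_worker(unit) for _ in range(REPLICATION))
        value, votes = tallies.most_common(1)[0]
        return value if votes >= QUORUM else None   # None = reissue the unit

    work_units = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
    print([validate(u) for u in work_units])   # no-quorum units come back None

MapReduce can skip the voting because its workers may fail but are assumed honest; the redundancy here is purely a defence against untrusted nodes.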

What have we solved and what have we not? We have not solved the state-sharing problem. It is unsolvable in the sense that sharing state is inherently expensive. You can pretend it is trivial on a single box, but even there people work on zero-copy and cache-aware designs: good performance is expensive. Over a network link, sharing state is painful, so avoid it. The architectures that succeed put off and minimize state sharing; they turn things that would require coordination into things that can be done separately. For example, a transaction log: record timestamps, then later see how everything fits together. Stay fast until you merge results, and share just enough state that the parts don't step on each other's toes.
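A minimal sketch of the "log now, merge later" idea: each node appends timestamped records locally with no coordination, and the logs are merged into a single ordering only at the end. (The event names are invented for illustration.)

    import heapq

    node_a = [(1, "a: open"), (4, "a: write"), (9, "a: close")]
    node_b = [(2, "b: open"), (3, "b: write"), (7, "b: close")]
    node_c = [(5, "c: read"), (6, "c: read")]

    # heapq.merge assumes each input is already sorted -- which local,
    # append-only logs give you for free. All coordination is deferred
    # to this one merge step.
    for ts, event in heapq.merge(node_a, node_b, node_c):
        print(ts, event)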

BOINC: no shared state and no trusted nodes, yet the nodes can still do work for you. It is not good for something like a weather simulation.

Is a cluster of small computers a supercomputer? It is massively parallel: lots of processing power, lots of disk and memory and CPUs. But is a pile of AWS instances a supercomputer? A supercomputer is a system with an architecture that makes exchanging state less expensive: high bandwidth and low latency, sharing state efficiently. Some computations, like simulations, need that. The US Department of Energy uses supercomputers for simulations: weather, how things happen, lots of coordinated state. Facebook and Amazon deal with the real world and don't need to share state like that; by minimizing the sharing of state they get scalability. That covers only a subclass of problems: the web maps very well onto it, but other things don't. In the 80s and 90s some parallel algorithms got used and others didn't. A hot area that does require coordination of state is AI, i.e., matrix multiplication: expensive, high-end computer science. The advances now are in parallel processing while minimizing coordination of state, in functional programming frameworks: make it parallel by minimizing the sharing of state and avoiding mutable state, the enemy of scaling. That is what people are implementing. Should you learn functional programming? Yes; those ideas are worth the time. The problem we keep hitting is sharing state: we cannot do it well, so let's not. You can replicate state, but coordinating mutable state will kill performance. Everything looks kind of BOINC-like in the end.
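A small illustration of the functional point: express the computation as pure map and reduce steps over immutable inputs, and the parallelism needs no coordination of mutable state. (A generic sketch, not any particular framework.)

    from functools import reduce
    from multiprocessing import Pool

    def square(x):          # a pure function: touches no shared state
        return x * x

    if __name__ == "__main__":
        data = range(1_000)
        with Pool(4) as pool:                  # 4 workers, nothing shared
            partials = pool.map(square, data)  # "map": independent pieces
        total = reduce(lambda a, b: a + b, partials)  # "reduce": combine
        print(total)

Because square never mutates anything, the workers need no locks and no messages while computing; only the final reduce brings the results together.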

Test 2:

Questions to discuss:

- Sharing state: strategies for sharing state, whether sharing files or sharing between processes; otherwise we are just talking about networking. Earlier in the semester, sharing state among programs meant process migration; in clouds the memory stays put and the CPU state moves around, which is how functions work, and "cloud" just means it happens across many systems rather than one. Otherwise it is much like process migration. Distributed shared memory is virtual memory across a cluster: a multi-threaded process spread across many CPUs, with pages paged out of one machine and into another, perhaps copied. It is all the same machinery, applied at the level of a process's memory. But the way anything is fast is memory locality, and distributed shared memory hides locality from your program: when you access things that are not local you are screwed, so the program must be aware of locality anyway, and at that point why bother? You might as well manage state explicitly. The process concept at least lets you access all of your memory fast, relatively speaking. (See the back-of-envelope sketch below.)
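Back-of-envelope arithmetic for why hiding locality hurts (the latency figures are assumed round numbers, not measurements): even a small fraction of remote page touches dominates the average access time.

    LOCAL_NS = 100          # a local memory access, ~100 ns
    REMOTE_NS = 500_000     # a remote page fetch, ~0.5 ms round trip

    def avg_access_ns(remote_fraction):
        return (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS

    for frac in (0.0, 0.001, 0.01, 0.1):
        slowdown = avg_access_ns(frac) / LOCAL_NS
        print(f"{frac:>5.1%} remote -> {slowdown:,.0f}x slower on average")

With just 1% of accesses going remote, the average access is roughly 51x slower; at 10% it is about 500x. That is why a program that ignores locality under DSM is, as the notes put it, screwed.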

Protein folding as a physics problem is a simulation, but the distributed protein projects mostly don't do that; they do similarity, comparing one sequence against another to test a hypothesis. It is physics-based only in the sense that you take a little piece of the simulation, a bounded computation that can run on one computer, a couple of time steps.

It depends on how local you can make the computation. Protein interactions are another such problem; the computer science is in how you solve the problem. Quantum computation is aimed at physics simulations.

We have seen sharing the state of a process, but mostly in the form of process migration. How do you share access to data? Data takes two basic forms: a file, or a database of some kind, whether SQL, a key-value store, or a hierarchical file system.

How do we access data in parallel? Embarrassingly parallel is the anti-social approach. The moment there are interactions, what do we do? Concurrent reads and writes: concurrent writes influence concurrent reads. One option is to serialize access: lots and lots of reads are fine, but the moment a writer comes along you need lock control and must serialize access to the data. That is not scalable, but if the amount of collaboration in your system is small, it is fine. What is the other solution for sharing access to a file? Appends. What is appending, really, and how is an append different from a write? And versioning: what is versioning? Both take shared state and parallelize it by not really sharing it. The data becomes almost immutable, modelled as a series of new versions of the same file, with the latest version being the copy you use, like functional programming. Appending is a disciplined way of mutating data so that most of it stays immutable: the only mutation is adding something new, never deleting, which is easy to coordinate. (Binding and scope are similar to versioning; appending is mutation, changing state, yet almost immutable.) You cannot do strict appending at scale, because it requires ordering, which is expensive, so systems like GFS don't bother: you may get duplicates and then have to deal with them. These are tricks to avoid sharing state; if you truly have to share state, you have to serialize.
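A sketch of versioning as disciplined mutation (my own construction): every write appends a new immutable version, so readers never see data change underneath them, and the only coordination needed is on the act of adding.

    class VersionedFile:
        def __init__(self):
            self.versions = []             # append-only list of snapshots

        def write(self, data):
            self.versions.append(data)     # the only mutation: adding new
            return len(self.versions) - 1  # version number for this write

        def read(self, version=-1):
            return self.versions[version]  # default: the latest version

    f = VersionedFile()
    v0 = f.write("draft")
    f.write("draft + edits")
    print(f.read())     # "draft + edits": the latest version
    print(f.read(v0))   # "draft": an old reader's view stays stable

GFS's relaxed record append is this idea minus the strict ordering: a record may land more than once, and readers are expected to cope with the duplicates.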

Platforms vs. Products. What is the difference?

Systems from Google: systems connected together in layers, each built on top of previous solutions. Hierarchical and interdependent. Google's philosophy is closer to Windows in how it is built: it all fits together, with pieces depending on each other. When it all works together it works really well, but you have to be part of the system to use it; it is a whole way of doing things. Insiders vs. outsiders: if outsiders were allowed in, they would find vulnerabilities and exploit the services, so there are trust boundaries. But that trust enables a certain level of efficiency. It scaled in terms of computation, not in terms of developers. Insiders get to work at higher levels of abstraction.

Amazon: teams work independently, building pieces and solutions that are made available to everyone. Teams provide services, and individual services are self-contained things anyone can use. It is not a higher level of abstraction, but you can assemble your own system from the services: learn just what you want, use the component, and adopt it.

This is why Google has been less successful in the cloud market. Google App Engine lets you run your app on Google's infrastructure, with their database and their process model: you write code to fit into their infrastructure, which scales, is reliable, and so on. But you are writing to their specific system. With Amazon it is customized to you: some pieces are independent, some dependent, but if you find the piece you want and master it, great. Mastering all of it is not possible.

Companies ship their org chart. A mandated timeline, "this is how you are going to do things": groups cannot be trusting each other, so you isolate them and build services to deal with too many requests. Amazon has been trying lots of things in an aggressive way, a Darwinian process: if a service is surviving, it survives; if it is dying, it dies. Google doesn't have that; its development is much more top-down. Amazon is bottom-up, Google is top-down.

Talk about concepts on the test and drop references as appropriate.