<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=36chambers</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=36chambers"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/36chambers"/>
	<updated>2026-04-09T11:51:09Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=19088</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=19088"/>
		<updated>2014-04-24T17:39:46Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Embarrassingly Parallell */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers to be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the ones we have covered so far, so we should be prepared to allot more time to studying them and come to class prepared. We may abandon the group discussion format; instead, everyone will ask questions about what they did not understand from the paper, which should let us discuss the technical details better.&lt;br /&gt;
The professor will not be teaching the next class; instead, our TA will discuss two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables.&lt;br /&gt;
&lt;br /&gt;
After looking at the material from today, we will also be looking at how we can get the kind of distribution that we get with public resource computing, but with greater flexibility.&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals, of which the professor found 4 to be in an acceptable state and graded them 10/10. The professor has emailed everyone with feedback on their project proposal so that we can incorporate those comments and resubmit by the coming Saturday (the extended deadline). The deadline was extended so that everyone can work out the flaws in their proposal and get the best grade (10/10).&lt;br /&gt;
Project presentations are to be held on April 1st and 3rd. People who got 10/10 should be ready to present on Tuesday, as they are ahead and better prepared for it; there should be 6 presentations on Tuesday and the rest on Thursday.&lt;br /&gt;
Undergrads will have their final exam on April 24th, which is also the date to turn in the final project report.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Public Resource Computing (March 11)==&lt;br /&gt;
&lt;br /&gt;
* Anderson et al., &amp;quot;SETI@home: An Experiment in Public-Resource Computing&amp;quot; (CACM 2002) [http://dx.doi.org/10.1145/581571.581573 (DOI)] [http://dl.acm.org.proxy.library.carleton.ca/citation.cfm?id=581573 (Proxy)]&lt;br /&gt;
* Anderson, &amp;quot;BOINC: A System for Public-Resource Computing and Storage&amp;quot; (Grid Computing 2004) [http://dx.doi.org/10.1109/GRID.2004.14 (DOI)] [http://ieeexplore.ieee.org.proxy.library.carleton.ca/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382809 (Proxy)]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Keywords ===&lt;br /&gt;
BOINC &amp;amp; SETI@Home: Lowered entry barriers, master/slave relationship, work units, [http://en.wikipedia.org/wiki/Embarrassingly_parallel embarrassingly parallel], inverted use-cases, gamification, redundant computing, consensual botnets, centralized authority, untrusted clients, replication as reliability, exponential backoff, limited server reliability engineering.&lt;br /&gt;
&lt;br /&gt;
Embarrassingly parallel: ease of parallelization, communication-to-computation ratio, discrete work units, abstractions help, MapReduce.&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
The papers assigned for reading were on SETI@home and BOINC. BOINC is the system SETI@home is built upon; other projects, such as Folding@home, run on the same system. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How is it similar in design, purpose, and technologies? How is it different?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use, way of sharing resources. This is interesting as it differs from some of the systems we have looked at that deal with the sharing of information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem. SETI@home and Folding@home are examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install the client on their machines; it processes the work units sent to it in return for credits. &lt;br /&gt;
&lt;br /&gt;
In the past, it has been institutions, such as universities, running services with other people connecting in to use those services. Public resource computing turns this use case on its head: the institution (e.g., the university) is the one using the service while other people contribute to it voluntarily. In the file systems we have covered so far, people want access to the files stored in a network system; here, a system wants access to people&#039;s machines to utilize their processing power. &lt;br /&gt;
&lt;br /&gt;
Since they are contributing voluntarily, how do you make these users care about the system? The gamification of the system causes many users to become invested in it. People do work for credits, and those with the most credits are showcased as major contributors. They can also see the amount of resources (e.g., processor cycles) they have devoted to the cause in the GUI of the installed client. When the client produces results for the work unit it was processing, it sends the results to the server.&lt;br /&gt;
&lt;br /&gt;
Important to the design of the BOINC platform is that it can be easily deployed by scientists (i.e., non-IT specialists). It was meant to lower the entry barrier for the types of scientific computing that lend themselves to being embarrassingly parallel. The platform uses a simple design with commodity software (PHP, Python, MySQL).&lt;br /&gt;
&lt;br /&gt;
For fault tolerance against problems such as malicious clients or faulty processors, redundant computing is done: work units are processed multiple times.&lt;br /&gt;
Work units are later taken off of the clients in the following two cases:&lt;br /&gt;
# The server receives the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for a certain work unit; it takes the answer that the majority gave.&lt;br /&gt;
# The server has transmitted a work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and has not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected responses. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
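The n-result majority check described above can be sketched in a few lines of Python (an illustrative toy; the function name and the strict-majority rule are assumptions, not the actual BOINC validator):

```python
from collections import Counter

def validate_work_unit(results, n):
    # Decide the canonical answer for a work unit once n redundant
    # results have arrived; returns None while we must keep waiting.
    if len(results) < n:
        return None
    answer, count = Counter(results).most_common(1)[0]
    # Require a strict majority so a few faulty or malicious clients
    # cannot determine the outcome on their own.
    return answer if count > n // 2 else None
```

With n = 3, two matching results out of three suffice; if no majority emerges, the work unit would be reissued (up to m times, as described above).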
&lt;br /&gt;
In the case of SETI@Home, the number of available work units is fixed. The system scales by increasing the amount of redundant computing: if more clients join the system, they just end up getting the same work units.&lt;br /&gt;
&lt;br /&gt;
=== Comparison to Botnets ===&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource--you are voluntarily giving up your extra compute cycles for projects (this is a little like donating blood--public resource computing is a vampire). Looking at public resource computing like this, we can contrast it with a botnet. What is the difference? Both systems utilize client machines to perform or aid in some task. &lt;br /&gt;
&lt;br /&gt;
The answer: consent.&lt;br /&gt;
&lt;br /&gt;
You are consensually contributing to a project rather than being (unknowingly) forced to. Other differences are the ends and resources you want, as well as reliability. With a botnet, you can trust that a higher proportion of your users are following your commands exactly (as they have no idea they are performing them). In public resource computing, how can you guarantee that clients are doing what you want? You can&#039;t. You can only verify the results.&lt;br /&gt;
  &lt;br /&gt;
=== General Comparisons ===&lt;br /&gt;
A basic comparison with the other file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
# Inverted use cases. In the file systems we have covered so far, clients want access to the files stored in a network system; here, a system wants access to clients&#039; machines to utilize their processing power. The flow is inverted.&lt;br /&gt;
# Other file systems were about many clients sharing data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
# It is nothing like OceanStore, where there is no centralized authority. In BOINC there is a master/slave relationship between the centralized server and the clients installed across users&#039; machines; in that sense it is more like GFS, which also had a centralized metadata server.&lt;br /&gt;
# Public resource systems are like botnets, but people install these clients with consent, and there is no need for communication between the clients (it is not a peer-to-peer network). The clients could be made to communicate peer to peer, but that would risk security, as clients in the network are not trusted.&lt;br /&gt;
# Skype was modelled much like a public resource computing network (before Microsoft took over). The whole model of Skype was that the infrastructure just ran on the computers of those who had downloaded the client (like a consensual botnet). Once a person downloaded the client, they became part of the system. As with public resource computing, you would donate some of your resources to support the distributed infrastructure. It was also not assumed that everyone was reliable, only that some people are reliable some of the time. The network would choose supernodes to act as routers; these supernodes would be the machines with higher reliability and more processing power. After Microsoft&#039;s takeover, the supernodes were centralized and supernode election was removed from the system.&lt;br /&gt;
&lt;br /&gt;
=== Trust Model and Fault Tolerance ===&lt;br /&gt;
&lt;br /&gt;
In this central model, you have a central resource and distribute work to clients who process the work and send back results. Once they do, you can send them more work. In this model, can you trust the client to complete the computation successfully? The answer is not necessarily--there could be untrustworthy clients sending back rubbish answers and bad results.&lt;br /&gt;
&lt;br /&gt;
So, how does SETI address the question of fault tolerance? It uses replication for reliability and redundant computing: work units are assigned to multiple clients, and the results returned to the server can be analyzed for outliers in order to detect malicious users. That, however, only addresses fault tolerance from the client perspective. &lt;br /&gt;
&lt;br /&gt;
However, SETI has a centralized server, which can go down. When it does, clients use exponential backoff, waiting before trying to send their results again. Otherwise, whenever the server came back up, many clients might try to access it at once and crash it again--essentially, the server would have manufactured its own DDoS attack due to its own inadequacies. This exponential backoff approach is similar to the one adopted in TCP congestion control.&lt;br /&gt;
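The backoff-and-retry behaviour can be sketched as follows (an illustration only; send_with_backoff and its parameters are invented names, and the real BOINC client policy differs in its details). Adding random jitter to the delay is what keeps clients from reconnecting in one synchronized wave when the server comes back up:

```python
import random
import time

def send_with_backoff(send, max_retries=6, base=1.0, cap=300.0):
    # Retry `send` with exponentially growing delays plus random
    # jitter, so clients that all failed at the same moment (a server
    # crash) do not all retry at the same moment too.
    for attempt in range(max_retries):
        try:
            return send()
        except ConnectionError:
            delay = min(cap, base * (2 ** attempt))
            time.sleep(random.uniform(0, delay))  # full jitter
    raise RuntimeError("server unreachable, giving up for now")
```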
&lt;br /&gt;
It can be noted that there is almost no reliability engineering here, though. These are just standard servers running with one backup that is manually failed over to. This can give an idea of how asymmetric the relationship is. &lt;br /&gt;
&lt;br /&gt;
One reason for this becomes clear when we look at the actual service and who is running it. Reliability matters most for a service when a large number of people use it and would be upset were it to go down. In this case, it&#039;s the university using the service, and clients are helping out by providing resources; if the service goes down, it is the university&#039;s fault and it can deal with it on its own. It is interesting to compare this strategy to highly reliable systems like Ceph or OceanStore, which can recover the data when a node crashes.&lt;br /&gt;
&lt;br /&gt;
The idea of redundancy relates to Oceanstore a little, but how would Oceanstore map onto this idea of public resource computing? In place of the Oceanstore metadata cluster, there is a central server. In place of the data store, there are machines doing computation. Specifically, mapping on this model of public resource computing is the notion of having one central thing and a bunch of outlying nodes. This is very much a master/slave relationship, though it is a voluntary one. In this relationship, CPU cycles are cheap, but bandwidth is expensive, hence showing why work units are sent infrequently. The storage is in-between--sometimes data is pushed to the clients. When this is done, the resemblance of public resource computing to Oceanstore is stronger.&lt;br /&gt;
&lt;br /&gt;
=== Embarrassingly Parallel ===&lt;br /&gt;
&lt;br /&gt;
When you are doing parallel computations, you have to do a mixture of computation and communication. You&#039;re doing the computation separately, but you always have to do some communication. But how much communication do you have to do for every unit of computation? In some cases, there are many dependencies, meaning that a large amount of communication is required (e.g., weather system simulations).&lt;br /&gt;
&lt;br /&gt;
Embarrassingly parallel means that a given problem requires a minimum of communication between the pieces of work. This typically means that you have a bunch of data that you want to analyze, and it&#039;s all independent. Because of this, you can just split up and distribute the work for analysis. In an embarrassingly parallel problem, speedup is trivial thanks to the minimum of communication: the more processors you add, the faster the system will run. For problems that are not embarrassingly parallel, the system can actually slow down when more processors are added, as more communication is required. With distributed systems, you either need to accept communication costs or modify your abstractions to get closer to an embarrassingly parallel system. Since speedup is trivial when the problem is embarrassingly parallel, you don&#039;t get much praise for achieving it.&lt;br /&gt;
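The split-and-distribute structure can be sketched in a few lines of Python (a toy illustration; analyze and run_parallel are made-up names, and local threads stand in for the separate machines a real deployment would use):

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(chunk):
    # Stand-in for per-work-unit analysis (e.g. scanning one slice of
    # telescope data). Chunks are independent: workers never talk to
    # each other.
    return sum(x * x for x in chunk)

def run_parallel(data, n_units=4):
    # Split the data into independent work units...
    size = (len(data) + n_units - 1) // n_units
    units = [data[i:i + size] for i in range(0, len(data), size)]
    # ...and farm them out. Results are combined only once at the end,
    # so the communication-to-computation ratio stays tiny.
    with ThreadPoolExecutor(max_workers=n_units) as pool:
        return sum(pool.map(analyze, units))
```

Because the units share nothing, adding workers speeds things up until the final sum (the only communication) dominates.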
&lt;br /&gt;
SETI is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload. The inherent nature of the problem lends itself to being divided into work units and computed in parallel without any need to consolidate the results. It is called &amp;quot;embarrassingly parallel&amp;quot; because there is little to no effort required to distribute the workload in parallel.  &lt;br /&gt;
&lt;br /&gt;
Another example of an &amp;quot;embarrassingly parallel&amp;quot; workload in what we have covered so far could be web indexing on GFS. Any file system we have discussed so far that doesn&#039;t trust its clients could be modeled to work as a public resource computing system.&lt;br /&gt;
&lt;br /&gt;
Note: Public resource computing is also very similar to MapReduce, which we will be discussing later in the course. Make sure to keep public resource computing in mind when we reach this.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_19&amp;diff=19087</id>
		<title>DistOS 2014W Lecture 19</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_19&amp;diff=19087"/>
		<updated>2014-04-24T17:33:40Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Dynamo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Dynamo ==&lt;br /&gt;
&lt;br /&gt;
* Key value-store.&lt;br /&gt;
* Query model: key-value only&lt;br /&gt;
* Highly available, always writable.&lt;br /&gt;
* Guarantee Service Level Agreements (SLA).&lt;br /&gt;
* 0-hop DHT: it has a direct link to the destination. Each node has a complete view of the system locally. No dynamic routing.&lt;br /&gt;
* Dynamo sacrifices consistency under certain failure scenarios.&lt;br /&gt;
* Consistent hashing to partition key-space: the output range of a hash function is treated as a fixed circular space or “ring”.&lt;br /&gt;
* Key-space is linear and the nodes partition it.&lt;br /&gt;
* “Virtual nodes”: each server can be responsible for more than one virtual node.&lt;br /&gt;
* Each data item is replicated at N hosts.&lt;br /&gt;
* “preference list”: The list of nodes that is responsible for storing a particular key.&lt;br /&gt;
* Sacrifice strong consistency for availability.&lt;br /&gt;
** Eventual consistency.&lt;br /&gt;
* Decentralized, P2P, limited administration.&lt;br /&gt;
* It works at the scale of about 100 servers; it is not designed to be much bigger.&lt;br /&gt;
* Application/client specific conflict resolution.&lt;br /&gt;
* Designed to be flexible&lt;br /&gt;
** &amp;quot;Tuneable consistency&amp;quot;&lt;br /&gt;
** Pluggable local persistence: BDB, MySQL.&lt;br /&gt;
&lt;br /&gt;
Amazon&#039;s motivating use case is that at no point, in a customer&#039;s shopping cart, should any newly added item be dropped. Dynamo should be highly available and always writeable.&lt;br /&gt;
&lt;br /&gt;
Amazon has a service-oriented architecture. A response to a client is a composite of many services, so SLAs were a HUGE consideration when designing Dynamo. Amazon needed low latency and high availability to ensure a good user experience when aggregating all the services together.&lt;br /&gt;
&lt;br /&gt;
Traditional RDBMSs emphasise ACID compliance. Amazon found that ACID compliance led to systems with far less availability. It&#039;s hard to have both consistency and availability at the same time; see the [http://en.wikipedia.org/wiki/CAP_theorem CAP Theorem]. Dynamo can, and usually does, sacrifice consistency for availability. They use the terms &amp;quot;eventual consistency&amp;quot; and &amp;quot;tunable consistency&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The key range is partitioned according to a consistent hashing algorithm, which treats the output range of the hash function as a fixed circular space or “ring”. Any time a new node joins, it takes a token which decides its position on the ring. Every node becomes the owner of the key range between itself and the previous node on the ring, so any time a node joins or leaves, it only affects its neighbour nodes. Dynamo has a notion of virtual nodes, where one machine can host more than one node, which allows the load to be adjusted according to each machine&#039;s capability. &lt;br /&gt;
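The ring behaviour described above can be sketched as follows (a minimal illustration with invented class and method names; real Dynamo adds virtual nodes, replication, and preference lists on top of this):

```python
import bisect
import hashlib

def _hash(key):
    # Map any string onto the fixed circular key space.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self):
        self._tokens = []   # sorted node positions on the ring
        self._nodes = {}    # token -> node name

    def add_node(self, name):
        token = _hash(name)
        bisect.insort(self._tokens, token)
        self._nodes[token] = name

    def owner(self, key):
        # Walk clockwise to the first node at or after the key's
        # position, wrapping around the end of the ring.
        i = bisect.bisect_left(self._tokens, _hash(key)) % len(self._tokens)
        return self._nodes[self._tokens[i]]
```

The payoff is exactly the neighbour-only property from the notes: when a node joins, the only keys that change owner are those on the arc the new node claims.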
&lt;br /&gt;
Dynamo uses replication to provide availability: each key-value pair is replicated at N nodes (N can be configured by the application that uses Dynamo).&lt;br /&gt;
&lt;br /&gt;
Each node has a complete view of the network: a node knows the key range that every node supports. Any time a node joins, gossip-based protocols are used to inform every node about the key range changes. This allows Dynamo to be a 0-hop network. 0-hop means it is logically a 0-hop network; IP routing is still required to actually physically get to the node. This 0-hop approach is different from typical distributed hash tables, where routing and hops are used to find the node responsible for a key (e.g., Tapestry). Dynamo can do this because the system is deployed on trusted, fully known networks.&lt;br /&gt;
&lt;br /&gt;
Dynamo is deployed on trusted networks (i.e., for Amazon&#039;s internal applications), so it doesn&#039;t have to worry about making the system secure. Compare this to OceanStore.&lt;br /&gt;
&lt;br /&gt;
When compared to BigTable, Dynamo typically scales to hundreds of servers, not thousands. That is not to say that Dynamo cannot scale; we need to understand the difference between the use cases for BigTable and Dynamo.&lt;br /&gt;
&lt;br /&gt;
A &amp;quot;write&amp;quot; on any replica is never held off to serialize updates for consistency. Dynamo will eventually try to reconcile the differences between divergent versions (based on their version histories); if it cannot do so, conflict resolution is left to the client application that reads the data from Dynamo (if there is more than one version of a replica, all the versions are passed to the client along with their histories, and the client must reconcile the changes).&lt;br /&gt;
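The Dynamo paper tracks these version histories with vector clocks. The core reconciliation idea can be sketched as follows (toy code with invented names, not the real implementation):

```python
def descends(a, b):
    # True if vector clock `a` has seen every event in `b`
    # (clocks are dicts mapping node name -> update counter).
    return all(a.get(node, 0) >= count for node, count in b.items())

def reconcile(versions):
    # Drop any version that some other version strictly dominates;
    # whatever survives is a genuine conflict that must be handed to
    # the client application (e.g. a shopping cart merges item sets).
    survivors = []
    for value, clock in versions:
        dominated = any(descends(other, clock) and other != clock
                        for _, other in versions)
        if not dominated:
            survivors.append((value, clock))
    return survivors
```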
&lt;br /&gt;
== Bigtable ==&lt;br /&gt;
&lt;br /&gt;
* BigTable is a distributed storage system for managing structured data.&lt;br /&gt;
* Designed to scale to a very large size&lt;br /&gt;
* More focused on consistency than Dynamo.&lt;br /&gt;
&lt;br /&gt;
* A BigTable is a sparse, distributed persistent multi-dimensional sorted map.&lt;br /&gt;
* Column oriented DB.&lt;br /&gt;
** Streaming chunks of columns is easier than streaming entire rows.&lt;br /&gt;
&lt;br /&gt;
* Data Model: rows made up of column families.&lt;br /&gt;
** Eg. Row: the page URL. Column families would either be the content, or the set of inbound links.&lt;br /&gt;
** Each column in a column family has copies. Timestamped.&lt;br /&gt;
&lt;br /&gt;
* Tablets: large tables are broken into tablets at row boundaries, and each tablet holds a contiguous range of sorted rows.&lt;br /&gt;
** Immutable b/c of GFS. Deletion happens via garbage collection.&lt;br /&gt;
&lt;br /&gt;
* An SSTable provides a persistent, ordered, immutable map from keys to values, where both keys and values are arbitrary byte strings.&lt;br /&gt;
* Metadata operations: Create/delete tables, column families, change metadata.&lt;br /&gt;
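The data model above can be illustrated as a sorted map keyed by (row, column, timestamp) (a toy sketch with invented names, nothing like the real implementation):

```python
import bisect

class MiniBigtable:
    # A sorted map from (row, column, timestamp) to an arbitrary
    # byte-string value, as in the BigTable data model.
    def __init__(self):
        self._keys = []    # sorted (row, column, -timestamp) triples
        self._cells = {}

    def put(self, row, column, timestamp, value):
        key = (row, column, -timestamp)  # newest timestamp sorts first
        if key not in self._cells:
            bisect.insort(self._keys, key)
        self._cells[key] = value

    def get(self, row, column):
        # Return the newest version of the cell, or None if absent.
        i = bisect.bisect_left(self._keys, (row, column, float("-inf")))
        if i < len(self._keys) and self._keys[i][:2] == (row, column):
            return self._cells[self._keys[i]]
        return None
```

Keeping the keys sorted is what makes row-range scans (and hence tablets, which are contiguous ranges of sorted rows) cheap.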
&lt;br /&gt;
===Implementation:===&lt;br /&gt;
&lt;br /&gt;
* Centralized hierarchy.&lt;br /&gt;
* Three major components: client library, one master server, many tablet servers.&lt;br /&gt;
&lt;br /&gt;
* Master server&lt;br /&gt;
** Assigns tablets to tablet server.&lt;br /&gt;
** Detects tablet additions and removals&lt;br /&gt;
** garbage collection on GFS.&lt;br /&gt;
&lt;br /&gt;
* Tablet Servers&lt;br /&gt;
** holds tablet locations.&lt;br /&gt;
** Manages multiple tablets (thousands per tablet server)&lt;br /&gt;
** Handles I/O.&lt;br /&gt;
&lt;br /&gt;
* Client Library&lt;br /&gt;
** What devs use.&lt;br /&gt;
** Caches tablet locations&lt;br /&gt;
&lt;br /&gt;
=== Consider the following ===&lt;br /&gt;
&lt;br /&gt;
Can BigTable be used in a shopping cart type of scenario, where low latency and availability are the main focus? Can it be used like Dynamo? Yes, it can, but not as well. BigTable would have more latency because it was designed for data processing, not for such a scenario; Dynamo was designed for different use cases. There is no one solution that solves all the problems in the world of distributed file systems--no silver bullet, no one-size-fits-all. Systems are usually designed for specific use cases and work best for them. If the need arises, they can later be molded to work in other scenarios as well, and they may provide good enough performance for those later goals, but they will work best for the use cases that were their original targets.&lt;br /&gt;
&lt;br /&gt;
* BigTable -&amp;gt; Highly consistent, Data Processing, Map Reduce, semi structured store&lt;br /&gt;
* Dynamo -&amp;gt; High availability, low latency, key-value store&lt;br /&gt;
&lt;br /&gt;
== General talk ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Read the introduction and conclusion of each paper, and think about the use cases in the paper more than how the authors solved the problem.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=19086</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=19086"/>
		<updated>2014-04-24T17:15:55Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Trust Model and Fault Tolerance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers that would be covered in upcoming lectures have been posted on Wiki. These papers will be more difficult in comparison to the papers we have covered so far, so we should be prepared to allot more time for studying these papers and come prepared in class. We may abandon the way of discussing the papers in group and instead everyone would ask the questions about what,they did not understand from paper so it would allow us to discuss the technical detail better.&lt;br /&gt;
Professor will not be taking the next class, instead our TA would discuss the two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables.&lt;br /&gt;
&lt;br /&gt;
After looking at the material from today, we will also be looking at how we can get the kind of distribution that we get with public resource computing, but with greater flexibility.&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals and out of which professor found 4 to be in the state of getting accepted and has graded them 10/10. professor has mailed to everyone with the feedback about the project proposal so that we can incorporate those comments and submit the project proposals by coming Saturday ( the extended deadline). the deadline has been extended so that every one can work out the flaws in their proposal and get the best grades (10/10).&lt;br /&gt;
Project Presentation are to be held on 1st and 3rd april. People who got 10/10 should be ready to present on Tuesday as they are ahead and better prepared for it, there should be 6 presentation on Tuesday and rest on Thursday.   &lt;br /&gt;
Under-grad will have their final exam on 24th April. 24th April is also the date to turn-in the final project report.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Public Resource Computing (March 11)==&lt;br /&gt;
&lt;br /&gt;
* Anderson et al., &amp;quot;SETI@home: An Experiment in Public-Resource Computing&amp;quot; (CACM 2002) [http://dx.doi.org/10.1145/581571.581573 (DOI)] [http://dl.acm.org.proxy.library.carleton.ca/citation.cfm?id=581573 (Proxy)]&lt;br /&gt;
* Anderson, &amp;quot;BOINC: A System for Public-Resource Computing and Storage&amp;quot; (Grid Computing 2004) [http://dx.doi.org/10.1109/GRID.2004.14 (DOI)] [http://ieeexplore.ieee.org.proxy.library.carleton.ca/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382809 (Proxy)]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Keywords ===&lt;br /&gt;
BOINC &amp;amp; SETI@Home: Lowered entry barriers, master/slave relationship, work units, [Embarrassingly_parallel http://en.wikipedia.org/wiki/Embarrassingly_parallel], inverted use-cases, gamification, redundant computing, consentual bot nets, centralized authority, untrusted clients, replication as reliability, exponential backoff, limited server reliability engineering.&lt;br /&gt;
&lt;br /&gt;
Embarrassingly parallel: ease of parallization, communication to computation ratio, discrete work units, abstractions help, map reduce.&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
The paper assigned for readings were on SETI and BOINC. BOINC is the system SETI is built upon, there are other projects running on the same system like Folding@home etc. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How are they similar in design, purpose, and technologies? How is it different?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use, way of sharing resources. This is interesting as it differs from some of the systems we have looked at that deal with the sharing of information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem. SETI@home and folding@home give examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install the clients on their machines, running the program to work on work units that are sent to their clients in return for credits. &lt;br /&gt;
&lt;br /&gt;
In the past, it has been institutions, such as universities, running services with other people connecting in to use said service. Public resource computing turns this use case on its head, having the institutiton (e.g., the university) being the one using the service while other people are contributing to said service voluntarily. In the files systems we have covered so far, people would want access to the files stored in a network system, here a system wants to access people&#039;s machines to utilize the processing power of their machine. &lt;br /&gt;
&lt;br /&gt;
Since they are contributing voluntarily, how do you make these users care about the system if something were to happen? The gamification of the system causes many users to become invested in the system. People are doing work for credits and those with the most credits are showcased as major contributors. They can also see the amount of resources (e.g., process cycles) they have devoted to the cause on the GUI of the installed client. When the client produces results for the work unit it was processing, it sends the result to the server.&lt;br /&gt;
&lt;br /&gt;
Important to the design of the BOINC platform is that it was easily deployed by scientists (ie. non IT specialists). It was meant to lower the entry barrier for the types of scientific computing that lent itself to being embarrassingly parallel. The platform used a simple design with commodity software (PHP, Python, MySQL).&lt;br /&gt;
&lt;br /&gt;
For fault tolerance against malicious clients and faulty processors, redundant computing is done: work units are processed multiple times.&lt;br /&gt;
A work unit is retired from the server in one of the following two cases:&lt;br /&gt;
# The server receives the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for the work unit, and the answer the majority gave is taken.&lt;br /&gt;
# The server has transmitted the work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and has not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected responses. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
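The retirement rule above can be sketched in a few lines. This is a minimal illustration of the idea, not actual BOINC server code; the function name and signature are invented for the example.

```python
from collections import Counter

# Hypothetical sketch of BOINC-style redundant computing: each work unit is
# replicated to several clients; the server retires it once it has n results
# (taking the majority answer) or after m sends with no quorum.
def retire_work_unit(results, n, m, times_sent):
    """Return (status, answer) for one work unit."""
    if len(results) >= n:
        # Majority vote masks faulty or malicious clients.
        answer, votes = Counter(results).most_common(1)[0]
        return ("done", answer)
    if times_sent >= m:
        # Gave up: this unit may simply never be processed.
        return ("abandoned", None)
    return ("pending", None)
```

Raising m makes the "abandoned" branch less likely, at the cost of more redundant sends.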
&lt;br /&gt;
In the case of SETI@Home, the amount of available work units is fixed. The system scales by increasing the amount of redundant computing. If more clients join the system, they just end up getting the same work units.&lt;br /&gt;
&lt;br /&gt;
=== Comparison to Botnets ===&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource: people voluntarily give up their extra compute cycles for projects (this is a little like donating blood, with public resource computing as the vampire). Looking at public resource computing like this, we can contrast it with a botnet. What is the difference? Both systems utilize client machines to perform or aid in some task. &lt;br /&gt;
&lt;br /&gt;
The answer: consent.&lt;br /&gt;
&lt;br /&gt;
You are consensually contributing to a project rather than being (unknowingly) forced to. Other differences are the ends/resources that you want as well as reliability. With a botnet, you can trust that a higher proportion of your users are following your commands exactly (as they have no idea they are performing them). Whereas, in public resource computing, how can you guarantee that clients are doing what you want? You can&#039;t. You can only verify the results.&lt;br /&gt;
  &lt;br /&gt;
=== General Comparisons ===&lt;br /&gt;
A basic comparison with the file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
# Inverted use cases. In the file systems we have covered so far, clients wanted access to files stored in a network system; here, a system wants access to clients&#039; machines to utilize their processing power. The flow is inverted.&lt;br /&gt;
# Other file systems were about many clients sharing data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
# It is nothing like OceanStore, where there is no centralized authority. In BOINC, a master/slave relationship between the centralized server and the clients installed across users&#039; machines can still be seen; in that sense it is more like GFS, which also had a centralized metadata server.&lt;br /&gt;
# Public resource systems are like botnets, except that people install the clients with consent and there is no need for communication between the clients (it is not a peer-to-peer network). Clients could be made to communicate peer to peer, but that would risk security, as clients are not trusted in the network.&lt;br /&gt;
# Skype was modelled much like a public resource computing network (before Microsoft took over). The whole model of Skype was that the infrastructure ran on the computers of those who had downloaded the client (like a consensual botnet). Once a person downloaded the client, they became part of the system. As with public resource computing, you donated some of your resources to support the distributed infrastructure. It was not assumed that everyone was reliable, only that some people are reliable some of the time. The network would elect supernodes to act as routers; these were the machines with higher reliability and better processing power. After Microsoft&#039;s takeover, the supernodes were centralized and supernode election was removed from the system.&lt;br /&gt;
&lt;br /&gt;
=== Trust Model and Fault Tolerance ===&lt;br /&gt;
&lt;br /&gt;
In this central model, you have a central resource that distributes work to clients, which process the work and send back results; once they do, they can be sent more work. In this model, can you trust the client to complete the computation successfully? Not necessarily: untrustworthy clients could send back rubbish answers and bad results.&lt;br /&gt;
&lt;br /&gt;
So, how does SETI address fault tolerance? It uses replication and redundant computing for reliability: work units are assigned to multiple clients, and the results returned to the server can be analyzed for outliers in order to detect malicious users. That addresses fault tolerance from the client&#039;s perspective. &lt;br /&gt;
&lt;br /&gt;
However, SETI has a centralized server, which can go down. When it does, clients use exponential back-off, waiting before trying to send their results again. Otherwise, whenever the server came back up, many clients would try to access it at once and might crash it once more; essentially, the server would have manufactured its own DDoS attack due to its own inadequacies. The exponential back-off approach is similar to the one used in TCP congestion control.  &lt;br /&gt;
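The back-off idea can be sketched as follows. This is an illustrative snippet, not the actual SETI@home client logic; the function name, base, and cap are invented for the example.

```python
import random

# Illustrative exponential back-off with jitter: after a failed upload, wait
# roughly twice as long before each retry, and randomize the wait so that
# thousands of clients do not hammer a recovering server in lockstep.
def backoff_delay(attempt, base=1.0, cap=3600.0):
    """Delay in seconds before retry number `attempt` (0, 1, 2, ...)."""
    delay = min(cap, base * (2 ** attempt))
    # Full jitter: spread clients out over the interval [0, delay].
    return random.uniform(0, delay)
```

The doubling is the same idea TCP uses for retransmission timers; the jitter is what prevents the synchronized reconnect stampede described above.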
&lt;br /&gt;
It should be noted that there is almost no reliability engineering here, though. These are just standard servers running with one backup that is manually failed over to. This gives an idea of how asymmetric the relationship is. &lt;br /&gt;
&lt;br /&gt;
One reason for this becomes clear when looking at the actual service and who is running it. Reliability matters most when many people use a service and would be upset were it to go down. In this case, it is the university using the service, and clients are helping out by providing resources; if the service goes down, it is the university&#039;s problem, and it can deal with it on its own. It is interesting to compare this strategy to highly reliable systems like Ceph or OceanStore, which can recover data when a node crashes.&lt;br /&gt;
&lt;br /&gt;
The idea of redundancy relates to OceanStore a little, but how would OceanStore map onto public resource computing? In place of the OceanStore metadata cluster, there is a central server; in place of the data store, there are machines doing computation. What maps onto this model of public resource computing is the notion of having one central thing and a bunch of outlying nodes. This is very much a master/slave relationship, though a voluntary one. In this relationship, CPU cycles are cheap but bandwidth is expensive, which is why work units are sent infrequently. Storage is in between: sometimes data is pushed to the clients, and when this is done, the resemblance of public resource computing to OceanStore is stronger.&lt;br /&gt;
&lt;br /&gt;
=== Embarrassingly Parallel ===&lt;br /&gt;
&lt;br /&gt;
When you are doing parallel computations, you have to do a mixture of computation and communication. You&#039;re doing computation separately, but you always have to do some communication. But, how much communication do you have to do for every unit of computation? In some cases, there are many dependencies meaning that a high amount of communication is required (e.g., weather system simulations).&lt;br /&gt;
&lt;br /&gt;
Embarrassingly parallel means that a given problem requires a minimum of communication between the pieces of work. Typically you have a bunch of data to analyze, and it is all independent, so you can just split up and distribute the work for analysis. In an embarrassingly parallel problem, parallelization is trivial: because of the minimal communication, the more processors you add, the faster the system runs. For problems that are not embarrassingly parallel, the system can actually slow down as more processors are added, because more communication is required. With distributed systems, you either accept the communication costs or modify your abstractions to get closer to an embarrassingly parallel system. Since speedup is trivial when the problem is embarrassingly parallel, you don&#039;t get much praise for achieving it.&lt;br /&gt;
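The structure of such a job can be sketched in a few lines. This is a toy illustration with invented names; threads stand in here for remote volunteer machines.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of an embarrassingly parallel job: the input is split into
# independent work units, each unit is analyzed with no communication
# between workers, and the only coordination is collecting the results.
def analyze(unit):
    return sum(x * x for x in unit)        # any per-unit computation

def run_parallel(data, unit_size=4, workers=4):
    # Independent work units: no unit needs data from any other unit.
    units = [data[i:i + unit_size] for i in range(0, len(data), unit_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze, units))   # order-preserving gather
```

Because the units share nothing, adding workers scales the throughput; a weather simulation, by contrast, would need `analyze` calls to exchange boundary data on every step.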
&lt;br /&gt;
SETI is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload. The inherent nature of the problem lends itself to being divided into work units and computed in parallel without any need to consolidate intermediate results. It is called &amp;quot;embarrassingly parallel&amp;quot; because there is little to no effort required to distribute the workload in parallel.  &lt;br /&gt;
&lt;br /&gt;
Another example of an &amp;quot;embarrassingly parallel&amp;quot; workload in what we have covered so far is web indexing on GFS. Any file system we have discussed that doesn&#039;t trust its clients could be modelled to work as a public resource computing system.&lt;br /&gt;
&lt;br /&gt;
Note: Public resource computing is also very similar to MapReduce, which we will be discussing later in the course. Make sure to keep public resource computing in mind when we reach it.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19085</id>
		<title>DistOS 2014W Lecture 20</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19085"/>
		<updated>2014-04-24T16:15:50Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Back to Cassandra */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cassandra ==&lt;br /&gt;
&lt;br /&gt;
Cassandra is essentially running a BigTable interface on top of a Dynamo infrastructure.  BigTable uses GFS&#039; built-in replication and Chubby for locking.  Cassandra uses gossip algorithms (similar to Dynamo): [http://dl.acm.org/citation.cfm?id=1529983 Scuttlebutt].  &lt;br /&gt;
&lt;br /&gt;
=== A brief look at Open Source ===&lt;br /&gt;
&lt;br /&gt;
Initially, Anil talked about Google&#039;s versus Facebook&#039;s approaches to technology.&lt;br /&gt;
* Google has been good at publishing papers on their distributed system structure. &lt;br /&gt;
* Google developed its technology internally and used it for competitive advantage.&lt;br /&gt;
* People try to implement Google’s methods in the open source arena.&lt;br /&gt;
* Google had proprietary technology but Facebook wanted open design.&lt;br /&gt;
* Facebook developed its technology in an open source manner. They needed to create an open source community to keep up. &lt;br /&gt;
* Facebook managed to be very good at contributing to the open source projects they were a part of and at staying active in the open source projects they started. Google works internally and pitches its code &amp;quot;over the fence.&amp;quot;&lt;br /&gt;
* He talked a little bit about licences. With GPLv3 you have to provide source code with the binary; the AGPL additionally requires source when the software is offered as a network service.&lt;br /&gt;
* GFS is optimized for data-analysis clusters.&lt;br /&gt;
&lt;br /&gt;
While discussing HBase versus Cassandra, we discussed why two projects with the same notion are supported by Apache as a community. For any tool in computer science, particularly software tools, it is important to have more than one good implementation; about the only time this doesn&#039;t happen is because of market interference. An example of this was Microsoft Word, which took out most of the competition. Different implementations commonly arise because programmers disagree on concepts and designs. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hadoop is a set of technologies that represents the open source equivalent of Google&#039;s infrastructure:&lt;br /&gt;
* Cassandra -&amp;gt; ???&lt;br /&gt;
* HBase -&amp;gt; BigTable&lt;br /&gt;
* HDFS -&amp;gt; GFS&lt;br /&gt;
* Zookeeper -&amp;gt; Chubby&lt;br /&gt;
&lt;br /&gt;
=== Back to Cassandra ===&lt;br /&gt;
&lt;br /&gt;
* Cassandra is basically what you get if you take a key-value store like Dynamo and extend it to look like BigTable.&lt;br /&gt;
* It is not just a key-value store; it is a multi-dimensional map. You can look up different columns, etc. The data is more structured than in a key-value store.&lt;br /&gt;
* In a key-value store, you can only look up by key. Cassandra is much richer than this.&lt;br /&gt;
* A fundamental difference in Cassandra is that adding columns is trivial. &lt;br /&gt;
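The "multi-dimensional map" shape can be sketched as nested dictionaries. This is a hedged illustration of the data model only, not Cassandra's storage engine; the function names are invented.

```python
# Sketch of the Cassandra data model as a multi-dimensional map:
# row key maps to column family maps to column name maps to value.
# Adding a new column is just inserting a new key, hence "trivial".
table = {}

def put(row_key, family, column, value):
    table.setdefault(row_key, {}).setdefault(family, {})[column] = value

def get(row_key, family, column):
    return table[row_key][family][column]
```

Contrast with a flat key-value store, where the only lookup dimension is the single key; here you can address individual columns within a row.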
&lt;br /&gt;
BigTable vs. Cassandra:&lt;br /&gt;
* BigTable and Cassandra expose similar APIs.&lt;br /&gt;
* Cassandra seems to be lighter weight.&lt;br /&gt;
* BigTable depends on GFS; Cassandra depends on the server&#039;s local file system. Anil feels a Cassandra cluster is easy to set up. &lt;br /&gt;
* BigTable is designed for stream-oriented batch processing; Cassandra is for handling online/realtime/high-speed workloads.&lt;br /&gt;
&lt;br /&gt;
Schema design is explained through the inbox search example, but the paper does not make clear what the table actually looks like. Anil thinks they store a lot of data with the messages, which makes the table messy.&lt;br /&gt;
	&lt;br /&gt;
Apache ZooKeeper is used for distributed configuration; it also bootstraps and configures new nodes. It is similar to Chubby. ZooKeeper handles node-level information, while the gossip protocol is more about key-partitioning information and distributing that information amongst nodes. &lt;br /&gt;
&lt;br /&gt;
Cassandra uses a modified version of the accrual failure detector. The idea of accrual failure detection is that the failure detection module emits a value, phi, which represents a suspicion level for each monitored node. The value of phi is expressed on a scale that is dynamically adjusted to reflect network and load conditions at the monitored nodes.&lt;br /&gt;
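A heavily simplified sketch of the phi calculation is below. It assumes exponentially distributed heartbeat inter-arrival times (a simplification; the published detector and Cassandra's variant differ in the distribution they fit), and the function name is invented.

```python
import math

# Simplified phi accrual failure detector: instead of a binary alive/dead
# verdict, emit a suspicion level phi that grows the longer a heartbeat is
# overdue relative to the observed mean inter-arrival time.
def phi(time_since_last, mean_interval):
    # Probability that a heartbeat would arrive even later than this...
    p_later = math.exp(-time_since_last / mean_interval)
    # ...expressed on a -log10 scale: phi of 1 means roughly 10% chance
    # the node is still fine, phi of 3 means roughly 0.1%, and so on.
    return -math.log10(p_later)
```

A node is suspected once phi crosses an application-chosen threshold; because `mean_interval` is estimated from recent heartbeats, the scale adapts to network and load conditions, as described above.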
&lt;br /&gt;
Files are written to disk sequentially and are never mutated. This way, reading a file does not require locks. Garbage collection takes care of deletion.&lt;br /&gt;
&lt;br /&gt;
Cassandra writes in an immutable way, much like functional programming. There is no assignment in functional programming; it tries to eliminate side effects. Data is just bound: you associate a name with a value. &lt;br /&gt;
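The append-only, never-mutate discipline can be sketched with a toy log. This is an illustration of the general technique (append-only storage with tombstones and compaction), not Cassandra's actual SSTable format; all names are invented.

```python
# Minimal append-only sketch: records are only ever appended, never
# rewritten in place, so readers need no locks. A "delete" is just a
# later tombstone record, cleaned up by garbage collection/compaction.
class AppendOnlyLog:
    def __init__(self):
        self._records = []            # stand-in for a sequential file

    def append(self, key, value):     # value=None acts as a tombstone
        self._records.append((key, value))

    def lookup(self, key):
        # Last write wins: scan from the newest record backwards.
        for k, v in reversed(self._records):
            if k == key:
                return v
        return None

    def compact(self):
        # "Garbage collection": keep only the latest live value per key.
        latest = {}
        for k, v in self._records:
            latest[k] = v
        self._records = [(k, v) for k, v in latest.items() if v is not None]
```

Note the functional flavour: existing records are never assigned over, only superseded by newer ones.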
&lt;br /&gt;
Mutation is difficult to parallelize because the current state must be coordinated. &lt;br /&gt;
To escape an abstraction: create code, add it to the operating system, and change the semantics of the operating system to better suit the application. &lt;br /&gt;
In general, extensible systems are bad. &lt;br /&gt;
&lt;br /&gt;
Cassandra: &lt;br /&gt;
* Uses consistent hashing (like most DHTs)&lt;br /&gt;
* Lighter weight &lt;br /&gt;
* Almost all of the readings are part of Apache&lt;br /&gt;
* Designed more for low-latency online updates and more optimized for reads. &lt;br /&gt;
* Once they write to disk, they only read back&lt;br /&gt;
* Scalable multi-master database with no single point of failure&lt;br /&gt;
* There is a reason for not giving out the complete details of the table schema: it is probably used for more than just inbox search&lt;br /&gt;
* All data in one row of a table &lt;br /&gt;
* It&#039;s not just a key-value store holding one big blob of data. &lt;br /&gt;
* Gossip-based protocol (Scuttlebutt). Every node is aware of every other.&lt;br /&gt;
* Fixed circular ring &lt;br /&gt;
* Consistency issues are not addressed at all: writes are done in an immutable way and never changed. &lt;br /&gt;
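The fixed circular ring with consistent hashing can be sketched as follows. This is a toy version (single position per node, no virtual nodes or replication), with invented names, assuming an md5-based hash.

```python
import bisect
import hashlib

# Toy consistent-hashing ring of the kind Cassandra and most DHTs use:
# nodes own positions on a fixed circular ring, and a key is stored on the
# first node clockwise from the key's hash. Adding or removing one node
# only moves the keys in its arc, not the whole keyspace.
RING_BITS = 32

def _pos(s):
    # Hash a node name or key to a position on the fixed circular ring.
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % (2 ** RING_BITS)

class Ring:
    def __init__(self, nodes):
        self._ring = sorted((_pos(n), n) for n in nodes)
        self._positions = [p for p, _ in self._ring]

    def node_for(self, key):
        # First node clockwise from the key's position; wrap at the end.
        i = bisect.bisect_right(self._positions, _pos(key))
        return self._ring[i % len(self._ring)][1]
```

Real deployments assign each physical node many virtual positions so that load stays even and a departing node's keys are spread across the survivors.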
&lt;br /&gt;
Token rings were an older style of network protocol with a similar ring structure.&lt;br /&gt;
What sorts of computational systems avoid changing data? Systems that implement functional-like semantics.&lt;br /&gt;
&lt;br /&gt;
== Comet ==&lt;br /&gt;
&lt;br /&gt;
The major idea behind Comet is triggers/callbacks. There is an extensive literature on extensible operating systems, which basically means adding code to the operating system to better suit your application. &amp;quot;Generally, extensible systems suck.&amp;quot; -[[User:Soma]] This approach was popular before operating systems were open source.&lt;br /&gt;
&lt;br /&gt;
[https://www.usenix.org/conference/osdi10/comet-active-distributed-key-value-store The presentation video of Comet]&lt;br /&gt;
&lt;br /&gt;
Comet seeks to greatly expand the application space for key-value storage systems through application-specific customization. A Comet storage object is a &amp;lt;key,value&amp;gt; pair. Each Comet node stores a collection of active storage objects (ASOs) that consist of a key, a value, and a set of handlers. Comet handlers run as a result of timers or storage operations, such as get or put, allowing an ASO to take dynamic, application-specific actions to customize its behaviour. Handlers are written in a simple sandboxed extension language, providing safety and isolation. An ASO can modify its environment, monitor its execution, and make dynamic decisions about its state.&lt;br /&gt;
&lt;br /&gt;
The researchers try to provide the ability to extend a DHT without requiring a substantial investment of effort to modify its implementation. They implement isolation and safety by restricting system access, restricting resource consumption, and restricting within-Comet communication.&lt;br /&gt;
&lt;br /&gt;
* Provides callbacks (a.k.a. database triggers)&lt;br /&gt;
* Provides DHT platform that is extensible at the application level&lt;br /&gt;
* Uses Lua&lt;br /&gt;
* Provided extensibility in an untrusted environment. Dynamo, by contrast, was extensible but only in a trusted environment.&lt;br /&gt;
* Why do we care? We don&#039;t really. Why would you want this extensibility? You wouldn&#039;t; it isn&#039;t worth the cost. Current systems already allow for tunability. This extensibility only has use under undesirable circumstances.&lt;br /&gt;
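The ASO idea can be rendered as a small sketch. This is a hypothetical Python rendering for illustration only: the real system sandboxes handlers written in Lua, and all names here are invented.

```python
# Hypothetical sketch of a Comet-style active storage object (ASO): a
# key/value pair plus handlers that fire on storage operations such as
# get and put, letting the object customize its own behaviour.
class ActiveStorageObject:
    def __init__(self, key, value, on_get=None, on_put=None):
        self.key, self.value = key, value
        self.on_get, self.on_put = on_get, on_put
        self.gets = 0                       # example of local ASO state

    def get(self):
        self.gets += 1                      # ASO monitoring its own use
        if self.on_get:
            self.on_get(self)               # application-specific action
        return self.value

    def put(self, value):
        if self.on_put:
            value = self.on_put(self, value)  # handler may rewrite the value
        self.value = value
```

In the real system such handlers run inside a restricted sandbox so an untrusted application cannot abuse the node hosting the object.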
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* If someone wants to understand consistent hashing in detail, here is a blog post that explains it really well; the blog has other great posts in the field of distributed systems as well:&lt;br /&gt;
http://loveforprogramming.quora.com/Distributed-Systems-Part-1-A-peek-into-consistent-hashing&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_10&amp;diff=19084</id>
		<title>DistOS 2014W Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_10&amp;diff=19084"/>
		<updated>2014-04-24T16:04:32Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Segue on drives and sequential access following GFS section */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==GFS and Ceph (Feb. 4)==&lt;br /&gt;
* [http://research.google.com/archive/gfs-sosp2003.pdf Sanjay Ghemawat et al., &amp;quot;The Google File System&amp;quot; (SOSP 2003)]&lt;br /&gt;
* [http://www.usenix.org/events/osdi06/tech/weil.html Weil et al., Ceph: A Scalable, High-Performance Distributed File System (OSDI 2006)].&lt;br /&gt;
&lt;br /&gt;
== GFS ==&lt;br /&gt;
GFS is a distributed file system designed specifically for Google&#039;s needs. Two assumptions were made while designing GFS:&lt;br /&gt;
&lt;br /&gt;
# Most data is written in the form of appends (writes at the end of a file).&lt;br /&gt;
# Data is read from files in a streaming fashion (lots of data read via sequential access).&lt;br /&gt;
&lt;br /&gt;
Because of this, they decided to emphasize performance for sequential access. These two assumptions are also why they chose such a huge chunk size (64 MB). You can easily read large blocks if you get rid of random access. Once data is written, it is rarely written &#039;&#039;&#039;over&#039;&#039;&#039; using random access.&lt;br /&gt;
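The chunk arithmetic implied by the 64 MB size is simple enough to sketch. This is an illustrative snippet, not code from the paper; the function name is invented.

```python
CHUNK_SIZE = 64 * 1024 * 1024   # GFS's 64 MB chunks

# Sketch of the client-side translation step: a byte offset within a file
# maps to a chunk index plus an offset within that chunk. Only the chunk
# index needs resolving (rarely) through the master/metadata server; the
# huge chunk size is what keeps those metadata lookups rare.
def locate(offset):
    chunk_index = offset // CHUNK_SIZE
    chunk_offset = offset % CHUNK_SIZE
    return chunk_index, chunk_offset
```

A client streaming sequentially through a file crosses a chunk boundary, and hence needs new chunk metadata, only once per 64 MB read.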
&lt;br /&gt;
* Very different design because of the workload that it is designed for:&lt;br /&gt;
** Because of the number of small files that have to be indexed for the web, it is no longer practical to have a file system that stores these individually. Too much overhead. Easier to store millions of objects as large files. Punts problem to userspace, incl. record delimitation.&lt;br /&gt;
* Don&#039;t care about latency&lt;br /&gt;
** surprising considering it&#039;s Google, the guys who change the TCP IW standard recommendations for latency.&lt;br /&gt;
* Mostly seeking (sequentially) through entire file.&lt;br /&gt;
* Paper from 2003, mentions still using 100BASE-T links.&lt;br /&gt;
* Data-heavy, metadata light (unlike Ceph). Contacting the metadata server is a rare event.&lt;br /&gt;
* Consider hardware failures as normal operating conditions:&lt;br /&gt;
** uses commodity hardware&lt;br /&gt;
** All the replication (!)&lt;br /&gt;
** Data checksumming (few file systems do checksums)&lt;br /&gt;
* Performance degrades for small random access workload; use other filesystem.&lt;br /&gt;
* Path of least resistance to scale, not to do something super CS-smart.&lt;br /&gt;
* Google used to re-index every month, swapping out indexes. Now, it&#039;s much more online. GFS is now just a layer to support a more dynamic layer.&lt;br /&gt;
* The paper seems to lack any mention of security. This FS probably could only exist on a trusted network.&lt;br /&gt;
* Implements interface similar to POSIX, but not the full standard.&lt;br /&gt;
** &#039;&#039;&#039;create, delete, open, close, read, write&#039;&#039;&#039;&lt;br /&gt;
** Unique operations too: &#039;&#039;&#039;snapshot&#039;&#039;&#039; which is low cost file duplication and &#039;&#039;&#039;record append&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== How other filesystems compare to GFS and Ceph ==&lt;br /&gt;
&lt;br /&gt;
* Other File Systems: AFS, NFS, Plan 9, traditional Unix&lt;br /&gt;
&lt;br /&gt;
* Data and metadata are held together.&lt;br /&gt;
** They did not optimize for different access patterns:&lt;br /&gt;
*** Data → big, long transfers&lt;br /&gt;
*** Metadata → small, low latency&lt;br /&gt;
** Can&#039;t scale separately&lt;br /&gt;
&lt;br /&gt;
* Designed for lower latency&lt;br /&gt;
&lt;br /&gt;
* (Mostly) designed for POSIX semantics&lt;br /&gt;
** how the requirements that lead to the ‘standard’ evolved&lt;br /&gt;
&lt;br /&gt;
* Assumed that a file is a fraction of the size of a server&lt;br /&gt;
** eg. files on a Unix system were meant to be text files.&lt;br /&gt;
** Huge files spread over many servers not even in the cards for NFS&lt;br /&gt;
** Meant for small problems, not web-scale&lt;br /&gt;
*** Google has a copy of the publicly accessible internet&lt;br /&gt;
**** Their strategy is to copy the internet to index it&lt;br /&gt;
**** Insane → insane filesystem&lt;br /&gt;
**** One file may span multiple servers&lt;br /&gt;
&lt;br /&gt;
* Even mainframes, scale-up solutions, ultra-reliable systems, with data sets bigger than RAM don&#039;t have the scale of GFS or CEPH.&lt;br /&gt;
&lt;br /&gt;
* Point-to-point access; much less load-balancing, even in AFS&lt;br /&gt;
** One server to service multiple clients.&lt;br /&gt;
** Single point of entry, single point of failure, bottleneck&lt;br /&gt;
&lt;br /&gt;
* Less focus on fault tolerance&lt;br /&gt;
** No notion of data replication.&lt;br /&gt;
&lt;br /&gt;
* Reliability was a property of the host, not the network&lt;br /&gt;
&lt;br /&gt;
==Ceph==&lt;br /&gt;
&lt;br /&gt;
* Ceph is crazy and tries to do everything&lt;br /&gt;
* GFS was very specifically designed to work in a limited scenario, under certain specific conditions, whereas CEPH is sort of generic solution- for how to build a scalable distributed file system&lt;br /&gt;
&lt;br /&gt;
* Achieves high performance, reliability, and availability through three design features: decoupled data and metadata, dynamically distributed metadata, and reliable autonomic distributed object storage.&lt;br /&gt;
** Decoupled data and metadata: metadata operations (open, close) go to the metadata cluster; clients interact directly with OSDs for I/O.&lt;br /&gt;
** Distributed metadata: metadata operations make up a large fraction of the workload. Ceph distributes this workload across many metadata servers (MDSes) to maintain the file hierarchy.&lt;br /&gt;
** Autonomic object storage: OSDs organise amongst themselves, taking advantage of their onboard CPU and memory. Ceph delegates data migration, replication, failure detection, and recovery to the cluster of OSDs.&lt;br /&gt;
&lt;br /&gt;
* Distributed Meta Data&lt;br /&gt;
** Unlike GFS&lt;br /&gt;
** Clusters of MDSes.&lt;br /&gt;
** Utilizes Dynamic Subtree partitioning: Dynamically mapped subtrees of directories to MDSes. Workloads for every subtree are monitored. Subtrees assigned to MDSes accordingly, in a coarse way.&lt;br /&gt;
&lt;br /&gt;
* Near-POSIX interface: selectively extends the interface while relaxing consistency semantics.&lt;br /&gt;
** e.g., &#039;&#039;readdirplus&#039;&#039; is an extension which optimizes for a common sequence of operations: &#039;&#039;readdir&#039;&#039; followed by multiple &#039;&#039;stat&#039;&#039;s. This requires brief caching to improve performance, which may let small concurrent changes go unnoticed.&lt;br /&gt;
* Object Storage Devices (OSDs) have some intelligence (unlike GFS), and autonomously distribute the data, rather than being controlled by a master.&lt;br /&gt;
** Uses EBOFS (instead of ext3). Implemented in user space to avoid dealing with kernel issues. Aggressively schedules disk writes.&lt;br /&gt;
** Uses hashing in the distribution process to &#039;&#039;&#039;uniformly&#039;&#039;&#039; distribute data&lt;br /&gt;
** The actual algorithm for distributing data is as follows:&lt;br /&gt;
*** file + offset → hash(object ID) → CRUSH(placement group) → OSD&lt;br /&gt;
** Each client has knowledge of the entire storage network&lt;br /&gt;
** Tracks failure groups (same breaker, switch, etc.), hot data, etc.&lt;br /&gt;
** Number of replicas is changeable on the fly, but the placement group is not&lt;br /&gt;
*** For example, if every client on the planet is accessing the same file, you can scale out for that data.&lt;br /&gt;
** You don&#039;t ask where to go, you just go, which makes this very scalable&lt;br /&gt;
&lt;br /&gt;
Any distributed file system that aims to be scalable needs to cut down on the number of messages floating around (as opposed to actual data transfer), which is what Ceph does with the CRUSH function. A client or OSD just needs to know the CRUSH algorithm and can compute the location of a file on its own (instead of asking a master server about it); this basically eliminates the traditional file allocation list approach. &lt;br /&gt;
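The file + offset → hash → placement group → OSD pipeline can be sketched as below. This is a highly simplified stand-in, not real CRUSH: the PG count, object size, hash, and ranking rule are all invented for illustration.

```python
import hashlib

# Highly simplified sketch of Ceph's placement pipeline: file + offset maps
# to an object ID, the object ID hashes to a placement group (PG), and a
# deterministic rule maps the PG to a set of OSDs. The point is that any
# client can compute the location locally, with no lookup traffic.
NUM_PGS = 128

def _h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def placement_group(file_id, offset, object_size=4 * 2**20):
    object_id = f"{file_id}.{offset // object_size}"   # file striped into objects
    return _h(object_id) % NUM_PGS

def osds_for_pg(pg, osd_ids, replicas=3):
    # Stand-in for CRUSH: a deterministic pseudo-random ranking per PG.
    # Real CRUSH also respects failure domains (racks, switches, breakers).
    ranked = sorted(osd_ids, key=lambda o: _h(f"{pg}:{o}"))
    return ranked[:replicas]
```

Because every step is a pure function of its inputs, clients and OSDs agree on placement without exchanging a single message, which is exactly the communication-for-computation trade described above.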
&lt;br /&gt;
* CRUSH is sufficiently advanced to be called magic.&lt;br /&gt;
** O(log n) of the size of the data&lt;br /&gt;
** CPUs stupidly fast, so the above is of minimal overhead&lt;br /&gt;
*** the network, despite being fast, has latency, etc. &lt;br /&gt;
*** Computation scales much better than communication.&lt;br /&gt;
&lt;br /&gt;
* Storage is composed of variable-length atoms&lt;br /&gt;
&lt;br /&gt;
*A very fun video to learn about ceph and OSD file systems - &lt;br /&gt;
https://www.youtube.com/watch?v=C3lxGuAWEWU&lt;br /&gt;
&lt;br /&gt;
= Class Discussion = &lt;br /&gt;
&lt;br /&gt;
== File Size ==&lt;br /&gt;
In Anil’s opinion, “how does file system size compare to server storage size?” is a key parameter that distinguishes the GFS and Ceph designs from earlier file systems such as NFS, AFS, and Plan 9. In the early file system designs, the file system was a fraction of the server&#039;s storage size, whereas in GFS and Ceph the file system can be orders of magnitude larger than any single server. &lt;br /&gt;
&lt;br /&gt;
== Segue on drives and sequential access following GFS section ==&lt;br /&gt;
&lt;br /&gt;
* Structure of GFS does match some other modern systems:&lt;br /&gt;
** Hard drives are like parallel tapes, very suited for streaming.&lt;br /&gt;
** Flash devices are log-structured too, but have an abstracting firmware.&lt;br /&gt;
*** They do erasure in bulk, in the &#039;&#039;&#039;background&#039;&#039;&#039;. &lt;br /&gt;
*** Used to be we needed specialized FS for [http://en.wikipedia.org/wiki/Memory_Technology_Device MTDs] to get better performance; though now we have better micro-controllers in some embedded systems to abstract away the hardware.&lt;br /&gt;
* Architectures that start big, often end up in the smallest things, such as in small storage devices.&lt;br /&gt;
&lt;br /&gt;
== Lookups vs hashing ==&lt;br /&gt;
One key aspect of the Ceph design is the attempt to replace communication with computation by using the hashing-based CRUSH mechanism. The following line from Anil epitomizes the general approach followed in the field of computer science: “If one abstraction does not work, stick another one in”.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_10&amp;diff=19083</id>
		<title>DistOS 2014W Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_10&amp;diff=19083"/>
		<updated>2014-04-24T15:59:45Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* GFS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==GFS and Ceph (Feb. 4)==&lt;br /&gt;
* [http://research.google.com/archive/gfs-sosp2003.pdf Sanjay Ghemawat et al., &amp;quot;The Google File System&amp;quot; (SOSP 2003)]&lt;br /&gt;
* [http://www.usenix.org/events/osdi06/tech/weil.html Weil et al., Ceph: A Scalable, High-Performance Distributed File System (OSDI 2006)].&lt;br /&gt;
&lt;br /&gt;
== GFS ==&lt;br /&gt;
GFS is a distributed file system designed specifically for Google&#039;s needs. Two assumptions were made while designing GFS:&lt;br /&gt;
&lt;br /&gt;
# Most data is written in the form of appends (writes at the end of a file).&lt;br /&gt;
# Data is read from files in a streaming fashion (lots of data read via sequential access).&lt;br /&gt;
&lt;br /&gt;
Because of this, they decided to emphasize performance for sequential access. These two assumptions are also why they chose such a huge chunk size (64 MB). You can easily read large blocks if you get rid of random access. Once data is written, it is rarely written &#039;&#039;&#039;over&#039;&#039;&#039; using random access.&lt;br /&gt;
&lt;br /&gt;
* Very different design because of the workload that it is designed for:&lt;br /&gt;
** Because of the number of small files that have to be indexed for the web, it is no longer practical to have a file system that stores these individually. Too much overhead. Easier to store millions of objects as large files. Punts problem to userspace, incl. record delimitation.&lt;br /&gt;
* Don&#039;t care about latency&lt;br /&gt;
** surprising considering it&#039;s Google, the guys who change the TCP IW standard recommendations for latency.&lt;br /&gt;
* Mostly seeking (sequentially) through entire file.&lt;br /&gt;
* Paper from 2003, mentions still using 100BASE-T links.&lt;br /&gt;
* Data-heavy, metadata light (unlike Ceph). Contacting the metadata server is a rare event.&lt;br /&gt;
* Consider hardware failures as normal operating conditions:&lt;br /&gt;
** uses commodity hardware&lt;br /&gt;
** All the replication (!)&lt;br /&gt;
** Data checksumming (few file systems do checksums)&lt;br /&gt;
* Performance degrades for small random access workload; use other filesystem.&lt;br /&gt;
* Path of least resistance to scale, not to do something super CS-smart.&lt;br /&gt;
* Google used to re-index every month, swapping out indexes. Now, it&#039;s much more online. GFS is now just a layer to support a more dynamic layer.&lt;br /&gt;
* The paper seems to lack any mention of security. This FS probably could only exist on a trusted network.&lt;br /&gt;
* Implements interface similar to POSIX, but not the full standard.&lt;br /&gt;
** &#039;&#039;&#039;create, delete, open, close, read, write&#039;&#039;&#039;&lt;br /&gt;
** Unique operations too: &#039;&#039;&#039;snapshot&#039;&#039;&#039; which is low cost file duplication and &#039;&#039;&#039;record append&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== How other filesystems compare to GFS and Ceph ==&lt;br /&gt;
&lt;br /&gt;
* Other File Systems: AFS, NFS, Plan 9, traditional Unix&lt;br /&gt;
&lt;br /&gt;
* Data and metadata are held together.&lt;br /&gt;
** They did not optimize for different access patterns:&lt;br /&gt;
*** Data → big, long transfers&lt;br /&gt;
*** Metadata → small, low latency&lt;br /&gt;
** Can&#039;t scale separately&lt;br /&gt;
&lt;br /&gt;
* Designed for lower latency&lt;br /&gt;
&lt;br /&gt;
* (Mostly) designed for POSIX semantics&lt;br /&gt;
** how the requirements that lead to the ‘standard’ evolved&lt;br /&gt;
&lt;br /&gt;
* Assumed that a file is a fraction of the size of a server&lt;br /&gt;
** eg. files on a Unix system were meant to be text files.&lt;br /&gt;
** Huge files spread over many servers not even in the cards for NFS&lt;br /&gt;
** Meant for small problems, not web-scale&lt;br /&gt;
*** Google has a copy of the publicly accessible internet&lt;br /&gt;
**** Their strategy is to copy the internet to index it&lt;br /&gt;
**** Insane → insane filesystem&lt;br /&gt;
**** One file may span multiple servers&lt;br /&gt;
&lt;br /&gt;
* Even mainframes, scale-up solutions, ultra-reliable systems, with data sets bigger than RAM don&#039;t have the scale of GFS or CEPH.&lt;br /&gt;
&lt;br /&gt;
* Point-to-point access; much less load-balancing, even in AFS&lt;br /&gt;
** One server to service multiple clients.&lt;br /&gt;
** Single point of entry, single point of failure, bottleneck&lt;br /&gt;
&lt;br /&gt;
* Less focus on fault tolerance&lt;br /&gt;
** No notion of data replication.&lt;br /&gt;
&lt;br /&gt;
* Reliability was a property of the host, not the network&lt;br /&gt;
&lt;br /&gt;
==Ceph==&lt;br /&gt;
&lt;br /&gt;
* Ceph is crazy and tries to do everything&lt;br /&gt;
* GFS was very specifically designed to work in a limited scenario, under certain specific conditions, whereas Ceph is a more generic solution for how to build a scalable distributed file system&lt;br /&gt;
&lt;br /&gt;
* Achieves high performance, reliability, and availability through three design features: decoupled data and metadata, dynamically distributed metadata, and reliable autonomic distributed object storage.&lt;br /&gt;
** Decoupled data and metadata: metadata operations (open, close) go to the metadata cluster; clients interact directly with OSDs for IO.&lt;br /&gt;
** Distributed metadata: metadata operations make up a large part of the workload. Ceph distributes this workload across many Metadata Servers (MDSes), which maintain the file hierarchy.&lt;br /&gt;
** Autonomic object storage: OSDs organise amongst themselves, taking advantage of their onboard CPU and memory. Ceph delegates data migration, replication, failure detection, and recovery to the cluster of OSDs.&lt;br /&gt;
&lt;br /&gt;
* Distributed Meta Data&lt;br /&gt;
** Unlike GFS&lt;br /&gt;
** Clusters of MDSes.&lt;br /&gt;
** Uses dynamic subtree partitioning: subtrees of the directory hierarchy are dynamically mapped to MDSes. The workload on each subtree is monitored, and subtrees are reassigned to MDSes accordingly, in a coarse-grained way.&lt;br /&gt;
&lt;br /&gt;
* Near-POSIX interface: selectively extends the interface while relaxing consistency semantics.&lt;br /&gt;
** e.g. &#039;&#039;readdirplus&#039;&#039; is an extension which optimizes for a common sequence of operations: &#039;&#039;readdir&#039;&#039; followed by multiple &#039;&#039;stat&#039;&#039; calls. This requires brief caching to improve performance, which may let small concurrent changes go unnoticed.&lt;br /&gt;
* Object Storage Devices (OSDs) have some intelligence (unlike GFS), and autonomously distribute the data, rather than being controlled by a master.&lt;br /&gt;
** Uses EBOFS (instead of ext3). Implemented in user space to avoid dealing with kernel issues. Aggressively schedules disk writes.&lt;br /&gt;
** Uses hashing in the distribution process to &#039;&#039;&#039;uniformly&#039;&#039;&#039; distribute data&lt;br /&gt;
** The actual algorithm for distributing data is as follows:&lt;br /&gt;
*** file + offset → hash(object ID) → CRUSH(placement group) → OSD&lt;br /&gt;
** Each client has knowledge of the entire storage network&lt;br /&gt;
** Tracks failure groups (same breaker, switch, etc.), hot data, etc.&lt;br /&gt;
** Number of replicas is changeable on the fly, but the placement group is not&lt;br /&gt;
*** For example, if every client on the planet is accessing the same file, you can scale out for that data.&lt;br /&gt;
** You don&#039;t ask where to go, you just go, which makes this very scalable&lt;br /&gt;
&lt;br /&gt;
Any distributed file system that aims to be scalable needs to cut down on the number of messages floating around (as opposed to the actual data transfer), which is what Ceph aims to do with the CRUSH function. A client or OSD just needs to know the CRUSH algorithm (function) and can then find the location of a file on its own, instead of asking a master server about it. This essentially eliminates the traditional file-allocation-list approach. &lt;br /&gt;
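&lt;br /&gt;
The placement pipeline (file + offset → hash → CRUSH → OSD) can be caricatured as pure local computation. Everything below is a hypothetical stand-in: CRUSH actually walks a weighted hierarchy of failure domains, which is here replaced by simple rendezvous hashing, and all names and constants are made up. The point is only that any client can compute a file&#039;s location locally, with no master lookup:&lt;br /&gt;

```python
import hashlib

# Toy stand-in for Ceph's placement pipeline. CRUSH itself is far more
# elaborate (weighted failure-domain hierarchy); rendezvous hashing is
# used here only to show deterministic, lookup-free placement.

NUM_PGS = 128  # placement groups are fixed, unlike the replica count

def object_id(filename, offset, chunk=64 * 2**20):
    # step 1: file plus offset maps to an object ID (one per stripe)
    return "%s:%d" % (filename, offset // chunk)

def placement_group(oid):
    # step 2: hash(object ID) maps to a placement group
    return int(hashlib.sha256(oid.encode()).hexdigest(), 16) % NUM_PGS

def osds_for(pg, osds, replicas=3):
    # step 3: CRUSH(placement group) maps to OSDs; faked here with
    # rendezvous hashing: rank every OSD by hash(pg, osd), take the top few.
    def rank(osd):
        return hashlib.sha256(("%d:%s" % (pg, osd)).encode()).digest()
    return sorted(osds, key=rank)[:replicas]
```

Because every party runs the same deterministic functions, clients and OSDs agree on placement without exchanging any messages.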
&lt;br /&gt;
* CRUSH is sufficiently advanced to be called magic.&lt;br /&gt;
** O(log n) in the size of the data&lt;br /&gt;
** CPUs are stupidly fast, so the above adds minimal overhead&lt;br /&gt;
*** the network, despite being fast, has latency, etc. &lt;br /&gt;
*** Computation scales much better than communication.&lt;br /&gt;
&lt;br /&gt;
* Storage is composed of variable-length atoms&lt;br /&gt;
&lt;br /&gt;
*A very fun video to learn about ceph and OSD file systems - &lt;br /&gt;
https://www.youtube.com/watch?v=C3lxGuAWEWU&lt;br /&gt;
&lt;br /&gt;
= Class Discussion = &lt;br /&gt;
&lt;br /&gt;
== File Size ==&lt;br /&gt;
In Anil&#039;s opinion, &amp;quot;how does the file system size compare to the server storage size?&amp;quot; is a key parameter that distinguishes the GFS and Ceph designs from the early file systems NFS, AFS, and Plan 9. In the early file system designs, the file system was a fraction of the server&#039;s storage size, whereas in GFS and Ceph the file system can be orders of magnitude larger than any single server. &lt;br /&gt;
&lt;br /&gt;
== Segue on drives and sequential access following GFS section ==&lt;br /&gt;
&lt;br /&gt;
* Structure of GFS does match some other modern systems:&lt;br /&gt;
** Hard drives are like parallel tapes, very suited for streaming.&lt;br /&gt;
** Flash devices are log-structured too, but have an abstracting firmware.&lt;br /&gt;
*** They do erasure in bulk, in the &#039;&#039;&#039;background&#039;&#039;&#039;. &lt;br /&gt;
*** Used to be we needed specialized FS for [http://en.wikipedia.org/wiki/Memory_Technology_Device MTDs] to get better performance; though now we have better micro-controllers in some embedded systems to abstract away the hardware.&lt;br /&gt;
* Architectures that start big, often end up in the smallest things.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lookups vs hashing ==&lt;br /&gt;
One key aspect of the Ceph design is the attempt to replace communication with computation by using the hashing-based mechanism CRUSH. The following line from Anil epitomizes the general approach followed in the field of Computer Science: “If one abstraction does not work, stick another one in”.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_10&amp;diff=19082</id>
		<title>DistOS 2014W Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_10&amp;diff=19082"/>
		<updated>2014-04-24T15:56:17Z</updated>

		<summary type="html">&lt;p&gt;36chambers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==GFS and Ceph (Feb. 4)==&lt;br /&gt;
* [http://research.google.com/archive/gfs-sosp2003.pdf Sanjay Ghemawat et al., &amp;quot;The Google File System&amp;quot; (SOSP 2003)]&lt;br /&gt;
* [http://www.usenix.org/events/osdi06/tech/weil.html Weil et al., Ceph: A Scalable, High-Performance Distributed File System (OSDI 2006)].&lt;br /&gt;
&lt;br /&gt;
== GFS ==&lt;br /&gt;
GFS is a distributed file system designed specifically for Google&#039; needs and they made two assumption while designing GFS:&lt;br /&gt;
&lt;br /&gt;
# Most data is written in the form of appends (writes at the end of a file).&lt;br /&gt;
# Data is read from files in a streaming fashion (large amounts of data read via sequential access).&lt;br /&gt;
&lt;br /&gt;
Because of this, they decided to emphasize performance for sequential access. These two assumptions are also why they chose such a huge chunk size (64 MB). You can easily read large blocks if you get rid of random access. Once data is written, it is rarely written &#039;&#039;&#039;over&#039;&#039;&#039; using random access.&lt;br /&gt;
&lt;br /&gt;
* Very different design because of the workload that it is designed for:&lt;br /&gt;
** Because of the number of small files that have to be indexed for the web, it is no longer practical to have a file system that stores these individually. Too much overhead. Easier to store millions of objects as large files. Punts problem to userspace, incl. record delimitation.&lt;br /&gt;
* Don&#039;t care about latency&lt;br /&gt;
** Surprising considering it&#039;s Google, the company that changed the TCP initial window (IW) standard recommendations for the sake of latency.&lt;br /&gt;
* Mostly seeking (sequentially) through entire file.&lt;br /&gt;
* Paper from 2003, mentions still using 100BASE-T links.&lt;br /&gt;
* Data-heavy, metadata light. Contacting the metadata server is a rare event.&lt;br /&gt;
* Consider hardware failures as normal operating conditions:&lt;br /&gt;
** uses commodity hardware&lt;br /&gt;
** All the replication (!)&lt;br /&gt;
** Data checksumming (few file systems do checksums)&lt;br /&gt;
* Performance degrades for small random access workload; use other filesystem.&lt;br /&gt;
* Path of least resistance to scale, not to do something super CS-smart.&lt;br /&gt;
* Google used to re-index every month, swapping out indexes. Now, it&#039;s much more online. GFS is now just a layer to support a more dynamic layer.&lt;br /&gt;
* The paper seems to lack any mention of security. This FS probably could only exist on a trusted network.&lt;br /&gt;
* Implements interface similar to POSIX, but not the full standard.&lt;br /&gt;
** &#039;&#039;&#039;create, delete, open, close, read, write&#039;&#039;&#039;&lt;br /&gt;
** Unique operations too: &#039;&#039;&#039;snapshot&#039;&#039;&#039; which is low cost file duplication and &#039;&#039;&#039;record append&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== How other filesystems compare to GFS and Ceph ==&lt;br /&gt;
&lt;br /&gt;
* Other File Systems: AFS, NFS, Plan 9, traditional Unix&lt;br /&gt;
&lt;br /&gt;
* Data and metadata are held together.&lt;br /&gt;
** They did not optimize for different access patterns:&lt;br /&gt;
*** Data → big, long transfers&lt;br /&gt;
*** Metadata → small, low latency&lt;br /&gt;
** Can&#039;t scale separately&lt;br /&gt;
&lt;br /&gt;
* Designed for lower latency&lt;br /&gt;
&lt;br /&gt;
* (Mostly) designed for POSIX semantics&lt;br /&gt;
** how the requirements that lead to the ‘standard’ evolved&lt;br /&gt;
&lt;br /&gt;
* Assumed that a file is a fraction of the size of a server&lt;br /&gt;
** eg. files on a Unix system were meant to be text files.&lt;br /&gt;
** Huge files spread over many servers not even in the cards for NFS&lt;br /&gt;
** Meant for small problems, not web-scale&lt;br /&gt;
*** Google has a copy of the publicly accessible internet&lt;br /&gt;
**** Their strategy is to copy the internet to index it&lt;br /&gt;
**** Insane → insane filesystem&lt;br /&gt;
**** One file may span multiple servers&lt;br /&gt;
&lt;br /&gt;
* Even mainframes, scale-up solutions, ultra-reliable systems, with data sets bigger than RAM don&#039;t have the scale of GFS or CEPH.&lt;br /&gt;
&lt;br /&gt;
* Point-to-point access; much less load-balancing, even in AFS&lt;br /&gt;
** One server to service multiple clients.&lt;br /&gt;
** Single point of entry, single point of failure, bottleneck&lt;br /&gt;
&lt;br /&gt;
* Less focus on fault tolerance&lt;br /&gt;
** No notion of data replication.&lt;br /&gt;
&lt;br /&gt;
* Reliability was a property of the host, not the network&lt;br /&gt;
&lt;br /&gt;
==Ceph==&lt;br /&gt;
&lt;br /&gt;
* Ceph is crazy and tries to do everything&lt;br /&gt;
* GFS was very specifically designed to work in a limited scenario, under certain specific conditions, whereas Ceph is a more generic solution for how to build a scalable distributed file system&lt;br /&gt;
&lt;br /&gt;
* Achieves high performance, reliability, and availability through three design features: decoupled data and metadata, dynamically distributed metadata, and reliable autonomic distributed object storage.&lt;br /&gt;
** Decoupled data and metadata: metadata operations (open, close) go to the metadata cluster; clients interact directly with OSDs for IO.&lt;br /&gt;
** Distributed metadata: metadata operations make up a large part of the workload. Ceph distributes this workload across many Metadata Servers (MDSes), which maintain the file hierarchy.&lt;br /&gt;
** Autonomic object storage: OSDs organise amongst themselves, taking advantage of their onboard CPU and memory. Ceph delegates data migration, replication, failure detection, and recovery to the cluster of OSDs.&lt;br /&gt;
&lt;br /&gt;
* Distributed Meta Data&lt;br /&gt;
** Unlike GFS&lt;br /&gt;
** Clusters of MDSes.&lt;br /&gt;
** Uses dynamic subtree partitioning: subtrees of the directory hierarchy are dynamically mapped to MDSes. The workload on each subtree is monitored, and subtrees are reassigned to MDSes accordingly, in a coarse-grained way.&lt;br /&gt;
&lt;br /&gt;
* Near-POSIX interface: selectively extends the interface while relaxing consistency semantics.&lt;br /&gt;
** e.g. &#039;&#039;readdirplus&#039;&#039; is an extension which optimizes for a common sequence of operations: &#039;&#039;readdir&#039;&#039; followed by multiple &#039;&#039;stat&#039;&#039; calls. This requires brief caching to improve performance, which may let small concurrent changes go unnoticed.&lt;br /&gt;
* Object Storage Devices (OSDs) have some intelligence (unlike GFS), and autonomously distribute the data, rather than being controlled by a master.&lt;br /&gt;
** Uses EBOFS (instead of ext3). Implemented in user space to avoid dealing with kernel issues. Aggressively schedules disk writes.&lt;br /&gt;
** Uses hashing in the distribution process to &#039;&#039;&#039;uniformly&#039;&#039;&#039; distribute data&lt;br /&gt;
** The actual algorithm for distributing data is as follows:&lt;br /&gt;
*** file + offset → hash(object ID) → CRUSH(placement group) → OSD&lt;br /&gt;
** Each client has knowledge of the entire storage network&lt;br /&gt;
** Tracks failure groups (same breaker, switch, etc.), hot data, etc.&lt;br /&gt;
** Number of replicas is changeable on the fly, but the placement group is not&lt;br /&gt;
*** For example, if every client on the planet is accessing the same file, you can scale out for that data.&lt;br /&gt;
** You don&#039;t ask where to go, you just go, which makes this very scalable&lt;br /&gt;
&lt;br /&gt;
Any distributed file system that aims to be scalable needs to cut down on the number of messages floating around (as opposed to the actual data transfer), which is what Ceph aims to do with the CRUSH function. A client or OSD just needs to know the CRUSH algorithm (function) and can then find the location of a file on its own, instead of asking a master server about it. This essentially eliminates the traditional file-allocation-list approach. &lt;br /&gt;
&lt;br /&gt;
* CRUSH is sufficiently advanced to be called magic.&lt;br /&gt;
** O(log n) in the size of the data&lt;br /&gt;
** CPUs are stupidly fast, so the above adds minimal overhead&lt;br /&gt;
*** the network, despite being fast, has latency, etc. &lt;br /&gt;
*** Computation scales much better than communication.&lt;br /&gt;
&lt;br /&gt;
* Storage is composed of variable-length atoms&lt;br /&gt;
&lt;br /&gt;
*A very fun video to learn about ceph and OSD file systems - &lt;br /&gt;
https://www.youtube.com/watch?v=C3lxGuAWEWU&lt;br /&gt;
&lt;br /&gt;
= Class Discussion = &lt;br /&gt;
&lt;br /&gt;
== File Size ==&lt;br /&gt;
In Anil&#039;s opinion, &amp;quot;how does the file system size compare to the server storage size?&amp;quot; is a key parameter that distinguishes the GFS and Ceph designs from the early file systems NFS, AFS, and Plan 9. In the early file system designs, the file system was a fraction of the server&#039;s storage size, whereas in GFS and Ceph the file system can be orders of magnitude larger than any single server. &lt;br /&gt;
&lt;br /&gt;
== Segue on drives and sequential access following GFS section ==&lt;br /&gt;
&lt;br /&gt;
* Structure of GFS does match some other modern systems:&lt;br /&gt;
** Hard drives are like parallel tapes, very suited for streaming.&lt;br /&gt;
** Flash devices are log-structured too, but have an abstracting firmware.&lt;br /&gt;
*** They do erasure in bulk, in the &#039;&#039;&#039;background&#039;&#039;&#039;. &lt;br /&gt;
*** Used to be we needed specialized FS for [http://en.wikipedia.org/wiki/Memory_Technology_Device MTDs] to get better performance; though now we have better micro-controllers in some embedded systems to abstract away the hardware.&lt;br /&gt;
* Architectures that start big, often end up in the smallest things.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lookups vs hashing ==&lt;br /&gt;
One key aspect of the Ceph design is the attempt to replace communication with computation by using the hashing-based mechanism CRUSH. The following line from Anil epitomizes the general approach followed in the field of Computer Science: “If one abstraction does not work, stick another one in”.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19080</id>
		<title>DistOS 2014W Lecture 20</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19080"/>
		<updated>2014-04-24T15:46:35Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Comet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cassandra ==&lt;br /&gt;
&lt;br /&gt;
Cassandra is essentially running a BigTable interface on top of a Dynamo infrastructure.  BigTable uses GFS&#039; built-in replication and Chubby for locking.  Cassandra uses gossip algorithms (similar to Dynamo): [http://dl.acm.org/citation.cfm?id=1529983 Scuttlebutt].  &lt;br /&gt;
&lt;br /&gt;
=== A brief look at Open Source ===&lt;br /&gt;
&lt;br /&gt;
Initially, Anil talked about Google&#039;s versus Facebook&#039;s approaches to technology.&lt;br /&gt;
* Google has been good at publishing papers on their distributed system structure. &lt;br /&gt;
* Google developed its technology internally and used it for competitive advantage.&lt;br /&gt;
* People try to implement Google’s methods in the open source arena.&lt;br /&gt;
* Google had proprietary technology but Facebook wanted open design.&lt;br /&gt;
* Facebook developed its technology in open source manner. They needed to create an open source community to keep up. &lt;br /&gt;
*Facebook managed to be very good with contributing to open source projects that they were a part of, and stay alive in the open source projects that they start. Google works internally and pitches their code &amp;quot;over the fence.&amp;quot;&lt;br /&gt;
* He talked a little bit about licences. With GPLv3 you have to provide source code with the binary. With the AGPL, source code must also be provided when the software is offered as a service over a network.&lt;br /&gt;
*GFS optimized for data-analyst cluster.&lt;br /&gt;
&lt;br /&gt;
While discussing HBase versus Cassandra, we discussed why Apache, as a community, supports two projects with the same notion. For any tool in computer science, particularly software tools, it&#039;s important to have more than one good implementation. The only time that doesn&#039;t happen is because of market interference; an example of this was Microsoft Word, which took out most of the competition. Different implementations commonly arise because programmers disagree on concepts and designs. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hadoop is a set of technologies that represent the open source equivalent of&lt;br /&gt;
Google&#039;s infrastructure&lt;br /&gt;
* Cassandra -&amp;gt; ???&lt;br /&gt;
* HBase -&amp;gt; BigTable&lt;br /&gt;
* HDFS -&amp;gt; GFS&lt;br /&gt;
* Zookeeper -&amp;gt; Chubby&lt;br /&gt;
&lt;br /&gt;
=== Back to Cassandra ===&lt;br /&gt;
&lt;br /&gt;
* Cassandra is basically what you get when you take a key-value store system like Dynamo and extend it to look like BigTable.&lt;br /&gt;
* Not just a key-value store: it is a multi-dimensional map. You can look up different columns, etc. The data is more structured than in a key-value store.&lt;br /&gt;
* In a key-value store, you can only look up the key. Cassandra is much richer than this.&lt;br /&gt;
* A fundamental difference in Cassandra is that adding columns is trivial. &lt;br /&gt;
&lt;br /&gt;
Bigtable vs. Cassandra:&lt;br /&gt;
* Bigtable and Cassandra expose similar APIs.&lt;br /&gt;
* Cassandra seems to be lighter weight.&lt;br /&gt;
* Bigtable depends on GFS; Cassandra depends on the server&#039;s file system. Anil feels a Cassandra cluster is easier to set up. &lt;br /&gt;
* Bigtable is designed for stream-oriented batch processing; Cassandra is for handling online/realtime/high-speed workloads.&lt;br /&gt;
&lt;br /&gt;
Schema design is explained via the inbox example, but it does not give a clear picture of what the table will look like. Anil thinks they store a lot of data with the messages, which makes the table crappy.&lt;br /&gt;
	&lt;br /&gt;
Apache Zookeeper is used for distributed configuration. It will also bootstrap and configure a new node. It is similar to Chubby. Zookeeper is for node level information. The Gossip protocol is more about key partitioning information and distributing that information amongst nodes. &lt;br /&gt;
&lt;br /&gt;
Cassandra uses a modified version of the Accrual Failure Detector. The idea of accrual failure detection is that the failure-detection module emits a value representing a suspicion level for each monitored node. This value, phi, is expressed on a scale that is dynamically adjusted to reflect network and load conditions at the monitored nodes.&lt;br /&gt;
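&lt;br /&gt;
A minimal sketch of the phi calculation, under the simplifying assumption of exponentially distributed heartbeat intervals (the real detector estimates the distribution from a sliding window of observed intervals, which is what makes the scale adapt to network and load conditions; the function name and parameters are hypothetical):&lt;br /&gt;

```python
import math

def phi(silence, mean_interval):
    """Toy phi accrual suspicion value (simplified sketch).

    Assuming exponential inter-arrival times with the given mean, the
    probability that the next heartbeat arrives later than `silence`
    seconds is exp(-silence / mean_interval). Phi is -log10 of that
    probability, so suspicion grows smoothly the longer a node is quiet,
    instead of flipping a binary alive/dead switch.
    """
    p_later = math.exp(-silence / mean_interval)
    return -math.log10(p_later)
```

A monitoring node would compare phi against a threshold chosen for the desired false-positive rate, rather than using a fixed timeout.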
&lt;br /&gt;
Files are written to disk in a sequential way and are never mutated. This way, reading a file does not require locks. Garbage collection takes care of deletion.&lt;br /&gt;
&lt;br /&gt;
Cassandra writes in an immutable way, like functional programming. There is no assignment in functional programming; it tries to eliminate side effects. Data is just bound: you associate a name with a value. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Cassandra - &lt;br /&gt;
* Uses consistent hashing (like most DHTs)&lt;br /&gt;
* Lighter weight &lt;br /&gt;
* Almost all of the readings are part of Apache&lt;br /&gt;
* More designed for low latency online updates and more optimized for reads. &lt;br /&gt;
* Once they write to disk they only read back&lt;br /&gt;
* Scalable multi master database with no single point of failure&lt;br /&gt;
* Reason for not giving out the complete detail on the table schema&lt;br /&gt;
* Probably not just inbox search&lt;br /&gt;
* All data in one row of a table &lt;br /&gt;
* Its not a key-value store. Big blob of data. &lt;br /&gt;
* Gossip based protocol - Scuttlebutt. Every node is aware of every other.&lt;br /&gt;
* Fixed circular ring &lt;br /&gt;
* Consistency issue not addressed at all. Does writes in an immutable way. Never change them. &lt;br /&gt;
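&lt;br /&gt;
The consistent hashing mentioned above can be sketched as follows. This is a hypothetical toy, not Cassandra&#039;s actual ring (which also handles token assignment, virtual nodes, and replication along the ring), but it shows why each node owns an arc of the key space and why adding a node only relocates nearby keys:&lt;br /&gt;

```python
import bisect
import hashlib

class Ring:
    """Toy consistent-hash ring (illustrative sketch only).

    A key is stored on the first node whose token lies clockwise of the
    key's hash; adding or removing one node only moves the keys on the
    adjacent arc, not the whole keyspace.
    """

    def __init__(self, nodes):
        # each node gets a fixed token (position) on the ring
        self._tokens = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        toks = [t for t, _ in self._tokens]
        i = bisect.bisect(toks, self._h(key)) % len(self._tokens)
        return self._tokens[i][1]
```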
&lt;br /&gt;
Older style network protocol - token rings&lt;br /&gt;
What sort of computational systems avoid changing data?&lt;br /&gt;
Systems talking about implementing functional like semantics.&lt;br /&gt;
&lt;br /&gt;
== Comet ==&lt;br /&gt;
&lt;br /&gt;
The major idea behind Comet is triggers/callbacks.  There is an extensive literature in extensible operating systems, basically adding code to the operating system to better suit my application.  &amp;quot;Generally, extensible systems suck.&amp;quot; -[[User:Soma]] This was popular before operating systems were open source.&lt;br /&gt;
&lt;br /&gt;
[https://www.usenix.org/conference/osdi10/comet-active-distributed-key-value-store The presentation video of Comet]&lt;br /&gt;
&lt;br /&gt;
Comet seeks to greatly expand the application space for key-value storage systems through application-specific customization. A Comet storage object is a &amp;lt;key,value&amp;gt; pair. Each Comet node stores a collection of active storage objects (ASOs) that consist of a key, a value, and a set of handlers. Comet handlers run as a result of timers or storage operations, such as get or put, allowing an ASO to take dynamic, application-specific actions to customize its behaviour. Handlers are written in a simple sandboxed extension language, providing safety and isolation. An ASO can modify its environment, monitor its execution, and make dynamic decisions about its state.&lt;br /&gt;
&lt;br /&gt;
The researchers try to provide the ability to extend a DHT without requiring a substantial investment of effort to modify its implementation. They implement isolation and safety by restricting system access, restricting resource consumption, and restricting within-Comet communication.&lt;br /&gt;
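&lt;br /&gt;
A toy sketch of the trigger/callback idea, with all names hypothetical (real Comet handlers are written in sandboxed Lua, not Python, and run with the restrictions described above): each stored object carries handlers that fire when a storage operation touches it:&lt;br /&gt;

```python
# Toy active storage object in the spirit of Comet's ASOs.

class ASO:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.handlers = {}  # event name ("get", "put", ...) to callback

    def on(self, event, fn):
        self.handlers[event] = fn

class Store:
    def __init__(self):
        self._objs = {}

    def put(self, key, value):
        obj = self._objs.setdefault(key, ASO(key, value))
        obj.value = value
        if "put" in obj.handlers:
            obj.handlers["put"](obj)  # trigger fires on the storage op
        return obj

    def get(self, key):
        obj = self._objs[key]
        if "get" in obj.handlers:
            obj.handlers["get"](obj)  # e.g. count accesses, adjust state
        return obj.value
```

An application could, for instance, attach a get handler that counts accesses or expires the value, customizing the DHT&#039;s behaviour per object without changing the DHT itself.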
&lt;br /&gt;
* Provides callbacks (akin to database triggers)&lt;br /&gt;
* Provides DHT platform that is extensible at the application level&lt;br /&gt;
* Uses Lua&lt;br /&gt;
* Provided extensibility in an untrusted environment. Dynamo, by contrast, was extensible but only in a trusted environment.&lt;br /&gt;
* Why do we care? We don&#039;t really. Why would you want this extensibility? You wouldn&#039;t. It isn&#039;t worth the cost. Current systems currently have an allowance for tuneability. This extensibility only has use under undesirable circumstances.&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* if someone wants to understand the consistent hashing in detail, here is a blog which explains it really well, this blog has other great posts in the field of distributed system as well -&lt;br /&gt;
http://loveforprogramming.quora.com/Distributed-Systems-Part-1-A-peek-into-consistent-hashing *&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19079</id>
		<title>DistOS 2014W Lecture 20</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19079"/>
		<updated>2014-04-24T15:45:10Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* A brief look at Open Source */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cassandra ==&lt;br /&gt;
&lt;br /&gt;
Cassandra is essentially running a BigTable interface on top of a Dynamo infrastructure.  BigTable uses GFS&#039; built-in replication and Chubby for locking.  Cassandra uses gossip algorithms (similar to Dynamo): [http://dl.acm.org/citation.cfm?id=1529983 Scuttlebutt].  &lt;br /&gt;
&lt;br /&gt;
=== A brief look at Open Source ===&lt;br /&gt;
&lt;br /&gt;
Initially, Anil talked about Google&#039;s versus Facebook&#039;s approaches to technology.&lt;br /&gt;
* Google has been good at publishing papers on their distributed system structure. &lt;br /&gt;
* Google developed its technology internally and used it for competitive advantage.&lt;br /&gt;
* People try to implement Google’s methods in the open source arena.&lt;br /&gt;
* Google had proprietary technology but Facebook wanted open design.&lt;br /&gt;
* Facebook developed its technology in open source manner. They needed to create an open source community to keep up. &lt;br /&gt;
*Facebook managed to be very good with contributing to open source projects that they were a part of, and stay alive in the open source projects that they start. Google works internally and pitches their code &amp;quot;over the fence.&amp;quot;&lt;br /&gt;
* He talked a little bit about licences. With GPLv3 you have to provide source code with the binary. With the AGPL, source code must also be provided when the software is offered as a service over a network.&lt;br /&gt;
*GFS optimized for data-analyst cluster.&lt;br /&gt;
&lt;br /&gt;
While discussing HBase versus Cassandra, we discussed why Apache, as a community, supports two projects with the same notion. For any tool in computer science, particularly software tools, it&#039;s important to have more than one good implementation. The only time that doesn&#039;t happen is because of market interference; an example of this was Microsoft Word, which took out most of the competition. Different implementations commonly arise because programmers disagree on concepts and designs. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hadoop is a set of technologies that represent the open source equivalent of&lt;br /&gt;
Google&#039;s infrastructure&lt;br /&gt;
* Cassandra -&amp;gt; ???&lt;br /&gt;
* HBase -&amp;gt; BigTable&lt;br /&gt;
* HDFS -&amp;gt; GFS&lt;br /&gt;
* Zookeeper -&amp;gt; Chubby&lt;br /&gt;
&lt;br /&gt;
=== Back to Cassandra ===&lt;br /&gt;
&lt;br /&gt;
* Cassandra is basically what you get when you take a key-value store system like Dynamo and extend it to look like BigTable.&lt;br /&gt;
* Not just a key-value store: it is a multi-dimensional map. You can look up different columns, etc. The data is more structured than in a key-value store.&lt;br /&gt;
* In a key-value store, you can only look up the key. Cassandra is much richer than this.&lt;br /&gt;
* A fundamental difference in Cassandra is that adding columns is trivial. &lt;br /&gt;
&lt;br /&gt;
Bigtable vs. Cassandra:&lt;br /&gt;
* Bigtable and Cassandra expose similar APIs.&lt;br /&gt;
* Cassandra seems to be lighter weight.&lt;br /&gt;
* Bigtable depends on GFS; Cassandra depends on the server&#039;s file system. Anil feels a Cassandra cluster is easier to set up. &lt;br /&gt;
* Bigtable is designed for stream-oriented batch processing; Cassandra is for handling online/realtime/high-speed workloads.&lt;br /&gt;
&lt;br /&gt;
Schema design is explained via the inbox example, but it does not give a clear picture of what the table will look like. Anil thinks they store a lot of data with the messages, which makes the table crappy.&lt;br /&gt;
	&lt;br /&gt;
Apache Zookeeper is used for distributed configuration. It will also bootstrap and configure a new node. It is similar to Chubby. Zookeeper is for node level information. The Gossip protocol is more about key partitioning information and distributing that information amongst nodes. &lt;br /&gt;
&lt;br /&gt;
Cassandra uses a modified version of the Accrual Failure Detector. The idea of accrual failure detection is that the failure-detection module emits a value representing a suspicion level for each monitored node. This value, phi, is expressed on a scale that is dynamically adjusted to reflect network and load conditions at the monitored nodes.&lt;br /&gt;
&lt;br /&gt;
Files are written to disk in a sequential way and are never mutated. This way, reading a file does not require locks. Garbage collection takes care of deletion.&lt;br /&gt;
&lt;br /&gt;
Cassandra writes in an immutable way, like functional programming. There is no assignment in functional programming; it tries to eliminate side effects. Data is just bound: you associate a name with a value. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Cassandra - &lt;br /&gt;
* Uses consistent hashing (like most DHTs)&lt;br /&gt;
* Lighter weight &lt;br /&gt;
* Almost all of the readings are part of Apache&lt;br /&gt;
* Designed more for low-latency online updates, and more optimized for reads. &lt;br /&gt;
* Once they write to disk, they only read back&lt;br /&gt;
* Scalable multi-master database with no single point of failure&lt;br /&gt;
* Reason for not giving out the complete detail on the table schema&lt;br /&gt;
* Probably not just inbox search&lt;br /&gt;
* All data in one row of a table &lt;br /&gt;
* It&#039;s not just a key-value store; a row holds a big blob of data. &lt;br /&gt;
* Gossip-based protocol - Scuttlebutt. Every node is aware of every other.&lt;br /&gt;
* Fixed circular ring &lt;br /&gt;
* The consistency issue is not addressed at all. Writes are done in an immutable way and never changed. &lt;br /&gt;
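&lt;br /&gt;
The consistent hashing in the first bullet can be sketched as a minimal hash ring (a generic illustration, not Cassandra&#039;s actual partitioner; the names are made up):&lt;br /&gt;

```python
import bisect
import hashlib

# Minimal consistent-hash ring (a generic sketch, not Cassandra's actual
# partitioner): nodes and keys hash onto one circle, and a key belongs to
# the first node clockwise from its position, wrapping at the end.
def ring_hash(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((ring_hash(n), n) for n in nodes)

    def owner(self, key):
        i = bisect.bisect(self.points, (ring_hash(key), ""))
        return self.points[i % len(self.points)][1]  # wrap around the circle

ring = Ring(["node-a", "node-b", "node-c"])
```

The point of the design is that adding or removing one node only remaps the keys in that node&#039;s arc; every other key keeps its owner.&lt;br /&gt;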
&lt;br /&gt;
Older style network protocol - token rings&lt;br /&gt;
What sort of computational systems avoid changing data?&lt;br /&gt;
Systems talking about implementing functional like semantics.&lt;br /&gt;
&lt;br /&gt;
== Comet ==&lt;br /&gt;
&lt;br /&gt;
The major idea behind Comet is triggers/callbacks.  There is an extensive literature in extensible operating systems, basically adding code to the operating system to better suit my application.  &amp;quot;Generally, extensible systems suck.&amp;quot; -[[User:Soma]] This was popular before operating systems were open source.&lt;br /&gt;
&lt;br /&gt;
[https://www.usenix.org/conference/osdi10/comet-active-distributed-key-value-store The presentation video of Comet]&lt;br /&gt;
&lt;br /&gt;
Comet seeks to greatly expand the application space for key-value storage systems through application-specific customization. A Comet storage object is a &amp;lt;key,value&amp;gt; pair. Each Comet node stores a collection of active storage objects (ASOs) that consist of a key, a value, and a set of handlers. Comet handlers run as a result of timers or storage operations, such as get or put, allowing an ASO to take dynamic, application-specific actions to customize its behaviour. Handlers are written in a simple sandboxed extension language, providing safety and isolation. An ASO can modify its environment, monitor its execution, and make dynamic decisions about its state.&lt;br /&gt;
&lt;br /&gt;
The researchers try to provide the ability to extend a DHT without requiring a substantial investment of effort to modify its implementation. They implement isolation and safety by restricting system access, restricting resource consumption, and restricting within-Comet communication.&lt;br /&gt;
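&lt;br /&gt;
The trigger idea can be illustrated with a hypothetical Python analogue of active storage objects (Comet&#039;s real handlers are sandboxed Lua; every name here is made up):&lt;br /&gt;

```python
# Hypothetical Python analogue of Comet's active storage objects (Comet's
# real handlers are sandboxed Lua; every name here is made up): each
# stored object may carry handlers that fire on get/put, letting the
# object customize its own behaviour.
class ActiveObject:
    def __init__(self, value, on_get=None, on_put=None):
        self.value = value
        self.on_get = on_get
        self.on_put = on_put

class ActiveStore:
    def __init__(self):
        self.objects = {}

    def put(self, key, value, **handlers):
        obj = self.objects.get(key)
        if obj is not None and obj.on_put:
            value = obj.on_put(obj.value, value)  # trigger may transform writes
        self.objects[key] = ActiveObject(value, **handlers)

    def get(self, key):
        obj = self.objects[key]
        if obj.on_get:
            obj.on_get(obj.value)  # trigger observes the read
        return obj.value

aso_store = ActiveStore()
reads = []
aso_store.put("watched", 1, on_get=lambda v: reads.append(v))
```

Here the handlers are trusted Python callables; Comet&#039;s contribution is running such handlers safely for untrusted applications, which is what the sandboxing and resource restrictions above are for.&lt;br /&gt;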
&lt;br /&gt;
* Provides callbacks (akin to database triggers)&lt;br /&gt;
* Provides DHT platform that is extensible at the application level&lt;br /&gt;
* Uses Lua&lt;br /&gt;
* Provided extensibility in an untrusted environment. Dynamo, by contrast, was extensible but only in a trusted environment.&lt;br /&gt;
* Why do we care? We don&#039;t, really. Why would you want this extensibility? You wouldn&#039;t; it isn&#039;t worth the cost. Current systems already allow for tunability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* If someone wants to understand consistent hashing in detail, here is a blog post that explains it really well; the blog has other great posts on distributed systems as well:&lt;br /&gt;
http://loveforprogramming.quora.com/Distributed-Systems-Part-1-A-peek-into-consistent-hashing&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19078</id>
		<title>DistOS 2014W Lecture 20</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19078"/>
		<updated>2014-04-24T15:43:04Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* A brief look at Open Source */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cassandra ==&lt;br /&gt;
&lt;br /&gt;
Cassandra is essentially running a BigTable interface on top of a Dynamo infrastructure.  BigTable uses GFS&#039; built-in replication and Chubby for locking.  Cassandra uses gossip algorithms (similar to Dynamo): [http://dl.acm.org/citation.cfm?id=1529983 Scuttlebutt].  &lt;br /&gt;
&lt;br /&gt;
=== A brief look at Open Source ===&lt;br /&gt;
&lt;br /&gt;
Initially, Anil talked about Google&#039;s versus Facebook&#039;s approaches to technology.&lt;br /&gt;
* Google has been good at publishing papers on their distributed system structure. &lt;br /&gt;
* Google developed its technology internally and used it for competitive advantage.&lt;br /&gt;
* People try to implement Google’s methods in the open source arena.&lt;br /&gt;
* Google had proprietary technology but Facebook wanted open design.&lt;br /&gt;
* Facebook developed its technology in an open source manner. They needed to create an open source community to keep up. &lt;br /&gt;
* Facebook managed to be very good at contributing to the open source projects they were part of, and at staying active in the open source projects they start. Google works internally and pitches their code &amp;quot;over the fence.&amp;quot;&lt;br /&gt;
* He talked a little about licences. With GPLv3, you have to provide source code with the binary; under the AGPL, source must also be provided when the software is offered as a network service.&lt;br /&gt;
* GFS is optimized for data-analysis clusters.&lt;br /&gt;
&lt;br /&gt;
While discussing HBase versus Cassandra, we asked why two projects with the same notion are supported: Apache works as a community. For any tool in computer science, particularly software tools, it&#039;s important to have more than one good implementation. The only time this doesn&#039;t happen is because of market interference; an example of this was Microsoft Word, which took out most of the competition. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hadoop is a set of technologies that represent the open source equivalent of&lt;br /&gt;
Google&#039;s infrastructure&lt;br /&gt;
* Cassandra -&amp;gt; ???&lt;br /&gt;
* HBase -&amp;gt; BigTable&lt;br /&gt;
* HDFS -&amp;gt; GFS&lt;br /&gt;
* Zookeeper -&amp;gt; Chubby&lt;br /&gt;
&lt;br /&gt;
=== Back to Cassandra ===&lt;br /&gt;
&lt;br /&gt;
* Cassandra basically takes a key-value store system like Dynamo and extends it to look like BigTable.&lt;br /&gt;
* Not just a key value store. It is a multi dimensional map. You can look up  different columns, etc. The data is more structured than a Key-Value store.&lt;br /&gt;
* In a key value store, you can only look up the key. Cassandra is much richer  than this.&lt;br /&gt;
* A fundamental difference in Cassandra is that adding columns is trivial. &lt;br /&gt;
&lt;br /&gt;
Bigtable vs. Cassandra:&lt;br /&gt;
* Bigtable and Cassandra expose similar APIs.&lt;br /&gt;
* Cassandra seems to be lighter weight.&lt;br /&gt;
* Bigtable depends on GFS; Cassandra depends on the server&#039;s local file system. Anil feels a Cassandra cluster is easy to set up. &lt;br /&gt;
* Bigtable is designed for stream-oriented batch processing; Cassandra is for handling online/realtime/high-speed workloads.&lt;br /&gt;
&lt;br /&gt;
Schema design is explained through the inbox-search example, but the paper does not make clear what the table will actually look like. Anil thinks they store a lot of data along with the messages, which makes for a messy table.&lt;br /&gt;
&lt;br /&gt;
Apache Zookeeper is used for distributed configuration. It will also bootstrap and configure a new node. It is similar to Chubby. Zookeeper is for node-level information. The Gossip protocol is more about key partitioning information and distributing that information amongst nodes. &lt;br /&gt;
&lt;br /&gt;
Cassandra uses a modified version of the Accrual Failure Detector. The idea of accrual failure detection is that the failure-detection module emits a value representing a suspicion level for each monitored node. The idea is to express this value, phi, on a scale that is dynamically adjusted to reflect network and load conditions at the monitored nodes.&lt;br /&gt;
&lt;br /&gt;
Files are written to disk sequentially and are never mutated. This way, reading a file does not require locks. Garbage collection takes care of deletion.&lt;br /&gt;
&lt;br /&gt;
Cassandra writes in an immutable way, like functional programming. There is no assignment in functional programming; it tries to eliminate side effects. Data is simply bound: you associate a name with a value. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Cassandra - &lt;br /&gt;
* Uses consistent hashing (like most DHTs)&lt;br /&gt;
* Lighter weight &lt;br /&gt;
* Almost all of the readings are part of Apache&lt;br /&gt;
* Designed more for low-latency online updates, and more optimized for reads. &lt;br /&gt;
* Once they write to disk, they only read back&lt;br /&gt;
* Scalable multi-master database with no single point of failure&lt;br /&gt;
* Reason for not giving out the complete detail on the table schema&lt;br /&gt;
* Probably not just inbox search&lt;br /&gt;
* All data in one row of a table &lt;br /&gt;
* It&#039;s not just a key-value store; a row holds a big blob of data. &lt;br /&gt;
* Gossip-based protocol - Scuttlebutt. Every node is aware of every other.&lt;br /&gt;
* Fixed circular ring &lt;br /&gt;
* The consistency issue is not addressed at all. Writes are done in an immutable way and never changed. &lt;br /&gt;
&lt;br /&gt;
Older style network protocol - token rings&lt;br /&gt;
What sort of computational systems avoid changing data?&lt;br /&gt;
Systems talking about implementing functional like semantics.&lt;br /&gt;
&lt;br /&gt;
== Comet ==&lt;br /&gt;
&lt;br /&gt;
The major idea behind Comet is triggers/callbacks.  There is an extensive literature in extensible operating systems, basically adding code to the operating system to better suit my application.  &amp;quot;Generally, extensible systems suck.&amp;quot; -[[User:Soma]] This was popular before operating systems were open source.&lt;br /&gt;
&lt;br /&gt;
[https://www.usenix.org/conference/osdi10/comet-active-distributed-key-value-store The presentation video of Comet]&lt;br /&gt;
&lt;br /&gt;
Comet seeks to greatly expand the application space for key-value storage systems through application-specific customization. A Comet storage object is a &amp;lt;key,value&amp;gt; pair. Each Comet node stores a collection of active storage objects (ASOs) that consist of a key, a value, and a set of handlers. Comet handlers run as a result of timers or storage operations, such as get or put, allowing an ASO to take dynamic, application-specific actions to customize its behaviour. Handlers are written in a simple sandboxed extension language, providing safety and isolation. An ASO can modify its environment, monitor its execution, and make dynamic decisions about its state.&lt;br /&gt;
&lt;br /&gt;
The researchers try to provide the ability to extend a DHT without requiring a substantial investment of effort to modify its implementation. They implement isolation and safety by restricting system access, restricting resource consumption, and restricting within-Comet communication.&lt;br /&gt;
&lt;br /&gt;
* Provides callbacks (akin to database triggers)&lt;br /&gt;
* Provides a DHT platform that is extensible at the application level&lt;br /&gt;
* Uses Lua&lt;br /&gt;
* Provided extensibility in an untrusted environment. Dynamo, by contrast, was extensible but only in a trusted environment.&lt;br /&gt;
* Why do we care? We don&#039;t, really. Why would you want this extensibility? You wouldn&#039;t; it isn&#039;t worth the cost. Current systems already allow for tunability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* If someone wants to understand consistent hashing in detail, here is a blog post that explains it really well; the blog has other great posts on distributed systems as well:&lt;br /&gt;
http://loveforprogramming.quora.com/Distributed-Systems-Part-1-A-peek-into-consistent-hashing&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19077</id>
		<title>DistOS 2014W Lecture 20</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19077"/>
		<updated>2014-04-24T15:40:18Z</updated>

		<summary type="html">&lt;p&gt;36chambers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cassandra ==&lt;br /&gt;
&lt;br /&gt;
Cassandra is essentially running a BigTable interface on top of a Dynamo infrastructure.  BigTable uses GFS&#039; built-in replication and Chubby for locking.  Cassandra uses gossip algorithms (similar to Dynamo): [http://dl.acm.org/citation.cfm?id=1529983 Scuttlebutt].  &lt;br /&gt;
&lt;br /&gt;
=== A brief look at Open Source ===&lt;br /&gt;
&lt;br /&gt;
Initially, Anil talked about Google&#039;s versus Facebook&#039;s approaches to technology.&lt;br /&gt;
* Google has been good at publishing papers on their distributed system structure. &lt;br /&gt;
* Google developed its technology internally and used it for competitive advantage.&lt;br /&gt;
* People try to implement Google’s methods in the open source arena.&lt;br /&gt;
* Google had proprietary technology but Facebook wanted open design.&lt;br /&gt;
* Facebook developed its technology in an open source manner. They needed to create an open source community to keep up. &lt;br /&gt;
* Facebook managed to be very good at contributing to the open source projects they were part of, and at staying active in the open source projects they start. Google works internally and pitches their code &amp;quot;over the fence.&amp;quot;&lt;br /&gt;
* He talked a little about licences. With GPLv3, you have to provide source code with the binary; under the AGPL, source must also be provided when the software is offered as a network service.&lt;br /&gt;
* GFS is optimized for data-analysis clusters.&lt;br /&gt;
&lt;br /&gt;
While discussing HBase versus Cassandra, we asked why two projects with the same notion are supported: Apache works as a community. For any tool in CS, particularly software tools, it&#039;s actually important to have more than one good implementation. The only time this doesn&#039;t happen is because of market realities. &lt;br /&gt;
&lt;br /&gt;
Hadoop is a set of technologies that represent the open source equivalent of&lt;br /&gt;
Google&#039;s infrastructure&lt;br /&gt;
* Cassandra -&amp;gt; ???&lt;br /&gt;
* HBase -&amp;gt; BigTable&lt;br /&gt;
* HDFS -&amp;gt; GFS&lt;br /&gt;
* Zookeeper -&amp;gt; Chubby&lt;br /&gt;
&lt;br /&gt;
=== Back to Cassandra ===&lt;br /&gt;
&lt;br /&gt;
* Cassandra basically takes a key-value store system like Dynamo and extends it to look like BigTable.&lt;br /&gt;
* Not just a key value store. It is a multi dimensional map. You can look up  different columns, etc. The data is more structured than a Key-Value store.&lt;br /&gt;
* In a key value store, you can only look up the key. Cassandra is much richer  than this.&lt;br /&gt;
* A fundamental difference in Cassandra is that adding columns is trivial. &lt;br /&gt;
&lt;br /&gt;
Bigtable vs. Cassandra:&lt;br /&gt;
* Bigtable and Cassandra expose similar APIs.&lt;br /&gt;
* Cassandra seems to be lighter weight.&lt;br /&gt;
* Bigtable depends on GFS; Cassandra depends on the server&#039;s local file system. Anil feels a Cassandra cluster is easy to set up. &lt;br /&gt;
* Bigtable is designed for stream-oriented batch processing; Cassandra is for handling online/realtime/high-speed workloads.&lt;br /&gt;
&lt;br /&gt;
Schema design is explained through the inbox-search example, but the paper does not make clear what the table will actually look like. Anil thinks they store a lot of data along with the messages, which makes for a messy table.&lt;br /&gt;
&lt;br /&gt;
Apache Zookeeper is used for distributed configuration. It will also bootstrap and configure a new node. It is similar to Chubby. Zookeeper is for node-level information. The Gossip protocol is more about key partitioning information and distributing that information amongst nodes. &lt;br /&gt;
&lt;br /&gt;
Cassandra uses a modified version of the Accrual Failure Detector. The idea of accrual failure detection is that the failure-detection module emits a value representing a suspicion level for each monitored node. The idea is to express this value, phi, on a scale that is dynamically adjusted to reflect network and load conditions at the monitored nodes.&lt;br /&gt;
&lt;br /&gt;
Files are written to disk sequentially and are never mutated. This way, reading a file does not require locks. Garbage collection takes care of deletion.&lt;br /&gt;
&lt;br /&gt;
Cassandra writes in an immutable way, like functional programming. There is no assignment in functional programming; it tries to eliminate side effects. Data is simply bound: you associate a name with a value. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Cassandra - &lt;br /&gt;
* Uses consistent hashing (like most DHTs)&lt;br /&gt;
* Lighter weight &lt;br /&gt;
* Almost all of the readings are part of Apache&lt;br /&gt;
* Designed more for low-latency online updates, and more optimized for reads. &lt;br /&gt;
* Once they write to disk, they only read back&lt;br /&gt;
* Scalable multi-master database with no single point of failure&lt;br /&gt;
* Reason for not giving out the complete detail on the table schema&lt;br /&gt;
* Probably not just inbox search&lt;br /&gt;
* All data in one row of a table &lt;br /&gt;
* It&#039;s not just a key-value store; a row holds a big blob of data. &lt;br /&gt;
* Gossip-based protocol - Scuttlebutt. Every node is aware of every other.&lt;br /&gt;
* Fixed circular ring &lt;br /&gt;
* The consistency issue is not addressed at all. Writes are done in an immutable way and never changed. &lt;br /&gt;
&lt;br /&gt;
Older style network protocol - token rings&lt;br /&gt;
What sort of computational systems avoid changing data?&lt;br /&gt;
Systems talking about implementing functional like semantics.&lt;br /&gt;
&lt;br /&gt;
== Comet ==&lt;br /&gt;
&lt;br /&gt;
The major idea behind Comet is triggers/callbacks.  There is an extensive literature in extensible operating systems, basically adding code to the operating system to better suit my application.  &amp;quot;Generally, extensible systems suck.&amp;quot; -[[User:Soma]] This was popular before operating systems were open source.&lt;br /&gt;
&lt;br /&gt;
[https://www.usenix.org/conference/osdi10/comet-active-distributed-key-value-store The presentation video of Comet]&lt;br /&gt;
&lt;br /&gt;
Comet seeks to greatly expand the application space for key-value storage systems through application-specific customization. A Comet storage object is a &amp;lt;key,value&amp;gt; pair. Each Comet node stores a collection of active storage objects (ASOs) that consist of a key, a value, and a set of handlers. Comet handlers run as a result of timers or storage operations, such as get or put, allowing an ASO to take dynamic, application-specific actions to customize its behaviour. Handlers are written in a simple sandboxed extension language, providing safety and isolation. An ASO can modify its environment, monitor its execution, and make dynamic decisions about its state.&lt;br /&gt;
&lt;br /&gt;
The researchers try to provide the ability to extend a DHT without requiring a substantial investment of effort to modify its implementation. They implement isolation and safety by restricting system access, restricting resource consumption, and restricting within-Comet communication.&lt;br /&gt;
&lt;br /&gt;
* Provides callbacks (akin to database triggers)&lt;br /&gt;
* Provides a DHT platform that is extensible at the application level&lt;br /&gt;
* Uses Lua&lt;br /&gt;
* Provided extensibility in an untrusted environment. Dynamo, by contrast, was extensible but only in a trusted environment.&lt;br /&gt;
* Why do we care? We don&#039;t, really. Why would you want this extensibility? You wouldn&#039;t; it isn&#039;t worth the cost. Current systems already allow for tunability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* If someone wants to understand consistent hashing in detail, here is a blog post that explains it really well; the blog has other great posts on distributed systems as well:&lt;br /&gt;
http://loveforprogramming.quora.com/Distributed-Systems-Part-1-A-peek-into-consistent-hashing&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19076</id>
		<title>DistOS 2014W Lecture 20</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19076"/>
		<updated>2014-04-24T15:36:48Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* A brief look at Open Source */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cassandra ==&lt;br /&gt;
&lt;br /&gt;
Cassandra is essentially running a BigTable interface on top of a Dynamo infrastructure.  BigTable uses GFS&#039; built-in replication and Chubby for locking.  Cassandra uses gossip algorithms (similar to Dynamo): [http://dl.acm.org/citation.cfm?id=1529983 Scuttlebutt].  &lt;br /&gt;
&lt;br /&gt;
=== A brief look at Open Source ===&lt;br /&gt;
&lt;br /&gt;
Initially, Anil talked about Google&#039;s versus Facebook&#039;s approaches to technology.&lt;br /&gt;
* Google has been good at publishing papers on their distributed system structure. &lt;br /&gt;
* Google developed its technology internally and used it for competitive advantage.&lt;br /&gt;
* People try to implement Google’s methods in the open source arena.&lt;br /&gt;
* Google had proprietary technology but Facebook wanted open design.&lt;br /&gt;
* Facebook developed its technology in an open source manner. They needed to create an open source community to keep up. &lt;br /&gt;
* Facebook managed to be very good at contributing to the open source projects they were part of, and at staying active in the open source projects they start. Google works internally and pitches their code &amp;quot;over the fence.&amp;quot;&lt;br /&gt;
* He talked a little about licences. With GPLv3, you have to provide source code with the binary; under the AGPL, source must also be provided when the software is offered as a network service.&lt;br /&gt;
* GFS is optimized for data-analysis clusters.&lt;br /&gt;
&lt;br /&gt;
While discussing HBase versus Cassandra, we asked why two projects with the same notion are supported: Apache works as a community. For any tool in CS, particularly software tools, it&#039;s actually important to have more than one good implementation. The only time this doesn&#039;t happen is because of market realities. &lt;br /&gt;
&lt;br /&gt;
Hadoop is a set of technologies that represent the open source equivalent of&lt;br /&gt;
Google&#039;s infrastructure&lt;br /&gt;
* Cassandra -&amp;gt; ???&lt;br /&gt;
* HBase -&amp;gt; BigTable&lt;br /&gt;
* HDFS -&amp;gt; GFS&lt;br /&gt;
* Zookeeper -&amp;gt; Chubby&lt;br /&gt;
&lt;br /&gt;
=== Back to Cassandra ===&lt;br /&gt;
&lt;br /&gt;
* Cassandra basically takes a key-value store system like Dynamo and extends it to look like BigTable.&lt;br /&gt;
* Not just a key value store. It is a multi dimensional map. You can look up  different columns, etc. The data is more structured than a Key-Value store.&lt;br /&gt;
* In a key value store, you can only look up the key. Cassandra is much richer  than this.&lt;br /&gt;
* A fundamental difference in Cassandra is that adding columns is trivial. &lt;br /&gt;
&lt;br /&gt;
Bigtable vs. Cassandra:&lt;br /&gt;
* Bigtable and Cassandra expose similar APIs.&lt;br /&gt;
* Cassandra seems to be lighter weight.&lt;br /&gt;
* Bigtable depends on GFS; Cassandra depends on the server&#039;s local file system. Anil feels a Cassandra cluster is easy to set up. &lt;br /&gt;
* Bigtable is designed for stream-oriented batch processing; Cassandra is for handling online/realtime/high-speed workloads.&lt;br /&gt;
&lt;br /&gt;
Schema design is explained through the inbox-search example, but the paper does not make clear what the table will actually look like. Anil thinks they store a lot of data along with the messages, which makes for a messy table.&lt;br /&gt;
&lt;br /&gt;
Apache Zookeeper is used for distributed configuration. It will also bootstrap and configure a new node. It is similar to Chubby. Zookeeper is for node-level information. The Gossip protocol is more about key partitioning information and distributing that information amongst nodes. &lt;br /&gt;
&lt;br /&gt;
Cassandra uses a modified version of the Accrual Failure Detector. The idea of accrual failure detection is that the failure-detection module emits a value representing a suspicion level for each monitored node. The idea is to express this value, phi, on a scale that is dynamically adjusted to reflect network and load conditions at the monitored nodes.&lt;br /&gt;
&lt;br /&gt;
Files are written to disk sequentially and are never mutated. This way, reading a file does not require locks. Garbage collection takes care of deletion.&lt;br /&gt;
&lt;br /&gt;
Cassandra writes in an immutable way, like functional programming. There is no assignment in functional programming; it tries to eliminate side effects. Data is simply bound: you associate a name with a value. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Cassandra - &lt;br /&gt;
* Uses consistent hashing (like most DHTs)&lt;br /&gt;
* Lighter weight &lt;br /&gt;
* Almost all of the readings are part of Apache&lt;br /&gt;
* Designed more for online updates, for interactive, lower-latency use &lt;br /&gt;
* Once they write to disk, they only read back&lt;br /&gt;
* Scalable multi-master database with no single point of failure&lt;br /&gt;
* Reason for not giving out the complete detail on the table schema&lt;br /&gt;
* Probably not just inbox search&lt;br /&gt;
* All data in one row of a table &lt;br /&gt;
* It&#039;s not just a key-value store; a row holds a big blob of data. &lt;br /&gt;
* Gossip-based protocol - Scuttlebutt. Every node is aware of every other.&lt;br /&gt;
* Fixed circular ring &lt;br /&gt;
* The consistency issue is not addressed at all. Writes are done in an immutable way and never changed. &lt;br /&gt;
&lt;br /&gt;
Older style network protocol - token rings&lt;br /&gt;
What sort of computational systems avoid changing data?&lt;br /&gt;
Systems talking about implementing functional like semantics.&lt;br /&gt;
&lt;br /&gt;
== Comet ==&lt;br /&gt;
&lt;br /&gt;
The major idea behind Comet is triggers/callbacks.  There is an extensive literature in extensible operating systems, basically adding code to the operating system to better suit my application.  &amp;quot;Generally, extensible systems suck.&amp;quot; -[[User:Soma]] This was popular before operating systems were open source.&lt;br /&gt;
&lt;br /&gt;
[https://www.usenix.org/conference/osdi10/comet-active-distributed-key-value-store The presentation video of Comet]&lt;br /&gt;
&lt;br /&gt;
Comet seeks to greatly expand the application space for key-value storage systems through application-specific customization. A Comet storage object is a &amp;lt;key,value&amp;gt; pair. Each Comet node stores a collection of active storage objects (ASOs) that consist of a key, a value, and a set of handlers. Comet handlers run as a result of timers or storage operations, such as get or put, allowing an ASO to take dynamic, application-specific actions to customize its behaviour. Handlers are written in a simple sandboxed extension language, providing safety and isolation. An ASO can modify its environment, monitor its execution, and make dynamic decisions about its state.&lt;br /&gt;
&lt;br /&gt;
The researchers try to provide the ability to extend a DHT without requiring a substantial investment of effort to modify its implementation. They implement isolation and safety by restricting system access, restricting resource consumption, and restricting within-Comet communication.&lt;br /&gt;
&lt;br /&gt;
* Provides callbacks (akin to database triggers)&lt;br /&gt;
* Provides a DHT platform that is extensible at the application level&lt;br /&gt;
* Uses Lua&lt;br /&gt;
* Provided extensibility in an untrusted environment. Dynamo, by contrast, was extensible but only in a trusted environment.&lt;br /&gt;
* Why do we care? We don&#039;t, really. Why would you want this extensibility? You wouldn&#039;t; it isn&#039;t worth the cost. Current systems already allow for tunability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* If someone wants to understand consistent hashing in detail, here is a blog post that explains it really well; the blog has other great posts on distributed systems as well:&lt;br /&gt;
http://loveforprogramming.quora.com/Distributed-Systems-Part-1-A-peek-into-consistent-hashing&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19075</id>
		<title>DistOS 2014W Lecture 20</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19075"/>
		<updated>2014-04-24T15:26:25Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* A brief look at Open Source */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cassandra ==&lt;br /&gt;
&lt;br /&gt;
Cassandra is essentially running a BigTable interface on top of a Dynamo infrastructure.  BigTable uses GFS&#039; built-in replication and Chubby for locking.  Cassandra uses gossip algorithms (similar to Dynamo): [http://dl.acm.org/citation.cfm?id=1529983 Scuttlebutt].  &lt;br /&gt;
&lt;br /&gt;
=== A brief look at Open Source ===&lt;br /&gt;
&lt;br /&gt;
Initially, Anil talked about Google&#039;s versus Facebook&#039;s approaches to technology.&lt;br /&gt;
* Google has been good at publishing papers on their distributed system structure. &lt;br /&gt;
* Google developed its technology internally and used it for competitive advantage.&lt;br /&gt;
* People try to implement Google’s methods in the open source arena.&lt;br /&gt;
* Google had proprietary technology but Facebook wanted open design.&lt;br /&gt;
* Facebook developed its technology in an open source manner. They needed to create an open source community to keep up. &lt;br /&gt;
* Facebook managed to be very good at contributing to the open source projects they were part of, and at staying active in the open source projects they start. Google works internally and pitches their code &amp;quot;over the fence.&amp;quot;&lt;br /&gt;
* He talked a little about licences. With GPLv3, you have to provide source code with the binary; under the AGPL, source must also be provided when the software is offered as a network service.&lt;br /&gt;
&lt;br /&gt;
While discussing HBase versus Cassandra, we asked why two projects with the same notion are supported: Apache works as a community. For any tool in CS, particularly software tools, it&#039;s actually important to have more than one good implementation. The only time this doesn&#039;t happen is because of market realities. &lt;br /&gt;
&lt;br /&gt;
Hadoop is a set of technologies that represent the open source equivalent of&lt;br /&gt;
Google&#039;s infrastructure&lt;br /&gt;
* Cassandra -&amp;gt; ???&lt;br /&gt;
* HBase -&amp;gt; BigTable&lt;br /&gt;
* HDFS -&amp;gt; GFS&lt;br /&gt;
* Zookeeper -&amp;gt; Chubby&lt;br /&gt;
&lt;br /&gt;
=== Back to Cassandra ===&lt;br /&gt;
&lt;br /&gt;
* Cassandra basically takes a key-value store system like Dynamo and extends it to look like BigTable.&lt;br /&gt;
* Not just a key value store. It is a multi dimensional map. You can look up  different columns, etc. The data is more structured than a Key-Value store.&lt;br /&gt;
* In a key value store, you can only look up the key. Cassandra is much richer  than this.&lt;br /&gt;
* A fundamental difference in Cassandra is that adding columns is trivial. &lt;br /&gt;
&lt;br /&gt;
Bigtable vs. Cassandra:&lt;br /&gt;
* Bigtable and Cassandra exposes similar APIs.&lt;br /&gt;
* Cassandra seems to be lighter weight.&lt;br /&gt;
* Bigtable depends on GFS; Cassandra depends on the server&#039;s local file system. Anil feels a Cassandra cluster is easy to set up. &lt;br /&gt;
* Bigtable is designed for stream-oriented batch processing; Cassandra is designed to handle online/realtime/high-speed workloads.&lt;br /&gt;
&lt;br /&gt;
Schema design is explained through the inbox-search example, but the paper does not make clear what the table will actually look like. Anil thinks they store a lot of data with the messages, which makes the table messy.&lt;br /&gt;
	&lt;br /&gt;
Apache Zookeeper is used for distributed configuration. It will also bootstrap and configure a new node. It is similar to Chubby. Zookeeper is for node level information. The Gossip protocol is more about key partitioning information and distributing that information amongst nodes. &lt;br /&gt;
&lt;br /&gt;
Cassandra uses a modified version of the Accrual Failure Detector. The idea of accrual failure detection is that the failure-detection module emits a value which represents a suspicion level for each monitored node, rather than a binary up/down verdict. The value of phi is expressed on a scale that is dynamically adjusted to reflect network and load conditions at the monitored nodes.&lt;br /&gt;
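The phi calculation can be sketched briefly. This is a toy illustration only, assuming exponentially distributed heartbeat inter-arrival times; the class and variable names are invented for the example, not taken from Cassandra&#039;s code.&lt;br /&gt;

```python
import math
from collections import deque

class PhiAccrualDetector:
    """Toy accrual failure detector: emits a suspicion level (phi)
    instead of a binary alive/dead verdict. Assumes heartbeat gaps
    are exponentially distributed (a simplification)."""

    def __init__(self, window=100):
        self.intervals = deque(maxlen=window)  # recent heartbeat gaps
        self.last = None

    def heartbeat(self, now):
        if self.last is not None:
            self.intervals.append(now - self.last)
        self.last = now

    def phi(self, now):
        # phi = -log10(probability the node is still alive given the
        # silence so far); it grows smoothly as heartbeats stay absent.
        if not self.intervals:
            return 0.0
        mean = sum(self.intervals) / len(self.intervals)
        p_alive = math.exp(-(now - self.last) / mean)
        return -math.log10(max(p_alive, 1e-12))
```

With heartbeats arriving every second, phi is roughly 0.43 one second after the last heartbeat and roughly 2.2 after five seconds of silence, so a cutoff can be tuned per deployment instead of being hard-coded.&lt;br /&gt;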
&lt;br /&gt;
Files are written to disk sequentially and are never mutated. This way, reading a file does not require locks; garbage collection takes care of deletion.&lt;br /&gt;
&lt;br /&gt;
Cassandra writes in an immutable way, like functional programming. There is no assignment in functional programming; it tries to eliminate side effects. Data is simply bound: you associate a name with a value. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Cassandra - &lt;br /&gt;
* Uses consistent hashing (like most DHTs)&lt;br /&gt;
* Lighter weight &lt;br /&gt;
* Almost all of the readings are part of Apache&lt;br /&gt;
* Designed more for online, interactive, lower-latency updates &lt;br /&gt;
* Once files are written to disk, they are only read back, never modified&lt;br /&gt;
* Scalable multi master database with no single point of failure&lt;br /&gt;
* Reason for not giving out the complete detail on the table schema&lt;br /&gt;
* Probably not just inbox search&lt;br /&gt;
* All data in one row of a table &lt;br /&gt;
* It&#039;s not just a key-value store mapping keys to big blobs of data. &lt;br /&gt;
* Gossip-based protocol - Scuttlebutt. Every node is aware of every other.&lt;br /&gt;
* Fixed circular ring &lt;br /&gt;
* Consistency issues are not really addressed; writes are done in an immutable way and never changed. &lt;br /&gt;
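The consistent hashing and fixed-ring points above can be illustrated with a small sketch. This is a generic toy ring, not Cassandra&#039;s actual partitioner; the node names and vnode count are invented.&lt;br /&gt;

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring of the kind Cassandra and most DHTs
    use; node names and the vnode count here are invented."""

    def __init__(self, nodes, vnodes=8):
        self.ring = []  # sorted (position, node) pairs on the circle
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(node + ":" + str(i)), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        # Map a string onto a fixed circular space of 2 ** 32 slots.
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

    def lookup(self, key):
        # A key belongs to the first node clockwise from its position;
        # bisect finds that successor, wrapping around the circle.
        idx = bisect.bisect(self.ring, (self._hash(key), ""))
        return self.ring[idx % len(self.ring)][1]
```

The property that matters: removing a node only remaps the keys that node owned; every key owned by a surviving node keeps its owner, which is why nodes can join and leave the ring cheaply.&lt;br /&gt;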
&lt;br /&gt;
Older style network protocol - token rings&lt;br /&gt;
What sort of computational systems avoid changing data?&lt;br /&gt;
Systems that implement functional-programming-like semantics.&lt;br /&gt;
&lt;br /&gt;
== Comet ==&lt;br /&gt;
&lt;br /&gt;
The major idea behind Comet is triggers/callbacks.  There is an extensive literature on extensible operating systems, basically adding code to the operating system to better suit one&#039;s application.  &amp;quot;Generally, extensible systems suck.&amp;quot; -[[User:Soma]] This was popular before operating systems were open source.&lt;br /&gt;
&lt;br /&gt;
[https://www.usenix.org/conference/osdi10/comet-active-distributed-key-value-store The presentation video of Comet]&lt;br /&gt;
&lt;br /&gt;
Comet seeks to greatly expand the application space for key-value storage systems through application-specific customization. A Comet storage object is a &amp;lt;key, value&amp;gt; pair. Each Comet node stores a collection of active storage objects (ASOs) that consist of a key, a value, and a set of handlers. Comet handlers run as a result of timers or storage operations, such as get or put, allowing an ASO to take dynamic, application-specific actions to customize its behaviour. Handlers are written in a simple sandboxed extension language, providing safety and isolation. An ASO can modify its environment, monitor its execution, and make dynamic decisions about its state.&lt;br /&gt;
&lt;br /&gt;
The researchers try to provide the ability to extend a DHT without requiring a substantial investment of effort to modify its implementation. They implement isolation and safety by restricting system access, restricting resource consumption, and restricting within-Comet communication.&lt;br /&gt;
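The handler mechanism can be sketched as follows. Comet&#039;s real handlers are sandboxed Lua scripts; this Python stand-in, with invented names, only illustrates the idea of get/put hooks attached to a stored object.&lt;br /&gt;

```python
class ActiveStorageObject:
    """Toy Comet-style active storage object (ASO): a key, a value,
    and handlers that fire on storage operations. Comet's real
    handlers are sandboxed Lua, not Python."""

    def __init__(self, key, value, handlers=None):
        self.key = key
        self.value = value
        self.handlers = handlers or {}
        self.gets = 0  # per-object state that handlers may inspect

    def get(self):
        self.gets += 1
        hook = self.handlers.get("on_get")
        if hook:
            return hook(self)  # handler may rewrite the returned value
        return self.value

    def put(self, new_value):
        hook = self.handlers.get("on_put")
        if hook:
            new_value = hook(self, new_value)
        self.value = new_value

def limited_reads(aso):
    # Hypothetical application policy: serve the value for the first
    # three reads only, then pretend the object has expired.
    if aso.gets in (1, 2, 3):
        return aso.value
    return None
```

Attaching limited_reads as the on_get handler makes the object return its value only for the first three reads - the kind of application-specific policy an ASO enables without modifying the DHT itself.&lt;br /&gt;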
&lt;br /&gt;
* Provides callbacks (a.k.a. database triggers)&lt;br /&gt;
* Provides DHT platform that is extensible at the application level&lt;br /&gt;
* Uses Lua&lt;br /&gt;
* Provided extensibility in an untrusted environment. Dynamo, by contrast, was extensible but only in a trusted environment.&lt;br /&gt;
* Why do we care? We don&#039;t really. Why would you want this extensibility? You wouldn&#039;t; it isn&#039;t worth the cost. Current systems already allow for tuneability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* If someone wants to understand consistent hashing in detail, here is a blog post which explains it really well; the blog has other great posts on distributed systems as well -&lt;br /&gt;
http://loveforprogramming.quora.com/Distributed-Systems-Part-1-A-peek-into-consistent-hashing&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19074</id>
		<title>DistOS 2014W Lecture 20</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19074"/>
		<updated>2014-04-24T15:18:45Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* A brief look at Open Source */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cassandra ==&lt;br /&gt;
&lt;br /&gt;
Cassandra is essentially running a BigTable interface on top of a Dynamo infrastructure.  BigTable uses GFS&#039; built-in replication and Chubby for locking.  Cassandra uses gossip algorithms (similar to Dynamo): [http://dl.acm.org/citation.cfm?id=1529983 Scuttlebutt].  &lt;br /&gt;
&lt;br /&gt;
=== A brief look at Open Source ===&lt;br /&gt;
&lt;br /&gt;
Initially, Anil talked about Google&#039;s versus Facebook&#039;s approach to technology.&lt;br /&gt;
* Google has been good at publishing papers on their distributed system structure. &lt;br /&gt;
* Google developed its technology internally and used it for competitive advantage. &lt;br /&gt;
* Facebook developed its technology in open source manner. They needed to create an open source community to keep up.&lt;br /&gt;
* Anil talked a bit about licences. Under GPLv3 you must provide the source code along with the binary; the AGPL goes further, requiring that source also be offered when the software is provided as a network service.&lt;br /&gt;
&lt;br /&gt;
While discussing HBase versus Cassandra, the class discussed why two projects with the same goal are both supported under Apache as a community. For any tool in CS, particularly software tools, it&#039;s actually important to have more than one good implementation; the only time that doesn&#039;t happen is because of market realities. &lt;br /&gt;
&lt;br /&gt;
Hadoop is a set of technologies that represent the open source equivalent of&lt;br /&gt;
Google&#039;s infrastructure&lt;br /&gt;
* Cassandra -&amp;gt; ???&lt;br /&gt;
* HBase -&amp;gt; BigTable&lt;br /&gt;
* HDFS -&amp;gt; GFS&lt;br /&gt;
* Zookeeper -&amp;gt; Chubby&lt;br /&gt;
&lt;br /&gt;
=== Back to Cassandra ===&lt;br /&gt;
&lt;br /&gt;
* Cassandra basically takes a key-value store system like Dynamo and then extends it to look like BigTable.&lt;br /&gt;
* Not just a key value store. It is a multi dimensional map. You can look up  different columns, etc. The data is more structured than a Key-Value store.&lt;br /&gt;
* In a key value store, you can only look up the key. Cassandra is much richer  than this.&lt;br /&gt;
* A fundamental difference in Cassandra is that adding columns is trivial. &lt;br /&gt;
&lt;br /&gt;
Bigtable vs. Cassandra:&lt;br /&gt;
* Bigtable and Cassandra expose similar APIs.&lt;br /&gt;
* Cassandra seems to be lighter weight.&lt;br /&gt;
* Bigtable depends on GFS; Cassandra depends on the server&#039;s local file system. Anil feels a Cassandra cluster is easy to set up. &lt;br /&gt;
* Bigtable is designed for stream-oriented batch processing; Cassandra is designed to handle online/realtime/high-speed workloads.&lt;br /&gt;
&lt;br /&gt;
Schema design is explained through the inbox-search example, but the paper does not make clear what the table will actually look like. Anil thinks they store a lot of data with the messages, which makes the table messy.&lt;br /&gt;
	&lt;br /&gt;
Apache Zookeeper is used for distributed configuration. It will also bootstrap and configure a new node. It is similar to Chubby. Zookeeper is for node level information. The Gossip protocol is more about key partitioning information and distributing that information amongst nodes. &lt;br /&gt;
&lt;br /&gt;
Cassandra uses a modified version of the Accrual Failure Detector. The idea of accrual failure detection is that the failure-detection module emits a value which represents a suspicion level for each monitored node, rather than a binary up/down verdict. The value of phi is expressed on a scale that is dynamically adjusted to reflect network and load conditions at the monitored nodes.&lt;br /&gt;
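The phi calculation can be sketched briefly. This is a toy illustration only, assuming exponentially distributed heartbeat inter-arrival times; the class and variable names are invented for the example, not taken from Cassandra&#039;s code.&lt;br /&gt;

```python
import math
from collections import deque

class PhiAccrualDetector:
    """Toy accrual failure detector: emits a suspicion level (phi)
    instead of a binary alive/dead verdict. Assumes heartbeat gaps
    are exponentially distributed (a simplification)."""

    def __init__(self, window=100):
        self.intervals = deque(maxlen=window)  # recent heartbeat gaps
        self.last = None

    def heartbeat(self, now):
        if self.last is not None:
            self.intervals.append(now - self.last)
        self.last = now

    def phi(self, now):
        # phi = -log10(probability the node is still alive given the
        # silence so far); it grows smoothly as heartbeats stay absent.
        if not self.intervals:
            return 0.0
        mean = sum(self.intervals) / len(self.intervals)
        p_alive = math.exp(-(now - self.last) / mean)
        return -math.log10(max(p_alive, 1e-12))
```

With heartbeats arriving every second, phi is roughly 0.43 one second after the last heartbeat and roughly 2.2 after five seconds of silence, so a cutoff can be tuned per deployment instead of being hard-coded.&lt;br /&gt;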
&lt;br /&gt;
Files are written to disk sequentially and are never mutated. This way, reading a file does not require locks; garbage collection takes care of deletion.&lt;br /&gt;
&lt;br /&gt;
Cassandra writes in an immutable way, like functional programming. There is no assignment in functional programming; it tries to eliminate side effects. Data is simply bound: you associate a name with a value. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Cassandra - &lt;br /&gt;
* Uses consistent hashing (like most DHTs)&lt;br /&gt;
* Lighter weight &lt;br /&gt;
* Almost all of the readings are part of Apache&lt;br /&gt;
* Designed more for online, interactive, lower-latency updates &lt;br /&gt;
* Once files are written to disk, they are only read back, never modified&lt;br /&gt;
* Scalable multi master database with no single point of failure&lt;br /&gt;
* Reason for not giving out the complete detail on the table schema&lt;br /&gt;
* Probably not just inbox search&lt;br /&gt;
* All data in one row of a table &lt;br /&gt;
* It&#039;s not just a key-value store mapping keys to big blobs of data. &lt;br /&gt;
* Gossip-based protocol - Scuttlebutt. Every node is aware of every other.&lt;br /&gt;
* Fixed circular ring &lt;br /&gt;
* Consistency issues are not really addressed; writes are done in an immutable way and never changed. &lt;br /&gt;
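The consistent hashing and fixed-ring points above can be illustrated with a small sketch. This is a generic toy ring, not Cassandra&#039;s actual partitioner; the node names and vnode count are invented.&lt;br /&gt;

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring of the kind Cassandra and most DHTs
    use; node names and the vnode count here are invented."""

    def __init__(self, nodes, vnodes=8):
        self.ring = []  # sorted (position, node) pairs on the circle
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(node + ":" + str(i)), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        # Map a string onto a fixed circular space of 2 ** 32 slots.
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

    def lookup(self, key):
        # A key belongs to the first node clockwise from its position;
        # bisect finds that successor, wrapping around the circle.
        idx = bisect.bisect(self.ring, (self._hash(key), ""))
        return self.ring[idx % len(self.ring)][1]
```

The property that matters: removing a node only remaps the keys that node owned; every key owned by a surviving node keeps its owner, which is why nodes can join and leave the ring cheaply.&lt;br /&gt;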
&lt;br /&gt;
Older style network protocol - token rings&lt;br /&gt;
What sort of computational systems avoid changing data?&lt;br /&gt;
Systems that implement functional-programming-like semantics.&lt;br /&gt;
&lt;br /&gt;
== Comet ==&lt;br /&gt;
&lt;br /&gt;
The major idea behind Comet is triggers/callbacks.  There is an extensive literature on extensible operating systems, basically adding code to the operating system to better suit one&#039;s application.  &amp;quot;Generally, extensible systems suck.&amp;quot; -[[User:Soma]] This was popular before operating systems were open source.&lt;br /&gt;
&lt;br /&gt;
[https://www.usenix.org/conference/osdi10/comet-active-distributed-key-value-store The presentation video of Comet]&lt;br /&gt;
&lt;br /&gt;
Comet seeks to greatly expand the application space for key-value storage systems through application-specific customization. A Comet storage object is a &amp;lt;key, value&amp;gt; pair. Each Comet node stores a collection of active storage objects (ASOs) that consist of a key, a value, and a set of handlers. Comet handlers run as a result of timers or storage operations, such as get or put, allowing an ASO to take dynamic, application-specific actions to customize its behaviour. Handlers are written in a simple sandboxed extension language, providing safety and isolation. An ASO can modify its environment, monitor its execution, and make dynamic decisions about its state.&lt;br /&gt;
&lt;br /&gt;
The researchers try to provide the ability to extend a DHT without requiring a substantial investment of effort to modify its implementation. They implement isolation and safety by restricting system access, restricting resource consumption, and restricting within-Comet communication.&lt;br /&gt;
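The handler mechanism can be sketched as follows. Comet&#039;s real handlers are sandboxed Lua scripts; this Python stand-in, with invented names, only illustrates the idea of get/put hooks attached to a stored object.&lt;br /&gt;

```python
class ActiveStorageObject:
    """Toy Comet-style active storage object (ASO): a key, a value,
    and handlers that fire on storage operations. Comet's real
    handlers are sandboxed Lua, not Python."""

    def __init__(self, key, value, handlers=None):
        self.key = key
        self.value = value
        self.handlers = handlers or {}
        self.gets = 0  # per-object state that handlers may inspect

    def get(self):
        self.gets += 1
        hook = self.handlers.get("on_get")
        if hook:
            return hook(self)  # handler may rewrite the returned value
        return self.value

    def put(self, new_value):
        hook = self.handlers.get("on_put")
        if hook:
            new_value = hook(self, new_value)
        self.value = new_value

def limited_reads(aso):
    # Hypothetical application policy: serve the value for the first
    # three reads only, then pretend the object has expired.
    if aso.gets in (1, 2, 3):
        return aso.value
    return None
```

Attaching limited_reads as the on_get handler makes the object return its value only for the first three reads - the kind of application-specific policy an ASO enables without modifying the DHT itself.&lt;br /&gt;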
&lt;br /&gt;
* Provides callbacks (a.k.a. database triggers)&lt;br /&gt;
* Provides DHT platform that is extensible at the application level&lt;br /&gt;
* Uses Lua&lt;br /&gt;
* Provided extensibility in an untrusted environment. Dynamo, by contrast, was extensible but only in a trusted environment.&lt;br /&gt;
* Why do we care? We don&#039;t really. Why would you want this extensibility? You wouldn&#039;t; it isn&#039;t worth the cost. Current systems already allow for tuneability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* If someone wants to understand consistent hashing in detail, here is a blog post which explains it really well; the blog has other great posts on distributed systems as well -&lt;br /&gt;
http://loveforprogramming.quora.com/Distributed-Systems-Part-1-A-peek-into-consistent-hashing&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19073</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19073"/>
		<updated>2014-04-24T15:11:35Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Class Feedback for the Course */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / to parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances with games primarily&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
* In general you can model modern programs (Java, C, etc.) as finite state machines, which are not parallelizable&lt;br /&gt;
* Today we deal with processor limitations by using &amp;quot;experts&amp;quot; to build the system which results in a very specialized solution usually in the cloud&lt;br /&gt;
* Authors have found the problem but not really the process&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra (hard to parallelize)&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo (parallelizable)&lt;br /&gt;
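As a quick illustration of why Monte Carlo methods are embarrassingly parallel, here is a toy pi estimator (our own example, not from the paper): each chunk is fully independent, so the chunks could be farmed out to cores or machines and the partial counts simply added.&lt;br /&gt;

```python
import operator
import random

def count_hits(seed, n):
    """Count random points in the unit square that fall inside the
    quarter circle of radius 1; each chunk is independent of the rest."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        hits += operator.le(x * x + y * y, 1.0)  # 1 if inside, else 0
    return hits

def estimate_pi(total=400000, chunks=4):
    # "Map": each chunk runs with no shared state, so this loop could
    # be a process pool or a MapReduce job. "Reduce": a plain sum.
    n = total // chunks
    hits = sum(count_hits(seed, n) for seed in range(chunks))
    return 4.0 * hits / (n * chunks)
```

The key point is that no chunk ever needs another chunk&#039;s data - exactly the structure finite state machines lack, which is why they sit at the other end of the parallelizability scale.&lt;br /&gt;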
&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines (hardest to parallelize)&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Pretty impressive on getting everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce &lt;br /&gt;
* Programs that run on distributed operating systems - applications that can be expected to be massively parallel - what sort of computational model is needed - Abstractions needed on top of the stack. &lt;br /&gt;
* Predictions about the processing power&lt;br /&gt;
* GPU&#039;s do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster in recent years. They just don&#039;t run fast enough. &lt;br /&gt;
* Games are about the only mainstream workloads that can&#039;t keep getting by on a single thread over time&lt;br /&gt;
* Low power &lt;br /&gt;
* Being able to run a smart phone with 100&#039;s of transistors - stalled with the sequential processing&lt;br /&gt;
* What do we need the additional processing power for? - Games - Games - Games&lt;br /&gt;
* Doomsday of the IT industry &lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;br /&gt;
* Linux was not a very general operating system when it started - it was hard-coded to the 386 processor it ran on. Now it runs everywhere; it has abstractions dealing with various aspects of hardware and architecture. Multiple layers of abstraction, because they were useful.&lt;br /&gt;
&lt;br /&gt;
==Summary==&lt;br /&gt;
Standard programming languages can be modeled as finite state machines (their execution models are finite state machines). It is advisable to concede defeat in the area of parallelizing finite state machines and focus on parallelizing other areas. We need to find multiple system architectures that can deal with each of these problems. How do we find abstractions to deal with these things?&lt;br /&gt;
&lt;br /&gt;
Generalizations are rarely generalized in the right way. Generalization has to be made at some level, but at the same time, you do not want to make generalizations without a clear justification. Otherwise, you’re likely to generalize it wrong. &lt;br /&gt;
&lt;br /&gt;
==Class Feedback for the Course==&lt;br /&gt;
* Challenging questions given in advance to lead and direct the group discussions&lt;br /&gt;
* Having required questions on the Wiki&lt;br /&gt;
* Share and highlight any good reading responses with the rest of the class&lt;br /&gt;
* Having essay assignments replace essay format of midterm&lt;br /&gt;
* Incorporate more information about the history of computer science&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19072</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19072"/>
		<updated>2014-04-24T15:09:41Z</updated>

		<summary type="html">&lt;p&gt;36chambers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / to parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances with games primarily&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
* In general you can model modern programs (Java, C, etc.) as finite state machines, which are not parallelizable&lt;br /&gt;
* Today we deal with processor limitations by using &amp;quot;experts&amp;quot; to build the system which results in a very specialized solution usually in the cloud&lt;br /&gt;
* Authors have found the problem but not really the process&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra (hard to parallelize)&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo (parallelizable)&lt;br /&gt;
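As a quick illustration of why Monte Carlo methods are embarrassingly parallel, here is a toy pi estimator (our own example, not from the paper): each chunk is fully independent, so the chunks could be farmed out to cores or machines and the partial counts simply added.&lt;br /&gt;

```python
import operator
import random

def count_hits(seed, n):
    """Count random points in the unit square that fall inside the
    quarter circle of radius 1; each chunk is independent of the rest."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        hits += operator.le(x * x + y * y, 1.0)  # 1 if inside, else 0
    return hits

def estimate_pi(total=400000, chunks=4):
    # "Map": each chunk runs with no shared state, so this loop could
    # be a process pool or a MapReduce job. "Reduce": a plain sum.
    n = total // chunks
    hits = sum(count_hits(seed, n) for seed in range(chunks))
    return 4.0 * hits / (n * chunks)
```

The key point is that no chunk ever needs another chunk&#039;s data - exactly the structure finite state machines lack, which is why they sit at the other end of the parallelizability scale.&lt;br /&gt;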
&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines (hardest to parallelize)&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Pretty impressive on getting everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce &lt;br /&gt;
* Programs that run on distributed operating systems - applications that can be expected to be massively parallel - what sort of computational model is needed - Abstractions needed on top of the stack. &lt;br /&gt;
* Predictions about the processing power&lt;br /&gt;
* GPU&#039;s do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster in recent years. They just don&#039;t run fast enough. &lt;br /&gt;
* Games are about the only mainstream workloads that can&#039;t keep getting by on a single thread over time&lt;br /&gt;
* Low power &lt;br /&gt;
* Being able to run a smart phone with 100&#039;s of transistors - stalled with the sequential processing&lt;br /&gt;
* What do we need the additional processing power for? - Games - Games - Games&lt;br /&gt;
* Doomsday of the IT industry &lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;br /&gt;
* Linux was not a very general operating system when it started - it was hard-coded to the 386 processor it ran on. Now it runs everywhere; it has abstractions dealing with various aspects of hardware and architecture. Multiple layers of abstraction, because they were useful.&lt;br /&gt;
&lt;br /&gt;
==Summary==&lt;br /&gt;
Standard programming languages can be modeled as finite state machines (their execution models are finite state machines). It is advisable to concede defeat in the area of parallelizing finite state machines and focus on parallelizing other areas. We need to find multiple system architectures that can deal with each of these problems. How do we find abstractions to deal with these things?&lt;br /&gt;
&lt;br /&gt;
Generalizations are rarely generalized in the right way. Generalization has to be made at some level, but at the same time, you do not want to make generalizations without a clear justification. Otherwise, you’re likely to generalize it wrong. &lt;br /&gt;
&lt;br /&gt;
==Class Feedback for the Course==&lt;br /&gt;
* Challenging questions given in advance to lead and direct the group discussions&lt;br /&gt;
* Having required questions on the Wiki&lt;br /&gt;
* Share and highlight any good reading responses with the rest of the class&lt;br /&gt;
* Having essay assignments &lt;br /&gt;
* Incorporate more information about the history of computer science&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19071</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19071"/>
		<updated>2014-04-24T14:55:15Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* The Landscape of Parallel Computing Research: A View from Berkeley */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / to parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances with games primarily&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
* Standard programming languages can be modeled as finite state machines (their execution models are finite state machines)&lt;br /&gt;
* In general you can model modern programs (Java, C, etc.) as finite state machines, which are not parallelizable&lt;br /&gt;
* Better to concede defeat in the area of parallelizing finite state machines and focus on parallelizing other areas&lt;br /&gt;
* Today we deal with processor limitations by using &amp;quot;experts&amp;quot; to build the system which results in a very specialized solution usually in the cloud&lt;br /&gt;
* Authors have found the problem but not really the process&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra (hard to parallelize)&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo (parallelizable)&lt;br /&gt;
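As a quick illustration of why Monte Carlo methods are embarrassingly parallel, here is a toy pi estimator (our own example, not from the paper): each chunk is fully independent, so the chunks could be farmed out to cores or machines and the partial counts simply added.&lt;br /&gt;

```python
import operator
import random

def count_hits(seed, n):
    """Count random points in the unit square that fall inside the
    quarter circle of radius 1; each chunk is independent of the rest."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        hits += operator.le(x * x + y * y, 1.0)  # 1 if inside, else 0
    return hits

def estimate_pi(total=400000, chunks=4):
    # "Map": each chunk runs with no shared state, so this loop could
    # be a process pool or a MapReduce job. "Reduce": a plain sum.
    n = total // chunks
    hits = sum(count_hits(seed, n) for seed in range(chunks))
    return 4.0 * hits / (n * chunks)
```

The key point is that no chunk ever needs another chunk&#039;s data - exactly the structure finite state machines lack, which is why they sit at the other end of the parallelizability scale.&lt;br /&gt;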
&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines (hardest to parallelize)&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Pretty impressive on getting everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce &lt;br /&gt;
* Programs that run on distributed operating systems - applications that can be expected to be massively parallel - what sort of computational model is needed - Abstractions needed on top of the stack. &lt;br /&gt;
* Predictions about the processing power&lt;br /&gt;
* GPU&#039;s do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster in recent years. They just don&#039;t run fast enough. &lt;br /&gt;
* Games are about the only mainstream workloads that can&#039;t keep getting by on a single thread over time&lt;br /&gt;
* Low power &lt;br /&gt;
* Being able to run a smart phone with 100&#039;s of transistors - stalled with the sequential processing&lt;br /&gt;
* What do we need the additional processing power for? - Games - Games - Games&lt;br /&gt;
* Doomsday of the IT industry &lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;br /&gt;
* Linux was not a very general operating system when it started - it was hard-coded to the 386 processor it ran on. Now it runs everywhere; it has abstractions dealing with various aspects of hardware and architecture. Multiple layers of abstraction, because they were useful.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19068</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19068"/>
		<updated>2014-04-24T14:34:15Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Class review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries/researchers wanted the computer to be and what it has become. Putting this in other words, what was considered fundamental in those days, and where do those ideas stand today? It is notable that features that were easier to implement using simple mechanisms were carried forward, whereas those which demanded more complex systems, or which were found to add little value in the near future, were pushed down the priority order. In the same context, the following observations were made: (1) a truly distributed computational infrastructure really makes sense only when we have something to distribute; (2) use cases drive large distributed systems - a good example is the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Engelbart died on July 2, 2013.&lt;br /&gt;
* Engelbart himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* “The Mother of all demos” is a nickname for Engelbart&#039;s 1968 computer demonstration at the Fall Joint Computer Conference in San Francisco. Engelbart had a vision of using computers to augment human intelligence.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time,&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
*** New languages used to be created all the time, whereas now it&#039;s been standardized.&lt;br /&gt;
*** The mouse got picked up quickly, but hyperlinks only got picked up later on.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided his team. They almost had a GUI; rather they had what we call today a virtual console, with a few things above.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to share explicitly. The resources shared were those it fundamentally makes sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
* Other interesting ideas in this work:&lt;br /&gt;
** His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunching numbers. This idea is represented in the NLS (&amp;quot;oN-Line System&amp;quot;).&lt;br /&gt;
** Engelbart thought of 3D scrolling (scrolling as in user movement/navigation in, for instance, a browser), whereas we still limit scrolling to 2D.&lt;br /&gt;
&lt;br /&gt;
* Some information about the NLS system:&lt;br /&gt;
** 1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
** 2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
** 3) The NLS system was the first to employ the practical use of:&lt;br /&gt;
*** a) hypertext links&lt;br /&gt;
*** b) the mouse &lt;br /&gt;
*** c) raster-scan video monitors &lt;br /&gt;
*** d) information organized by relevance &lt;br /&gt;
*** e) screen windowing&lt;br /&gt;
*** f) presentation programs &lt;br /&gt;
*** g) other modern computing concepts&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS; more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, unlike in the NLS vision, which had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Application&lt;br /&gt;
! Alto&lt;br /&gt;
! NLS&lt;br /&gt;
|-&lt;br /&gt;
| Text Processing&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Drawing (Graphics)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Programming Environments&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Documentation&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Email (Communication)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Reading&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| WYSIWYG&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Hardware Design&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
NLS and the Alto both had text processing, drawing, programming environments, and some form of email (communication). The Alto had WYSIWYG everywhere.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe, whereas NLS &#039;resource sharing&#039; was based around the mainframe. So essentially, NLS didn&#039;t have resource sharing, since it was just one machine (i.e., everything is already shared). The Alto had the idea of sharing via the network (e.g., a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused a lot less on &#039;hypertext&#039; and less on navigating deep information. It was designed around the idea of paper. It implemented existing metaphors and adapted them to the PC. Alto people came from a culture that really valued printed paper. It is important and interesting to make a note of technology that falls away or becomes obsolete.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19066</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19066"/>
		<updated>2014-04-24T14:33:35Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Class review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries and researchers wanted the computer to be, and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? Notably, features that were easy to implement with simple mechanisms were carried forward, whereas those that demanded more complex systems, or were found to add little value in the near future, fell by the wayside. In the same context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, a good example being the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Engelbart died on July 2, 2013.&lt;br /&gt;
* Engelbart himself called it an “online system”, in contrast to the offline composition of code on punched cards that was common in the day.&lt;br /&gt;
* “The Mother of all Demos” is a nickname for Engelbart&#039;s 1968 computer demonstration at the Fall Joint Computer Conference in San Francisco. Engelbart had a vision of utilizing computers to augment human intelligence.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time.&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
*** New languages used to be created all the time, whereas now languages have largely standardized.&lt;br /&gt;
*** The mouse got picked up quickly, but hyperlinks only got picked up later on.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC hired away much of Engelbart&#039;s team. NLS almost had a GUI; rather, it had what we would today call a virtual console, with a few features layered on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to share explicitly. The resources shared were those it fundamentally makes sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
* Other interesting ideas in this work:&lt;br /&gt;
** His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunching numbers. This idea is represented in the NLS (&amp;quot;oN-Line System&amp;quot;).&lt;br /&gt;
** Engelbart thought of 3D scrolling (scrolling as in user movement/navigation in, for instance, a browser), whereas we still limit scrolling to 2D.&lt;br /&gt;
&lt;br /&gt;
* Some information about the NLS system:&lt;br /&gt;
** 1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
** 2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
** 3) The NLS system was the first to employ the practical use of:&lt;br /&gt;
*** a) hypertext links,&lt;br /&gt;
*** b) the mouse, &lt;br /&gt;
*** c) raster-scan video monitors, &lt;br /&gt;
*** d) information organized by relevance, &lt;br /&gt;
*** e) screen windowing, &lt;br /&gt;
*** f) presentation programs, &lt;br /&gt;
*** g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS; more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, unlike in the NLS vision, which had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Application&lt;br /&gt;
! Alto&lt;br /&gt;
! NLS&lt;br /&gt;
|-&lt;br /&gt;
| Text Processing&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Drawing (Graphics)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Programming Environments&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Documentation&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Email (Communication)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Reading&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| WYSIWYG&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Hardware Design&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
NLS and the Alto both had text processing, drawing, programming environments, and some form of email (communication). The Alto had WYSIWYG everywhere.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe, whereas NLS &#039;resource sharing&#039; was based around the mainframe. So essentially, NLS didn&#039;t have resource sharing, since it was just one machine (i.e., everything is already shared). The Alto had the idea of sharing via the network (e.g., a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused a lot less on &#039;hypertext&#039; and less on navigating deep information. It was designed around the idea of paper. It implemented existing metaphors and adapted them to the PC. Alto people came from a culture that really valued printed paper. It is important and interesting to make a note of technology that falls away or becomes obsolete.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19065</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19065"/>
		<updated>2014-04-24T14:30:57Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Class review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries and researchers wanted the computer to be, and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? Notably, features that were easy to implement with simple mechanisms were carried forward, whereas those that demanded more complex systems, or were found to add little value in the near future, fell by the wayside. In the same context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, a good example being the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Engelbart died on July 2, 2013.&lt;br /&gt;
* Engelbart himself called it an “online system”, in contrast to the offline composition of code on punched cards that was common in the day.&lt;br /&gt;
* “The Mother of all Demos” is a nickname for Engelbart&#039;s 1968 computer demonstration at the Fall Joint Computer Conference in San Francisco. Engelbart had a vision of utilizing computers to augment human intelligence.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time.&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
*** New languages used to be created all the time, whereas now languages have largely standardized.&lt;br /&gt;
*** The mouse got picked up quickly, but hyperlinks only got picked up later on.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC hired away much of Engelbart&#039;s team. NLS almost had a GUI; rather, it had what we would today call a virtual console, with a few features layered on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to share explicitly. The resources shared were those it fundamentally makes sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
* Other interesting ideas in this work:&lt;br /&gt;
** His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunching numbers. This idea is represented in the NLS (&amp;quot;oN-Line System&amp;quot;).&lt;br /&gt;
** Engelbart thought of 3D scrolling (scrolling as in user movement/navigation in, for instance, a browser), whereas we still limit scrolling to 2D.&lt;br /&gt;
&lt;br /&gt;
* Some information about the NLS system:&lt;br /&gt;
** 1) NLS was a revolutionary computer collaboration system from the 1960s.&lt;br /&gt;
** 2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI).&lt;br /&gt;
** 3) The NLS system was the first to employ the practical use of:&lt;br /&gt;
*** a) hypertext links,&lt;br /&gt;
*** b) the mouse,&lt;br /&gt;
*** c) raster-scan video monitors,&lt;br /&gt;
*** d) information organized by relevance,&lt;br /&gt;
*** e) screen windowing,&lt;br /&gt;
*** f) presentation programs,&lt;br /&gt;
*** g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS; more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, unlike in the NLS vision, which had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Application&lt;br /&gt;
! Alto&lt;br /&gt;
! NLS&lt;br /&gt;
|-&lt;br /&gt;
| Text Processing&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Drawing (Graphics)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Programming Environments&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Documentation&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Email (Communication)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Reading&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| WYSIWYG&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Hardware Design&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
NLS and the Alto both had text processing, drawing, programming environments, and some form of email (communication). The Alto had WYSIWYG everywhere.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe, whereas NLS &#039;resource sharing&#039; was based around the mainframe. So essentially, NLS didn&#039;t have resource sharing, since it was just one machine (i.e., everything is already shared). The Alto had the idea of sharing via the network (e.g., a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused a lot less on &#039;hypertext&#039; and less on navigating deep information. It was designed around the idea of paper. It implemented existing metaphors and adapted them to the PC. Alto people came from a culture that really valued printed paper. It is important and interesting to make a note of technology that falls away or becomes obsolete.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19064</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19064"/>
		<updated>2014-04-24T14:21:50Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Class review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries and researchers wanted the computer to be, and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? Notably, features that were easy to implement with simple mechanisms were carried forward, whereas those that demanded more complex systems, or were found to add little value in the near future, fell by the wayside. In the same context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, a good example being the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Engelbart died on July 2, 2013.&lt;br /&gt;
* Engelbart himself called it an “online system”, in contrast to the offline composition of code on punched cards that was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time.&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
*** New languages used to be created all the time, whereas now languages have largely standardized.&lt;br /&gt;
*** The mouse got picked up quickly, but hyperlinks only got picked up later on.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC hired away much of Engelbart&#039;s team. NLS almost had a GUI; rather, it had what we would today call a virtual console, with a few features layered on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to share explicitly. The resources shared were those it fundamentally makes sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“Mother of all demos” is the nickname for Engelbart&#039;s 1968 demonstration, which showed how computers could help humans become smarter.&lt;br /&gt;
&lt;br /&gt;
*More interesting in this work is that:&lt;br /&gt;
&amp;quot;His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers.&amp;quot; This idea is embodied in the NLS (&amp;quot;oN-Line System&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program supporting curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS; more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Application&lt;br /&gt;
! Alto&lt;br /&gt;
! NLS&lt;br /&gt;
|-&lt;br /&gt;
| Text Processing&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Drawing (Graphics)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Programming Environments&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Documentation&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Email (Communication)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Reading&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| WYSIWYG&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Hardware Design&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
NLS and Alto both had text processing, drawing, programming environments, and some form of email (communication). The Alto had WYSIWYG everything.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe. NLS &#039;resource sharing&#039; was based around the mainframe. So essentially, NLS didn&#039;t have resource sharing since it was just one machine (i.e., everything is already shared). The Alto had the idea of sharing via the network (e.g., a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused a lot less on &#039;hypertext&#039; and less on navigating deep information. It was designed around the idea of paper. It implemented existing metaphors and adapted them to the PC. Alto people came from a culture that really valued printed paper. It is important and interesting to make a note of technology that falls away or becomes obsolete.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19063</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19063"/>
		<updated>2014-04-24T14:18:24Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Class review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries/researchers wanted the computer to be and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? It should be noted that features that were easier to implement using simple mechanisms were carried forward, whereas those that demanded more complex systems, or that were found to add little value in the near future, were given lower priority. In this context, the following observations were made: (1) a truly distributed computational infrastructure makes sense only when we have something to distribute; (2) use cases drive large distributed systems; a good example is the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Engelbart died on July 2, 2013.&lt;br /&gt;
* Engelbart himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time.&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than that of commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided Engelbart&#039;s team. They almost had a GUI; rather, they had what we would today call a virtual console, with a few extras on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to do so explicitly. The resources shared were those it fundamentally made sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“Mother of all demos” is the nickname for Engelbart&#039;s 1968 demonstration, which showed how computers could help humans become smarter.&lt;br /&gt;
&lt;br /&gt;
*More interesting in this work is that:&lt;br /&gt;
&amp;quot;His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers.&amp;quot; This idea is embodied in the NLS (&amp;quot;oN-Line System&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program supporting curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS; more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Application&lt;br /&gt;
! Alto&lt;br /&gt;
! NLS&lt;br /&gt;
|-&lt;br /&gt;
| Text Processing&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Drawing (Graphics)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Programming Environments&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Documentation&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Email (Communication)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Reading&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| WYSIWYG&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Hardware Design&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
NLS and Alto both had text processing, drawing, programming environments, and some form of email (communication). The Alto had WYSIWYG everything.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe. NLS &#039;resource sharing&#039; was based around the mainframe. So essentially, NLS didn&#039;t have resource sharing since it was just one machine (i.e., everything is already shared). The Alto had the idea of sharing via the network (e.g., a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused a lot less on &#039;hypertext&#039; and less on navigating deep information. It was designed around the idea of paper. It implemented existing metaphors and adapted them to the PC. Alto people came from a culture that really valued printed paper. It is important and interesting to make a note of technology that falls away or becomes obsolete.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19062</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19062"/>
		<updated>2014-04-24T14:13:01Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Alto vs NLS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries/researchers wanted the computer to be and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? It should be noted that features that were easier to implement using simple mechanisms were carried forward, whereas those that demanded more complex systems, or that were found to add little value in the near future, were given lower priority. In this context, the following observations were made: (1) a truly distributed computational infrastructure makes sense only when we have something to distribute; (2) use cases drive large distributed systems; a good example is the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Engelbart died on July 2, 2013.&lt;br /&gt;
* Engelbart himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time.&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than that of commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided Engelbart&#039;s team. They almost had a GUI; rather, they had what we would today call a virtual console, with a few extras on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to do so explicitly. The resources shared were those it fundamentally made sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“Mother of all demos” is the nickname for Engelbart&#039;s 1968 demonstration, which showed how computers could help humans become smarter.&lt;br /&gt;
&lt;br /&gt;
*More interesting in this work is that:&lt;br /&gt;
&amp;quot;His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers.&amp;quot; This idea is embodied in the NLS (&amp;quot;oN-Line System&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program supporting curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS; more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Application&lt;br /&gt;
! Alto&lt;br /&gt;
! NLS&lt;br /&gt;
|-&lt;br /&gt;
| Text Processing&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Drawing (Graphics)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Programming Environments&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Documentation&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Email (Communication)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Reading&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| WYSIWYG&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Hardware Design&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
NLS and Alto both had text processing, drawing, programming environments, and some form of email (communication). The Alto had WYSIWYG everything.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe. NLS &#039;resource sharing&#039; was based around the mainframe. So essentially, NLS didn&#039;t have resource sharing since it was just one machine (i.e., everything is already shared). The Alto had the idea of sharing via the network (e.g., a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused a lot less on &#039;hypertext&#039; and less on navigating deep information. It was designed around the idea of paper. It implemented existing metaphors and adapted them to the PC. Alto people came from a culture that really valued printed paper. It is important and interesting to make a note of technology that falls away or becomes obsolete.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19061</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19061"/>
		<updated>2014-04-24T14:08:14Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Alto vs NLS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries/researchers wanted the computer to be and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? It should be noted that features that were easier to implement using simple mechanisms were carried forward, whereas those that demanded more complex systems, or that were found to add little value in the near future, were given lower priority. In this context, the following observations were made: (1) a truly distributed computational infrastructure makes sense only when we have something to distribute; (2) use cases drive large distributed systems; a good example is the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Engelbart died on July 2, 2013.&lt;br /&gt;
* Engelbart himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time.&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than that of commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided Engelbart&#039;s team. They almost had a GUI; rather, they had what we would today call a virtual console, with a few extras on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to do so explicitly. The resources shared were those it fundamentally made sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“Mother of all demos” is the nickname for Engelbart&#039;s 1968 demonstration, which showed how computers could help humans become smarter.&lt;br /&gt;
&lt;br /&gt;
*More interesting in this work is that:&lt;br /&gt;
&amp;quot;His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers.&amp;quot; This idea is embodied in the NLS (&amp;quot;oN-Line System&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program supporting curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS; more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Application&lt;br /&gt;
! Alto&lt;br /&gt;
! NLS&lt;br /&gt;
|-&lt;br /&gt;
| Text Processing&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Drawing (Graphics)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Programming Environments&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Documentation&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Email (Communication)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Reading&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| WYSIWYG&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Hardware Design&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
NLS and Alto both had text processing, drawing, programming environments, some form of email (communication). Alto had WYSIWYG everything.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe. NLS &#039;resource sharing&#039; was based around the mainframe; the Alto shared via the network (e.g., a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused far less on &#039;hypertext&#039; and on navigating deep information. It was designed around the idea of paper, implementing existing metaphors and adapting them to the PC. The Alto people came from a culture that really valued printed paper. It is important and interesting to note which technologies fall away or become obsolete.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19060</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19060"/>
		<updated>2014-04-24T14:06:48Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Alto vs NLS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries/researchers wanted the computer to be and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? It should be noted that features that were easier to implement using simple mechanisms were carried forward, whereas ones that demanded more complex systems, or that were found to add little value in the near future, were pushed down in priority. In the same context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, a good example being the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Doug died Jul 2 2013&lt;br /&gt;
* Doug himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time.&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than that of commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided Engelbart&#039;s team. They almost had a GUI; rather, they had what we would today call a virtual console, with a few things layered on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to share explicitly. The resources shared were those it fundamentally made sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“Mother of all Demos” is the nickname for Engelbart&#039;s 1968 demonstration, which showed how computers could help humans become smarter. &lt;br /&gt;
&lt;br /&gt;
*What is more interesting about this work:&lt;br /&gt;
&amp;quot;His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers.&amp;quot; This idea is embodied in the NLS (&amp;quot;oN-Line System&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with support for curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS, and more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
NLS and Alto both had text processing, drawing, programming environments, some form of email (communication). Alto had WYSIWYG everything.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe. NLS &#039;resource sharing&#039; was based around the mainframe; the Alto shared via the network (e.g., a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused far less on &#039;hypertext&#039; and on navigating deep information. It was designed around the idea of paper, implementing existing metaphors and adapting them to the PC. The Alto people came from a culture that really valued printed paper. It is important and interesting to note which technologies fall away or become obsolete.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Application&lt;br /&gt;
! Alto&lt;br /&gt;
! NLS&lt;br /&gt;
|-&lt;br /&gt;
| Text Processing&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Drawing (Graphics)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Programming Environments&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Documentation&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Email (Communication)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Reading&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| WYSIWYG&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Hardware Design&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19059</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19059"/>
		<updated>2014-04-24T14:06:26Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Alto vs NLS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries/researchers wanted the computer to be and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? It should be noted that features that were easier to implement using simple mechanisms were carried forward, whereas ones that demanded more complex systems, or that were found to add little value in the near future, were pushed down in priority. In the same context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, a good example being the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Doug died Jul 2 2013&lt;br /&gt;
* Doug himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time.&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than that of commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided Engelbart&#039;s team. They almost had a GUI; rather, they had what we would today call a virtual console, with a few things layered on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to share explicitly. The resources shared were those it fundamentally made sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“Mother of all Demos” is the nickname for Engelbart&#039;s 1968 demonstration, which showed how computers could help humans become smarter. &lt;br /&gt;
&lt;br /&gt;
*What is more interesting about this work:&lt;br /&gt;
&amp;quot;His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers.&amp;quot; This idea is embodied in the NLS (&amp;quot;oN-Line System&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with support for curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS, and more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
NLS and Alto both had text processing, drawing, programming environments, some form of email (communication). Alto had WYSIWYG everything.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe. NLS &#039;resource sharing&#039; was based around the mainframe; the Alto shared via the network (e.g., a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused far less on &#039;hypertext&#039; and on navigating deep information. It was designed around the idea of paper, implementing existing metaphors and adapting them to the PC. The Alto people came from a culture that really valued printed paper. It is important and interesting to note which technologies fall away or become obsolete.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Application&lt;br /&gt;
! Alto&lt;br /&gt;
! NLS&lt;br /&gt;
|-&lt;br /&gt;
| Text Processing&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Drawing (Graphics)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Programming Environments&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Documentation&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Email (Communication)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Reading&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| WYSIWYG&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Hardware Design&lt;br /&gt;
| ✔&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19058</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19058"/>
		<updated>2014-04-24T14:05:57Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Alto vs NLS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries/researchers wanted the computer to be and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? It should be noted that features that were easier to implement using simple mechanisms were carried forward, whereas ones that demanded more complex systems, or that were found to add little value in the near future, were pushed down in priority. In the same context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, a good example being the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Doug died Jul 2 2013&lt;br /&gt;
* Doug himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time.&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than that of commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided Engelbart&#039;s team. They almost had a GUI; rather, they had what we would today call a virtual console, with a few things layered on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to share explicitly. The resources shared were those it fundamentally made sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“Mother of all Demos” is the nickname for Engelbart&#039;s 1968 demonstration, which showed how computers could help humans become smarter. &lt;br /&gt;
&lt;br /&gt;
*What is more interesting about this work:&lt;br /&gt;
&amp;quot;His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers.&amp;quot; This idea is embodied in the NLS (&amp;quot;oN-Line System&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with support for curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS, and more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
NLS and Alto both had text processing, drawing, programming environments, some form of email (communication). Alto had WYSIWYG everything.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe. NLS &#039;resource sharing&#039; was based around the mainframe; the Alto shared via the network (e.g., a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused far less on &#039;hypertext&#039; and on navigating deep information. It was designed around the idea of paper, implementing existing metaphors and adapting them to the PC. The Alto people came from a culture that really valued printed paper. It is important and interesting to note which technologies fall away or become obsolete.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Application&lt;br /&gt;
! Alto&lt;br /&gt;
! NLS&lt;br /&gt;
|-&lt;br /&gt;
| Text Processing&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Drawing (Graphics)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Programming Environments&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Documentation&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Email (Communication)&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| Reading&lt;br /&gt;
| ✔&lt;br /&gt;
| ✔&lt;br /&gt;
|-&lt;br /&gt;
| WYSIWYG&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Hardware Design&lt;br /&gt;
| ✔&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19057</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19057"/>
		<updated>2014-04-24T13:56:28Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Alto vs NLS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries and researchers wanted the computer to be, and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? It is worth noting that features that were easy to implement using simple mechanisms were carried forward, whereas those that demanded more complex systems, or that were judged to add little value in the near future, were deprioritized. In the same context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, a good example being the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* The chorded keyboard is unfortunately obscure, partly because attendees balked at the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Doug Engelbart died July 2, 2013&lt;br /&gt;
* Doug himself called it an “online system”, in contrast to the offline composition of code on punch cards that was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time,&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided Engelbart&#039;s team. NLS almost had a GUI; rather, it had what we would today call a virtual console, with a few extras on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to do so explicitly. The resources shared were the ones it fundamentally made sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“The Mother of All Demos” is the nickname given to Engelbart&#039;s demonstration of how computers could help humans become smarter. &lt;br /&gt;
&lt;br /&gt;
* What is most interesting in this work:&lt;br /&gt;
His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunching numbers. This idea is represented in NLS, the “oN-Line System”.&lt;br /&gt;
&lt;br /&gt;
* Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to employ the practical use of :&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with curves and arcs for drawing&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS; more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
NLS and the Alto both had text processing, drawing, programming environments, and some form of email (communication). The Alto made everything WYSIWYG.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe. NLS &#039;resource sharing&#039; was based around the mainframe; the Alto shared via the network (e.g. a printer server).&lt;br /&gt;
&lt;br /&gt;
The Alto focused far less on &#039;hypertext&#039; and on navigating deep layers of information. It was designed around the idea of paper: it took existing metaphors and adapted them to the PC. The Alto&#039;s designers came from a culture that highly valued printed paper. It is important and interesting to note which technologies fall away or become obsolete.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19056</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19056"/>
		<updated>2014-04-24T13:54:50Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Alto vs NLS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries and researchers wanted the computer to be, and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? It is worth noting that features that were easy to implement using simple mechanisms were carried forward, whereas those that demanded more complex systems, or that were judged to add little value in the near future, were deprioritized. In the same context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, a good example being the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* The chorded keyboard is unfortunately obscure, partly because attendees balked at the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage.&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Doug Engelbart died July 2, 2013&lt;br /&gt;
* Doug himself called it an “online system”, in contrast to the offline composition of code on punch cards that was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time,&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided Engelbart&#039;s team. NLS almost had a GUI; rather, it had what we would today call a virtual console, with a few extras on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to do so explicitly. The resources shared were the ones it fundamentally made sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“The Mother of All Demos” is the nickname given to Engelbart&#039;s demonstration of how computers could help humans become smarter. &lt;br /&gt;
&lt;br /&gt;
* What is most interesting in this work:&lt;br /&gt;
His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunching numbers. This idea is represented in NLS, the “oN-Line System”.&lt;br /&gt;
&lt;br /&gt;
* Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to employ the practical use of :&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with curves and arcs for drawing&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS; more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
NLS and the Alto both had text processing, drawing, programming environments, and some form of email (communication). The Alto made everything WYSIWYG.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe. NLS &#039;resource sharing&#039; was based around the mainframe; the Alto shared via the network (e.g. a printer server).&lt;br /&gt;
&lt;br /&gt;
The Alto focused far less on &#039;hypertext&#039; and on navigating deep layers of information. It used the paper metaphor: it took existing metaphors and adapted them to the PC. The Alto&#039;s designers came from a culture that highly valued printed paper. It is important and interesting to note which technologies fall away or become obsolete.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=19055</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=19055"/>
		<updated>2014-04-24T13:52:23Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Printer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==The Alto (Jan. 16)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/alto.pdf Thacker et al., &amp;quot;Alto: A Personal computer&amp;quot; (1979)]  ([https://archive.org/details/bitsavers_xeroxparcttoAPersonalComputer_6560658 archive.org])&lt;br /&gt;
&lt;br /&gt;
Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
====CPU====&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that greater focus was put on I/O capabilities and less on computational power (arithmetic, etc.). There were two levels of task switching: the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority one) was dedicated to the user. This task actually ran a virtualized machine for BCPL (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
====Memory====&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, much as memory above 4GB is inaccessible today on 32-bit systems without similar tricks.&lt;br /&gt;
&lt;br /&gt;
====Task Switching====&lt;br /&gt;
&lt;br /&gt;
One thing that was confusing was that they refer to tasks both as the 16 fixed hardware tasks and the many software tasks that could be multiplexed onto the lowest-priority of those hardware tasks. In either case, task switching was cooperative; until a task gave up control by running a specific instruction, no other task could run. From a modern perspective this looks like a major security problem, since malicious software could simply never relinquish the CPU. However, the fact that hardware was first-class in this sense (with full access to the CPU and memory) made the hardware simpler because much of the complexity could be done in software. Perhaps the first hints of what we now think of as drivers?&lt;br /&gt;
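The cooperative model described above can be sketched in a few lines. This is an illustrative sketch only, with Python generators standing in for tasks; it is not a model of the actual Alto microcode.&lt;br /&gt;

```python
# Minimal sketch of cooperative task switching: a task runs until it
# explicitly yields control, and a misbehaving task that never yields
# would starve everyone else. Illustrative only, not the Alto's scheme.

from collections import deque

def run_cooperative(tasks):
    """Round-robin over generator-based tasks; each `yield` plays the
    role of the 'give up control' instruction."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run until the task yields
            ready.append(task)         # re-queue behind the others
        except StopIteration:
            pass                       # task finished
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"            # voluntarily yield the CPU

# run_cooperative([worker("disk", 2), worker("user", 2)])
# interleaves as: disk:0, user:0, disk:1, user:1
```

Note that if a worker never executed its `yield`, the loop would spin on that one task forever, which is exactly the security concern raised above.&lt;br /&gt;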
&lt;br /&gt;
====Disk and Filesystem====&lt;br /&gt;
&lt;br /&gt;
To make use of the disk controller, commands such as read, write, truncate, and delete were made available. To reduce the risk of global damage, structural information was saved in a label on each page. A hints mechanism was also available: the directory gave a hint of where a file resided on disk. File integrity was checked using the seal bit and the label.&lt;br /&gt;
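A rough sketch of the per-page label idea follows; the field names and the seal constant are hypothetical illustrations, since the actual Alto label format is not described in these notes.&lt;br /&gt;

```python
# Sketch of per-page labels for damage containment: each disk page
# carries a label recording which file and page it belongs to, plus a
# seal value, so a misdirected write can be detected on read-back.
# All names here are hypothetical, not the Alto's actual layout.

from dataclasses import dataclass

SEAL = 0xA5A5  # hypothetical constant marking a valid label

@dataclass
class PageLabel:
    file_id: int
    page_no: int
    seal: int = SEAL

def check_page(label: PageLabel, expected_file: int, expected_page: int) -> bool:
    """Verify a page really belongs where the directory hint says it does."""
    return (label.seal == SEAL
            and label.file_id == expected_file
            and label.page_no == expected_page)
```

Because every page is self-describing, a corrupted directory hint costs only a failed check and a rescan, not global damage.&lt;br /&gt;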
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
Although the original motive of the Alto as a personal computer was to serve the needs of a single user, it was recognized that communicating with other Altos/computers would facilitate resource sharing – for collaboration and for economic reasons. The main design objectives for the computer network connecting personal computers (Altos) were:&lt;br /&gt;
&lt;br /&gt;
Data transmission speed: bandwidth should at least match the memory bus speed, giving the end user the consistent impression that resources accessed over the network have the same latency as resources accessed within the computer&lt;br /&gt;
&lt;br /&gt;
Size of network: the capability to connect a large number of nodes together&lt;br /&gt;
&lt;br /&gt;
Reliability: Once the user starts to use resources/service over a network it is vital to ensure that the network is reliable enough so that the user gets the quality of service required.&lt;br /&gt;
&lt;br /&gt;
The Alto uses a general packet transport system, which can be thought of as a set of standard communication protocols facilitating interoperability. &lt;br /&gt;
&lt;br /&gt;
The key element enabling the communication system between the Alto and other computers was the Ethernet, a layer-2 protocol and mechanism developed in-house at Xerox by Robert Metcalfe et al. The Ethernet was a broadcast, packet-switched network with a bandwidth of 3 Mbit/s; it could connect 256 computers together and allowed a distance of up to 1 km between two connected nodes. Another important aspect of Ethernet was that new nodes/computers could be added, removed, or powered on/off without disturbing existing network communications. Since Ethernet offered only best-effort service, with no guarantee of error-free delivery, a hierarchy of layered communication protocols was implemented in the Alto to achieve reliable communication over it.&lt;br /&gt;
&lt;br /&gt;
Alto had the capability to act as a gateway connecting different networks together. Xerox had a “Xerox Internet” consisting of several hundred computers, 25 networks and 20 gateways providing internet service back in 1979. &lt;br /&gt;
&lt;br /&gt;
The Ethernet communication system had two components: the Ethernet controller and the transceiver. The Ethernet controller performed the encoding/decoding, buffering, and micromachine-interfacing functions, whereas the transceiver dealt with the transmission and reception of bits, operating in half-duplex mode. &lt;br /&gt;
&lt;br /&gt;
One important difference in the design of the Ethernet controller task, as opposed to the tasks for the display and the disk, was that there were no periodic events to wake this task up; instead, an S-group instruction was used to set a flip-flop in the Ethernet hardware, which woke the Ethernet controller task. The Ethernet also used an interrupt-based mechanism to indicate completion, since packet reception/transmission happens asynchronously. The Ethernet microcode implements a packet-filtering mechanism that accepts (1) packets destined for the host and (2) broadcast packets. It can also operate in a promiscuous mode, with the host address set to zero, receiving all packets; this can be used for debugging purposes.&lt;br /&gt;
&lt;br /&gt;
Ethernet had no security mechanism built into it. Since the Ethernet was a single collision domain, an exponential backoff algorithm was implemented to handle collisions (which occur when two Ethernet transmitters try to use the ether at the same time).&lt;br /&gt;
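A minimal sketch of binary exponential backoff of the kind used on such a shared medium; the function names, the retry cap, and the slot-time abstraction are illustrative assumptions, not details from the Alto paper.&lt;br /&gt;

```python
import random

# Sketch of truncated binary exponential backoff: after the n-th
# consecutive collision, wait a random number of slot times chosen
# uniformly from [0, 2**n - 1], with the exponent capped.

def backoff_slots(collisions: int, max_exponent: int = 10) -> int:
    """Number of slot times to wait before retransmitting."""
    n = min(collisions, max_exponent)
    return random.randrange(2 ** n)  # uniform in [0, 2**n - 1]

def send(transmit, max_attempts: int = 16) -> bool:
    """Try to transmit, backing off exponentially after each collision.
    `transmit` returns False when a collision is detected."""
    for attempt in range(max_attempts):
        if transmit():
            return True
        wait = backoff_slots(attempt + 1)
        # On real hardware we would idle for `wait` slot times here
        # before listening to the ether and retrying.
    return False                     # give up after too many collisions
```

Randomizing the wait makes it unlikely that the two colliding transmitters retry at the same instant, and doubling the range on each collision adapts to how congested the ether is.&lt;br /&gt;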
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provide us in a display sense, constantly referencing an 8.5 by 11 inch piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 to 1000 black-or-white bits per inch of display (i.e. 500 to 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi, which still allowed them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found not to be objectionable. Interestingly, we would find this objectionable today, most likely because we are spoiled by the sheer speed of today&#039;s computers, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
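The &#039;half the memory&#039; figure checks out as a back-of-the-envelope calculation, assuming (our assumption, not stated in the notes) a display of roughly page size at the stated 70 dpi with one bit per pixel:&lt;br /&gt;

```python
# Rough check of the claim that the bitmap display consumed about half
# of the Alto's 64K words of memory. The page-sized 70 dpi display and
# 1 bit per pixel are assumptions for the estimate.

DPI = 70
WIDTH_IN, HEIGHT_IN = 8.5, 11.0
WORD_BITS = 16
TOTAL_WORDS = 64 * 1024          # 64K 16-bit words

pixels = int(WIDTH_IN * DPI) * int(HEIGHT_IN * DPI)   # 1 bit each
words = pixels // WORD_BITS                            # frame-buffer words
fraction = words / TOTAL_WORDS

print(f"{words} words for the frame buffer, {fraction:.0%} of memory")
# about 28,600 words, i.e. in the neighbourhood of half of 64K words
```

So a page-like bitmap alone eats on the order of 30K of the 64K words, which explains why the choice was so striking.&lt;br /&gt;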
&lt;br /&gt;
Another interesting point is that the authors thought it beneficial to be able to access display memory directly rather than using conventional frame-buffer organizations. While we are unsure of what they meant by conventional frame-buffer organizations, it is interesting to note that frame buffers are what we use for our displays today.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard mouse from Apple today, which is 1300 dpi) and had three buttons (one of the standard configurations of mice produced today), whose state was stored as 8 bits in memory. They were already using different mouse cursors (i.e., different pointer images on screen). The really interesting point here is that the design outlined in the paper is so similar to designs we still use today. The only real divergence was the introduction of optical mice, although that did not altogether halt the use of non-optical mice. Today, we simply have more flexibility in how we design mice (e.g., a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
They state that the printer should print, in one second, an 8.5 by 11 inch page defined at 350 dots/inch (roughly 4000 horizontal scan lines of 3000 dots each). Ironically, this is below even what they had originally wanted for the Alto&#039;s display. However, they did not have enough memory to do this and had to work around it using techniques such as an incremental algorithm and reducing the number of scan lines. There was only enough memory to print in bands, where each band was loaded in after the previous one was printed. We were disappointed that they did not actually discuss the hardware implementation of the printer, only the software controller. However, dividing the printer&#039;s memory requirements between the hardware itself and the computer was quite a modern idea at the time, and it still is.&lt;br /&gt;
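The banding workaround can be sketched as follows; the function names and band size are hypothetical, illustrating only the general technique of reusing one band-sized buffer instead of holding a full-page bitmap.&lt;br /&gt;

```python
# Sketch of banded rendering: with too little memory for a full page
# bitmap, rasterize and ship one horizontal band at a time, reusing a
# single band-sized buffer. Names and sizes are hypothetical.

def print_page(render_band, emit_band, page_lines=4000, band_lines=128):
    """Render `page_lines` scan lines in chunks of `band_lines`.
    `render_band(start, count)` rasterizes the given scan lines;
    `emit_band(band)` sends them to the printer."""
    done = 0
    while done < page_lines:
        lines = min(band_lines, page_lines - done)
        band = render_band(done, lines)  # rasterize [done, done + lines)
        emit_band(band)                  # ship this band to the printer
        done += lines                    # then move on to the next band
    return done
```

The trade-off is timing: each band must be rendered before the print engine needs it, which is why the notes mention an incremental algorithm.&lt;br /&gt;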
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The authors make a passing mention of having a tablet to draw on. However, they state that no one really liked the tablet, as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
A recurring theme was the lack of memory to implement what they had originally envisioned.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;br /&gt;
&lt;br /&gt;
=== Emulation ===&lt;br /&gt;
A notable feature is that the Alto implemented a BCPL emulator in the PROM microstore. Other emulators were available, but they were loaded in RAM. BCPL was used as the main implementation language for the computer&#039;s applications. Very little assembly was used.&lt;br /&gt;
&lt;br /&gt;
=== Programming Environments ===&lt;br /&gt;
The Alto ran the gamut of available programming environments. A conventional toolchain implemented in BCPL was offered (i.e. compiler, linker, debugger, file manager, etc.). Interactive programming environments such as Smalltalk and Interlisp were also available; these fell prey to the Alto&#039;s limited main memory (64K words) and suffered crippling performance issues.&lt;br /&gt;
&lt;br /&gt;
The only standardized facilities for programming environments to use were the file system and the communication protocols. All other hardware had to be accessed using custom methods.&lt;br /&gt;
&lt;br /&gt;
=== Personal Applications ===&lt;br /&gt;
Applications made use of the display, mouse, and keyboard. They were mostly involved with document production. For example, there was a text editor where the user could specify formatting and typefaces. The PC also helped facilitate and automate aspects of logic board design and assembling.&lt;br /&gt;
&lt;br /&gt;
=== Communication in applications ===&lt;br /&gt;
Most applications were designed with the assumption that the computer would exploit networked resources. For example, printing would be handled by a printing server, and file storage could be local or distributed. The Alto made use of existing services too: its clock was set by a &#039;time of day&#039; service, and it could be bootstrapped over Ethernet.&lt;br /&gt;
&lt;br /&gt;
Communication was also used in applications in novel ways. For example, the debugger mentioned above was network-aware; it could help programmers debug software remotely.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=19043</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=19043"/>
		<updated>2014-04-21T10:18:40Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Mouse */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==The Alto (Jan. 16)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/alto.pdf Thacker et al., &amp;quot;Alto: A Personal computer&amp;quot; (1979)]  ([https://archive.org/details/bitsavers_xeroxparcttoAPersonalComputer_6560658 archive.org])&lt;br /&gt;
&lt;br /&gt;
Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
====CPU====&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that greater focus was put on I/O capabilities and less on computational power (arithmetic, etc.). There were two levels of task switching: the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority one) was dedicated to the user. This task actually ran a virtualized machine for BCPL (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
====Memory====&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, much as memory above 4GB is inaccessible today on 32-bit systems without similar tricks.&lt;br /&gt;
&lt;br /&gt;
====Task Switching====&lt;br /&gt;
&lt;br /&gt;
One thing that was confusing was that they refer to tasks both as the 16 fixed hardware tasks and the many software tasks that could be multiplexed onto the lowest-priority of those hardware tasks. In either case, task switching was cooperative; until a task gave up control by running a specific instruction, no other task could run. From a modern perspective this looks like a major security problem, since malicious software could simply never relinquish the CPU. However, the fact that hardware was first-class in this sense (with full access to the CPU and memory) made the hardware simpler because much of the complexity could be done in software. Perhaps the first hints of what we now think of as drivers?&lt;br /&gt;
&lt;br /&gt;
====Disk and Filesystem====&lt;br /&gt;
&lt;br /&gt;
The disk controller made commands such as read, write, truncate, and delete available. To reduce the risk of global damage, structural information was saved in a label on each page. A hints mechanism was also available: the directory recorded hints about where a file resided on disk, and file integrity was checked using a seal bit and the label.&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
Although the original motive for the Alto as a personal computer was to serve the needs of a single user, it was recognized that communicating with other Altos and computers would facilitate resource sharing, for both collaboration and economic reasons. The main design objectives for the computer network connecting personal computers (Altos) were:&lt;br /&gt;
&lt;br /&gt;
Data transmission speed: bandwidth should at least approach the memory bus speed, so that the end user perceives resources accessed over the network as having the same latency as resources accessed within the computer&lt;br /&gt;
&lt;br /&gt;
Size of network: capability to connect a large number of nodes together&lt;br /&gt;
&lt;br /&gt;
Reliability: once users start to use resources and services over a network, it is vital that the network be reliable enough to deliver the required quality of service.&lt;br /&gt;
&lt;br /&gt;
The Alto uses a general packet transport system, which can be thought of as a set of standard communication protocols that facilitate interoperability. &lt;br /&gt;
&lt;br /&gt;
The key element enabling communication between the Alto and other computers was the Ethernet, a layer-2 protocol and mechanism developed in-house at Xerox by Robert Metcalfe et al. The Ethernet was a broadcast, packet-switched network with a bandwidth of 3 Mbit/s that could connect up to 256 computers, with a distance of up to 1 km between two connected nodes. Another important aspect of the Ethernet was that nodes could be added, removed, powered on, or powered off without disturbing existing network communications. Since the Ethernet offered only best-effort service, with no guarantee of error-free delivery, a hierarchy of layered communication protocols was implemented on the Alto to achieve reliable communication over it.&lt;br /&gt;
&lt;br /&gt;
The Alto could also act as a gateway connecting different networks together. By 1979, Xerox had a “Xerox Internet” consisting of several hundred computers, 25 networks, and 20 gateways providing internet service. &lt;br /&gt;
&lt;br /&gt;
The Ethernet communication system had two components: the Ethernet controller and the transceiver. The controller performed the encoding/decoding, buffering, and micromachine-interfacing functions, whereas the transceiver, which operated in half-duplex mode, dealt with the transmission and reception of bits. &lt;br /&gt;
&lt;br /&gt;
One important difference in the design of the Ethernet controller task, as opposed to the tasks for the display and disk, was that there were no periodic events to wake it up; instead, an S-group instruction was used to set a flip-flop in the Ethernet hardware, which woke the Ethernet controller task. The Ethernet also used an interrupt-based mechanism to indicate completion, since packet reception and transmission happen asynchronously. The Ethernet microcode implemented a packet-filtering mechanism that accepted (1) packets destined for the host and (2) broadcast packets. It could also operate in a promiscuous mode, with the host address set to zero so that all packets were received, which could be used for debugging.&lt;br /&gt;
&lt;br /&gt;
The Ethernet had no security mechanisms built into it. Since the Ethernet was a single collision domain, an exponential backoff algorithm was implemented to recover from collisions (which occur when two Ethernet transmitters try to use the ether at the same time).&lt;br /&gt;
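The backoff scheme can be sketched as follows. This is a generic binary exponential backoff in Python; the function name and the cap on the exponent are illustrative choices, not details from the Alto paper.&lt;br /&gt;

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """Binary exponential backoff (generic sketch, parameters illustrative).

    After the n-th successive collision, a transmitter waits a random
    number of slot times drawn from [0, 2**n - 1], doubling the mean
    wait each time so that colliding senders spread out; the exponent
    is capped so the wait does not grow without bound."""
    n = min(attempt, max_exponent)
    return random.randint(0, 2 ** n - 1)
```

Because the window doubles with each collision, repeated colliders quickly desynchronize and the ether clears.&lt;br /&gt;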
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provide us in a display sense, constantly referencing an 8.5-by-11-inch piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500-1000 black-or-white bits per inch of display (i.e., 500-1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi, which still allowed them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found not to be objectionable. Interestingly, we would find this objectionable today, most likely because we are spoiled by the sheer speed of computers now, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
Another interesting point was that the authors thought it was beneficial that they could access display memory directly rather than using conventional frame buffer organizations. While we are unsure of what they meant by conventional frame buffer organizations, it is interesting to note that frame buffers are what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard Apple mouse at 1300 dpi) and had three buttons (one of the standard configurations of mice produced today), whose state was stored as 8 bits in memory. They were already using different mouse cursors (i.e., different pointer images on screen). The really interesting point here is that the design outlined in the paper is so similar to designs we still use today. The only real divergence was the move to optical mice, although the introduction of optical mice did not altogether halt the use of non-optical mice. Today, we just have more flexibility in how we design mice (e.g., a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
They state that the printer should print, in one second, an 8.5-by-11-inch page defined at 350 dots/inch (roughly 4000 horizontal scan lines of 3000 dots each). Ironically enough, this exceeds even what they had wanted for the actual Alto display. However, they did not have enough memory to do this and had to work around it with techniques such as an incremental algorithm and a reduced number of scan lines. We were disappointed that they did not actually discuss the hardware implementation of the printer, only the software controller. However, dividing the memory requirements of printing between the printer hardware and the computer was quite a modern idea at the time, and still is.&lt;br /&gt;
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The author makes a passing mention of having a tablet to draw on. However, he stated that no one really liked the tablet, as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
A recurring theme was the lack of memory to implement what they had originally envisioned.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;br /&gt;
&lt;br /&gt;
=== Emulation ===&lt;br /&gt;
A notable feature is that the Alto implemented a BCPL emulator in the PROM microstore. Other emulators were available, but they were loaded in RAM. BCPL was used as the main implementation language for the computer&#039;s applications. Very little assembly was used.&lt;br /&gt;
&lt;br /&gt;
=== Programming Environments ===&lt;br /&gt;
The Alto ran the gamut of available programming environments. A conventional toolchain implemented in BCPL was offered (i.e., compiler, linker, debugger, file manager, etc.). Interactive programming environments such as Smalltalk and Interlisp were also available, but these fell prey to the Alto&#039;s limited main memory (64K words) and suffered crippling performance issues.&lt;br /&gt;
&lt;br /&gt;
The only standardized facilities available to programming environments were the file system and the communication protocols. All other hardware had to be accessed using custom methods.&lt;br /&gt;
&lt;br /&gt;
=== Personal Applications ===&lt;br /&gt;
Applications made use of the display, mouse, and keyboard. They were mostly involved with document production. For example, there was a text editor in which the user could specify formatting and typefaces. The Alto also helped facilitate and automate aspects of logic board design and assembly.&lt;br /&gt;
&lt;br /&gt;
=== Communication in applications ===&lt;br /&gt;
Most applications were designed with the assumption that the computer would exploit networked resources. For example, printing would be handled by a print server, and file storage could be local or distributed. The Alto made use of existing services too: its clock was set by a &#039;time of day&#039; service, and it could be bootstrapped over Ethernet.&lt;br /&gt;
&lt;br /&gt;
Communication in applications was also used in new and novel ways. For example, the debugger mentioned above was network aware. It could help programmers debug software remotely.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=19042</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=19042"/>
		<updated>2014-04-21T10:13:28Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Mouse */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==The Alto (Jan. 16)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/alto.pdf Thacker et al., &amp;quot;Alto: A Personal computer&amp;quot; (1979)]  ([https://archive.org/details/bitsavers_xeroxparcttoAPersonalComputer_6560658 archive.org])&lt;br /&gt;
&lt;br /&gt;
Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
====CPU====&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on I/O capabilities and less on computational power (arithmetic, etc.). There were two levels of task switching: the CPU provided sixteen fixed-priority tasks with hardware interrupts, each permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority one) was dedicated to the user. This task actually ran a virtual machine for BCPL (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
====Memory====&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K 16-bit words of memory and eventually grew to 256K words. However, the higher memory was only accessible through special tricks, much as memory above 4 GB is not accessible on 32-bit systems today without similar tricks.&lt;br /&gt;
&lt;br /&gt;
====Task Switching====&lt;br /&gt;
&lt;br /&gt;
One confusing point is that &amp;quot;task&amp;quot; refers both to the 16 fixed hardware tasks and to the many software tasks that could be multiplexed onto the lowest-priority of those hardware tasks. In either case, task switching was cooperative: until a task gave up control by executing a specific instruction, no other task could run. From a modern perspective this looks like a major security problem, since malicious software could simply never relinquish the CPU. However, the fact that hardware was first-class in this sense (with full access to the CPU and memory) made the hardware simpler, because much of the complexity could be handled in software. Perhaps the first hints of what we now think of as drivers?&lt;br /&gt;
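The cooperative model can be illustrated with a small Python sketch that uses generators as tasks; the scheduler, function names, and trace format here are hypothetical, not taken from the Alto.&lt;br /&gt;

```python
from collections import deque

def cooperative_run(tasks):
    """Round-robin cooperative scheduler (hypothetical sketch).

    Each task is a generator that runs until it explicitly yields,
    mirroring the Alto rule that no other task runs until the current
    one gives up control; a task that never yields starves the rest."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))  # run until the task yields
            ready.append(task)        # re-queue it after it yields
        except StopIteration:
            pass                      # the task has finished
    return trace

def worker(name, steps):
    # A toy task that voluntarily yields after each unit of work.
    for i in range(steps):
        yield f"{name}:{i}"
```

A task that loops without yielding would keep the CPU forever, which is exactly the modern security concern noted above.&lt;br /&gt;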
&lt;br /&gt;
====Disk and Filesystem====&lt;br /&gt;
&lt;br /&gt;
The disk controller made commands such as read, write, truncate, and delete available. To reduce the risk of global damage, structural information was saved in a label on each page. A hints mechanism was also available: the directory recorded hints about where a file resided on disk, and file integrity was checked using a seal bit and the label.&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
Although the original motive for the Alto as a personal computer was to serve the needs of a single user, it was recognized that communicating with other Altos and computers would facilitate resource sharing, for both collaboration and economic reasons. The main design objectives for the computer network connecting personal computers (Altos) were:&lt;br /&gt;
&lt;br /&gt;
Data transmission speed: bandwidth should at least approach the memory bus speed, so that the end user perceives resources accessed over the network as having the same latency as resources accessed within the computer&lt;br /&gt;
&lt;br /&gt;
Size of network: capability to connect a large number of nodes together&lt;br /&gt;
&lt;br /&gt;
Reliability: once users start to use resources and services over a network, it is vital that the network be reliable enough to deliver the required quality of service.&lt;br /&gt;
&lt;br /&gt;
The Alto uses a general packet transport system, which can be thought of as a set of standard communication protocols that facilitate interoperability. &lt;br /&gt;
&lt;br /&gt;
The key element enabling communication between the Alto and other computers was the Ethernet, a layer-2 protocol and mechanism developed in-house at Xerox by Robert Metcalfe et al. The Ethernet was a broadcast, packet-switched network with a bandwidth of 3 Mbit/s that could connect up to 256 computers, with a distance of up to 1 km between two connected nodes. Another important aspect of the Ethernet was that nodes could be added, removed, powered on, or powered off without disturbing existing network communications. Since the Ethernet offered only best-effort service, with no guarantee of error-free delivery, a hierarchy of layered communication protocols was implemented on the Alto to achieve reliable communication over it.&lt;br /&gt;
&lt;br /&gt;
The Alto could also act as a gateway connecting different networks together. By 1979, Xerox had a “Xerox Internet” consisting of several hundred computers, 25 networks, and 20 gateways providing internet service. &lt;br /&gt;
&lt;br /&gt;
The Ethernet communication system had two components: the Ethernet controller and the transceiver. The controller performed the encoding/decoding, buffering, and micromachine-interfacing functions, whereas the transceiver, which operated in half-duplex mode, dealt with the transmission and reception of bits. &lt;br /&gt;
&lt;br /&gt;
One important difference in the design of the Ethernet controller task, as opposed to the tasks for the display and disk, was that there were no periodic events to wake it up; instead, an S-group instruction was used to set a flip-flop in the Ethernet hardware, which woke the Ethernet controller task. The Ethernet also used an interrupt-based mechanism to indicate completion, since packet reception and transmission happen asynchronously. The Ethernet microcode implemented a packet-filtering mechanism that accepted (1) packets destined for the host and (2) broadcast packets. It could also operate in a promiscuous mode, with the host address set to zero so that all packets were received, which could be used for debugging.&lt;br /&gt;
&lt;br /&gt;
The Ethernet had no security mechanisms built into it. Since the Ethernet was a single collision domain, an exponential backoff algorithm was implemented to recover from collisions (which occur when two Ethernet transmitters try to use the ether at the same time).&lt;br /&gt;
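The backoff scheme can be sketched as follows. This is a generic binary exponential backoff in Python; the function name and the cap on the exponent are illustrative choices, not details from the Alto paper.&lt;br /&gt;

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """Binary exponential backoff (generic sketch, parameters illustrative).

    After the n-th successive collision, a transmitter waits a random
    number of slot times drawn from [0, 2**n - 1], doubling the mean
    wait each time so that colliding senders spread out; the exponent
    is capped so the wait does not grow without bound."""
    n = min(attempt, max_exponent)
    return random.randint(0, 2 ** n - 1)
```

Because the window doubles with each collision, repeated colliders quickly desynchronize and the ether clears.&lt;br /&gt;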
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provide us in a display sense, constantly referencing an 8.5-by-11-inch piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500-1000 black-or-white bits per inch of display (i.e., 500-1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi, which still allowed them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found not to be objectionable. Interestingly, we would find this objectionable today, most likely because we are spoiled by the sheer speed of computers now, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
Another interesting point was that the authors thought it was beneficial that they could access display memory directly rather than using conventional frame buffer organizations. While we are unsure of what they meant by conventional frame buffer organizations, it is interesting to note that frame buffers are what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard Apple mouse at 1300 dpi) and had three buttons (one of the standard configurations of mice produced today), whose state was stored as 8 bits in memory. They were already using different mouse cursors (i.e., different pointer images on screen). The really interesting point here is that the design outlined in the paper is so similar to designs we still use today. The only real divergence was the move to optical mice, although the introduction of optical mice did not altogether halt the use of non-optical mice. Today, we just have more flexibility in how we design mice (e.g., a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
They state that the printer should print, in one second, an 8.5-by-11-inch page defined at 350 dots/inch (roughly 4000 horizontal scan lines of 3000 dots each). Ironically enough, this exceeds even what they had wanted for the actual Alto display. However, they did not have enough memory to do this and had to work around it with techniques such as an incremental algorithm and a reduced number of scan lines. We were disappointed that they did not actually discuss the hardware implementation of the printer, only the software controller. However, dividing the memory requirements of printing between the printer hardware and the computer was quite a modern idea at the time, and still is.&lt;br /&gt;
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The author makes a passing mention of having a tablet to draw on. However, he stated that no one really liked the tablet, as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
A recurring theme was the lack of memory to implement what they had originally envisioned.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;br /&gt;
&lt;br /&gt;
=== Emulation ===&lt;br /&gt;
A notable feature is that the Alto implemented a BCPL emulator in the PROM microstore. Other emulators were available, but they were loaded in RAM. BCPL was used as the main implementation language for the computer&#039;s applications. Very little assembly was used.&lt;br /&gt;
&lt;br /&gt;
=== Programming Environments ===&lt;br /&gt;
The Alto ran the gamut of available programming environments. A conventional toolchain implemented in BCPL was offered (i.e., compiler, linker, debugger, file manager, etc.). Interactive programming environments such as Smalltalk and Interlisp were also available, but these fell prey to the Alto&#039;s limited main memory (64K words) and suffered crippling performance issues.&lt;br /&gt;
&lt;br /&gt;
The only standardized facilities available to programming environments were the file system and the communication protocols. All other hardware had to be accessed using custom methods.&lt;br /&gt;
&lt;br /&gt;
=== Personal Applications ===&lt;br /&gt;
Applications made use of the display, mouse, and keyboard. They were mostly involved with document production. For example, there was a text editor in which the user could specify formatting and typefaces. The Alto also helped facilitate and automate aspects of logic board design and assembly.&lt;br /&gt;
&lt;br /&gt;
=== Communication in applications ===&lt;br /&gt;
Most applications were designed with the assumption that the computer would exploit networked resources. For example, printing would be handled by a print server, and file storage could be local or distributed. The Alto made use of existing services too: its clock was set by a &#039;time of day&#039; service, and it could be bootstrapped over Ethernet.&lt;br /&gt;
&lt;br /&gt;
Communication in applications was also used in new and novel ways. For example, the debugger mentioned above was network aware. It could help programmers debug software remotely.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=19041</id>
		<title>DistOS 2014W Lecture 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=19041"/>
		<updated>2014-04-21T10:08:51Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Group 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==The Early Internet (Jan. 14)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]  [http://dx.doi.org/10.1109/PROC.1972.8911 (DOI)]&lt;br /&gt;
* [https://archive.org/details/ComputerNetworks_TheHeraldsOfResourceSharing Computer Networks: The Heralds of Resource Sharing (1972)] - video&lt;br /&gt;
&lt;br /&gt;
== Questions to consider: ==&lt;br /&gt;
# What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&lt;br /&gt;
# What sort of resources were shared?  What resources are shared today?&lt;br /&gt;
# What network architecture did they envision?  Do we still have the same architecture?&lt;br /&gt;
# What surprised you about this paper?&lt;br /&gt;
# What was unclear?&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
The video was mostly a summary of Kahn&#039;s paper. It outlined that process migration could be done through the different zones of an air-traffic-control system. Back then, a &amp;quot;distributed OS&amp;quot; meant something different from what we normally think of now, because when the paper was written, many people would be remotely logging into a single machine. This type of infrastructure is very much like the cloud infrastructure that we talk about and see today.&lt;br /&gt;
&lt;br /&gt;
The Alto paper referenced Kahn&#039;s paper, and the Alto designers had the foresight to see that networks such as ARPANET would be necessary. However, there are still some questions that come up in discussion, such as:&lt;br /&gt;
* Would it be useful to have a co-processor responsible for maintaining shared resources even today? Would this be like the IMPs of ARPANET? &lt;br /&gt;
Today, computers are usually so fast that it doesn&#039;t really seem to matter. This is still interesting to ruminate on, though.&lt;br /&gt;
&lt;br /&gt;
=== Questions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purposes envisioned were:&lt;br /&gt;
* Big computation&lt;br /&gt;
* Storage&lt;br /&gt;
* Resource sharing&lt;br /&gt;
Essentially, being able to &amp;quot;have a library on a hard disk&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Today, those things/goals are being done, but we are mostly seeing communication-based things such as instant messaging and email.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main resources being shared were databases and CPU time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Storage is the main resource being shared today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What network architecture did they envision?  &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The network architecture would make use of packet-switching. There would be a checksum and an acknowledgement on each packet, and the IMPs served as both the network interfaces and the routers.&lt;br /&gt;
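As a sketch of per-packet integrity checking, here is a 16-bit ones-complement checksum in Python, in the style TCP/IP later standardized; this is an illustration, not the exact ARPANET scheme.&lt;br /&gt;

```python
def checksum16(data):
    """16-bit ones-complement checksum (illustrative, not ARPANET-exact).

    Sums the data as 16-bit words with end-around carry, then returns
    the complement; appending the checksum to the data and recomputing
    yields zero, which is how a receiver verifies a packet."""
    if len(data) % 2:
        data += b"\x00"                                 # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += data[i] * 256 + data[i + 1]            # next 16-bit word
        total = (total % 0x10000) + (total // 0x10000)  # end-around carry
    return 0xFFFF - total
```

A sender would append the two checksum bytes to each packet; the receiver recomputes over the whole packet and accepts it only if the result is zero.&lt;br /&gt;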
&lt;br /&gt;
&#039;&#039;&#039;Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Although packet-switching definitely won, we do not have quite the same architecture now: IP itself does not checksum or acknowledge packet data, while TCP provides an end-to-end checksum and acknowledgement. Kahn went on to learn from the errors of ARPANET in designing TCP/IP. Also, the jobs of the network interface and the router have now been decoupled.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Everything about this paper was surprising. How were they able to do all of this? A network interface card and router were the size of a fridge! Some general things of note: &lt;br /&gt;
* High-level languages&lt;br /&gt;
* Bootstrapping protocols, bootstrapping applications&lt;br /&gt;
* Primitive computers&lt;br /&gt;
* Desktop publishing&lt;br /&gt;
* The logistics of running a cable from one university to another&lt;br /&gt;
* How old the idea of distributed operating system is&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What was unclear?&#039;&#039;&#039;&lt;br /&gt;
Much of the more technical specification was unclear, but we mostly skipped over those parts.&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purpose of early networks was resource sharing. Abstractions were used for transmission, and message reliability was a by-product. The underlying idea is the same today.&lt;br /&gt;
&lt;br /&gt;
Specialized hardware/software and information sharing: a superset of resource sharing.&lt;br /&gt;
&lt;br /&gt;
The ad-hoc routing was essentially TCP without saying so, and it is largely unchanged today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?  What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What network architecture did they envision?  Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What was unclear?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Group 3==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The purposes envisioned for computer networks were:&lt;br /&gt;
* Improving reliability of services, due to redundant resource sets&lt;br /&gt;
* Resource sharing&lt;br /&gt;
* Usage modes:&lt;br /&gt;
** Users can use a remote terminal, from a remote office or home, to access those resources.&lt;br /&gt;
** Would allow centralization of resources, to improve ease of management and do away with inefficiencies&lt;br /&gt;
* Allow specialization of various sites, rather than each site trying to do it all&lt;br /&gt;
* Distributed simulations (notably air traffic control)&lt;br /&gt;
&lt;br /&gt;
Information-sharing is still relevant today, especially in research and large simulations. Remote access has mostly devolved into a specialized need.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?  What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main resources being shared were computing resources (especially expensive mainframes) and data sets.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What network architecture did they envision?  Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
They envisioned a primitive layered architecture with dedicated routing functions. Some of the various topologies were:&lt;br /&gt;
* star&lt;br /&gt;
* loop&lt;br /&gt;
* bus&lt;br /&gt;
&lt;br /&gt;
It was also primarily packet- (or message-) switched. Circuit-switching was too expensive and had large setup times, whereas this approach did not require committing resources. There was also primitive flow control and buffering, but no congestion control.&lt;br /&gt;
&lt;br /&gt;
This network architecture predated proper congestion control, such as Van Jacobson&#039;s slow start. The routing was either ad-hoc or based on something similar to RIP. They could anticipate elephant flows, but mice (short flows) would have latency issues. Unlike the modern internet, there was error control and retransmission at every hop.&lt;br /&gt;
&lt;br /&gt;
The architecture today is similar, but the link layer is very different, with the use of Ethernet and ATM. The modern internet is a collection of autonomous systems, not a single network. Routing propagation is now large-scale and semi-automated (e.g., BGP externally; IS-IS and OSPF internally).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What was unclear?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The weird packet format: page 1400 (4 of the PDF): &amp;quot;Node 6, discovering the message is for itself, replaces the destination address by the source address&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Group 4==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks? How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Networks were envisioned as providing remote access to other computers, because useful resources such as computing power, large databases, and non-portable software were local to a particular computer, not themselves shared over the network.&lt;br /&gt;
&lt;br /&gt;
Today, we use networks mostly for sharing data, although with services like Amazon AWS, we&#039;re starting to share computing resources again.  We&#039;re also moving to support collaboration (e.g. Google Docs, GitHub, etc.).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared? What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computing power was the key resource being shared; today, it&#039;s access to data.  (See above.)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What network architecture did they envision? Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Surprisingly, yes: modern networks have substantially similar architectures to the ones described in these papers.  &lt;br /&gt;
&lt;br /&gt;
Packet-switched networks are now ubiquitous.  We no longer bother with circuit-switching even for telephony, in contrast to the assumption that non-network data would continue to use the circuit-switched common-carrier network.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We were surprised by the accuracy of the predictions given how early the paper was written — even things like electronic banking.  Also surprising were technological advances since the paper was written, such as data transfer speeds (we have networks that are faster than the integrated bus in the Alto), and the predicted resolution requirements (which we are nowhere near meeting).  The amount of detail in the description of the &#039;mouse pointing device&#039; was interesting too.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What was unclear? &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Nothing significant; we&#039;re looking at these with the benefit of hindsight.&lt;br /&gt;
&lt;br /&gt;
==Summary of the discussion from lecture==&lt;br /&gt;
Anil&#039;s view is that even today we can think of computer networks as primarily a resource-sharing platform. For example, when we access the web or search Google, we are making use of the resource sharing facilitated by the Internet (a network of interconnected computer networks). It is not possible to put 20,000 computers in our basements; instead, the Internet gives us access to computing power and databases built out of hundreds of thousands of machines. In fact, Google and other popular search engines keep a local copy of the entire web in their data centers: a centralized copy of a large distributed system. That is a somewhat contradictory phenomenon if you think about it in terms of the design goals of distributed systems. &lt;br /&gt;
&lt;br /&gt;
Another important takeaway from the discussion was that the &amp;quot;early to market&amp;quot; first player with a new solution to a niche problem, offering a solution based on simple rather than complex mechanisms, gets adopted faster. The classic example is the Internet: ARPANET, an academic research project that was simple, open, and the first of its kind, was widely adopted and evolved into the Internet as we see it today. Note that this approach has its own drawbacks. For example, security was not factored into the design of ARPANET, since it was intended as a network between trusted parties, which was fine at the time; but when ARPANET evolved into the Internet, security became an area requiring major attention. In Silicon Valley the focus is on being the &amp;quot;first player&amp;quot; in a niche market, and to meet that objective simple frameworks and mechanisms are often used. In doing so there is a risk of leaving out components that turn out to be vital missing links; a recent example is the security flaw in Snapchat that led to user data being exposed.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=19040</id>
		<title>DistOS 2014W Lecture 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=19040"/>
		<updated>2014-04-21T10:04:03Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Class discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &#039;&#039;&#039;the point form notes for this lecture could be turned into full sentences/paragraphs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The Early Web (Jan. 23)==&lt;br /&gt;
&lt;br /&gt;
* [https://archive.org/details/02Kahle000673 Berners-Lee et al., &amp;quot;World-Wide Web: The Information Universe&amp;quot; (1992)], pp. 52-58&lt;br /&gt;
* [http://www.youtube.com/watch?v=72nfrhXroo8 Alex Wright, &amp;quot;The Web That Wasn&#039;t&amp;quot; (2007)], Google Tech Talk&lt;br /&gt;
&lt;br /&gt;
== Group Discussion on &amp;quot;The Early Web&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
Questions to discuss:&lt;br /&gt;
&lt;br /&gt;
# How do you think the web might have turned out, had it not developed into its present form? &lt;br /&gt;
# What kind of infrastructure changes would you like to make? &lt;br /&gt;
&lt;br /&gt;
=== Group 1 ===&lt;br /&gt;
: Relatively satisfied with the present structure of the web; the suggested changes fall into the areas below: &lt;br /&gt;
* Make use of the greater potential of Protocols &lt;br /&gt;
* More communication and interaction capabilities.&lt;br /&gt;
* Implementation changes to the present payment systems, e.g., &amp;quot;Micro-computation&amp;quot; (a discussion we will return to in future classes) and cryptographic currencies.&lt;br /&gt;
* Augmented reality.&lt;br /&gt;
* More towards individual privacy. &lt;br /&gt;
&lt;br /&gt;
=== Group 2 ===&lt;br /&gt;
==== Problem of unstructured information ====&lt;br /&gt;
A large portion of the web serves content that is overwhelmingly concerned with presentation rather than with structuring content. Tim Berners-Lee himself bemoaned the death of the semantic web. His original vision of it was as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Code from Wikipedia&#039;s article on the semantic web, except for the block quoting form, which this MediaWiki instance doesn&#039;t seem to support. --&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web&amp;amp;nbsp;– the content, links, and transactions between people and computers. A &amp;quot;Semantic Web&amp;quot;, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The &amp;quot;intelligent agents&amp;quot; people have touted for ages will finally materialize.&amp;lt;ref&amp;gt;{{cite book |last=Berners-Lee |first=Tim |authorlink=Tim Berners-Lee |coauthors=Fischetti, Mark |title=Weaving the Web |publisher=HarperSanFrancisco |year=1999 |pages=chapter 12 |isbn=978-0-06-251587-2 |nopp=true }}&amp;lt;/ref&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For this vision to be realized, information arguably needs to be structured, maybe even classified. The idea of a universal information classification system has been floated, but the modern web is mostly developed by software developers and the like, not by librarians and other classification professionals.&lt;br /&gt;
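To make the structured-information argument concrete, here is a toy sketch of the Semantic Web's underlying data model: facts stored as machine-queryable subject-predicate-object triples rather than presentation markup. (This is an illustrative in-memory stand-in, not a real RDF library; the helper name and facts are drawn from the Berners-Lee citation above.)

```python
# Toy triple store in the spirit of RDF: each fact is a
# (subject, predicate, object) tuple, queryable with None as wildcard.
triples = [
    ("WeavingTheWeb", "author", "Tim Berners-Lee"),
    ("WeavingTheWeb", "published", "1999"),
    ("Tim Berners-Lee", "invented", "World Wide Web"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A machine can answer "who invented the World Wide Web?" directly,
# with no text parsing or presentation scraping:
print(query(p="invented", o="World Wide Web"))
```

This is the "machines talking to machines" mechanism in miniature: agents operate on structured assertions, not on rendered pages.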
&lt;br /&gt;
&amp;lt;!-- TODO: Yahoo blurb. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also, how does one differentiate satire from fact?&lt;br /&gt;
&lt;br /&gt;
==== Valuation and deduplication of information ====&lt;br /&gt;
Another problem with the current web is the duplication of information. Redundancy that increases the availability of information is not in itself harmful, but what about ad-hoc duplication of the information itself?&lt;br /&gt;
&lt;br /&gt;
One then comes to the problem of assigning a value to the information found therein. How does one rate information, and according to what criteria? How does one authenticate the information? Often, popularity is used as an indicator of veracity, almost in a sophistic manner: see the excessive reliance on Google&#039;s page ranking for research, or on Reddit scores for news consumption.&lt;br /&gt;
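The "popularity as veracity" mechanism criticized above is, at its core, link analysis. A toy power-iteration sketch in the spirit of PageRank shows how rank flows toward heavily linked pages regardless of their truthfulness (the four-page graph is hypothetical, and production ranking layers many more signals on top):

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank by power iteration.
    links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)   # split rank among out-links
                for q in outs:
                    new[q] += damping * share
            else:                             # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical mini-web: everyone links to 'a'; 'a' links back to 'b'.
graph = {"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a"]}
ranks = pagerank(graph)
# 'a' accumulates the most rank purely from link popularity.
```

Nothing in the computation inspects content, which is exactly the sophistic quality the discussion points at: the score measures attention, not veracity.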
&lt;br /&gt;
=== On the current infrastructure ===&lt;br /&gt;
The current &amp;lt;em&amp;gt;internet&amp;lt;/em&amp;gt; infrastructure should remain as is, at least in countries with more than a modicum of freedom of access to information. Centralized control over access to information is a terrible power; see China and parts of the Middle East. On that note, what can be said of popular sites, such as Google or Wikipedia, that serve as the main entry point for many access patterns?&lt;br /&gt;
&lt;br /&gt;
The problem, if any, in the current web infrastructure is of the web itself, not the internet.&lt;br /&gt;
&lt;br /&gt;
=== Group 3 ===&lt;br /&gt;
* What we want to keep &lt;br /&gt;
** Linking mechanisms&lt;br /&gt;
** Minimum permissions to publish&lt;br /&gt;
* What we don&#039;t like&lt;br /&gt;
** Relying on one source for a document &lt;br /&gt;
** Privacy links for security&lt;br /&gt;
* Proposal &lt;br /&gt;
** Peer-to-peer, distributed mechanisms for document distribution&lt;br /&gt;
** Reverse links with caching (distributed cache) that doesn&#039;t compromise anything&lt;br /&gt;
** More availability for user - what happens when system fails? &lt;br /&gt;
** Key management to be considered - Is it good to have centralized or distributed mechanism?&lt;br /&gt;
** Make information centric as opposed to host centric&lt;br /&gt;
&lt;br /&gt;
=== Group 4 ===&lt;br /&gt;
* An idea of web searching for us &lt;br /&gt;
* A suggestion of a different web if it would have been implemented by &amp;quot;AI&amp;quot; people&lt;br /&gt;
** AI programs searching for data - A notion already being implemented by Google slowly.&lt;br /&gt;
* Generate report forums&lt;br /&gt;
* HTML equivalent is inspired by the AI communication&lt;br /&gt;
* Higher semantics apart from just indexing the data&lt;br /&gt;
** Problem : &amp;quot;How to bridge the semantic gap?&amp;quot;&lt;br /&gt;
** Search for more data patterns&lt;br /&gt;
&lt;br /&gt;
== Group design exercise — The web that could be ==&lt;br /&gt;
&lt;br /&gt;
* “The web that wasn&#039;t” mentioned the moans of librarians.&lt;br /&gt;
* A universal classification system is needed.&lt;br /&gt;
* The training overhead of classifiers (e.g., librarians) is high. See the master&#039;s that a librarian would need.&lt;br /&gt;
* More structured content, both classification, and organization&lt;br /&gt;
* Current indexing by crude brute-force searching for words, etc., rather than searching metadata&lt;br /&gt;
* Information doesn&#039;t have the same persistence, see bitrot and Vint Cerf&#039;s talk.&lt;br /&gt;
* Too concerned with presentation now.&lt;br /&gt;
* Tim Berner-Lees bemoaning the death of the semantic web.&lt;br /&gt;
* The problem of information duplication when information gets redistributed across the web. However, we do want redundancy.&lt;br /&gt;
* Too much developed by software developers&lt;br /&gt;
* Too reliant on Google for web structure&lt;br /&gt;
** See search-engine optimization&lt;br /&gt;
* Problem of authentication (of the information, not the presenter)&lt;br /&gt;
** Too dependent at times on the popularity of a site, almost in a sophistic manner.&lt;br /&gt;
** See Reddit&lt;br /&gt;
* How do you programmatically distinguish satire from fact&lt;br /&gt;
* The web&#039;s structure is also shaped by inbound links, but it would be nice to have a bit more of that&lt;br /&gt;
* Infrastructure doesn&#039;t need to change per se.&lt;br /&gt;
** The distributed architecture should still stay. Centralization of control of allowed information and access is terrible power. See China and the Middle-East.&lt;br /&gt;
** Information, for the most part, in itself, exists centrally (as per-page), though communities (to use a generic term) are distributed.&lt;br /&gt;
* Need more sophisticated natural language processing.&lt;br /&gt;
&lt;br /&gt;
== Class discussion ==&lt;br /&gt;
&lt;br /&gt;
Focusing on vision, not the mechanism.&lt;br /&gt;
&lt;br /&gt;
* Reverse linking&lt;br /&gt;
* Distributed content distribution (glorified cache)&lt;br /&gt;
** Both for privacy and redundancy reasons&lt;br /&gt;
** Centralized content certification was suggested, but it doesn&#039;t address the problems of root of trust and distributed consistency checking.&lt;br /&gt;
*** Distributed key management is a holy grail&lt;br /&gt;
*** What about detecting large-scale subversion attempts, like in China&lt;br /&gt;
* What is the new revenue model?&lt;br /&gt;
** What was TBL&#039;s revenue model (tongue-in-cheek, none)?&lt;br /&gt;
** Organisations like Google monetized the internet, and this mechanism could destroy their ability to do so.&lt;br /&gt;
* Search work is semi-distributed. Suggested letting the web do the work for you.&lt;br /&gt;
* Trying to structure content in a manner simultaneously palatable to both humans and machines.&lt;br /&gt;
* Using spare CPU time on servers for natural language processing (or other AI) of cached or locally available resources.&lt;br /&gt;
* Imagine a smushed Wolfram Alpha, Google, Wikipedia, and Watson, and then distributed over the net.&lt;br /&gt;
* The document was TBL&#039;s idea of the atom of content, whereas nowadays we really need something more granular.&lt;br /&gt;
* We want to extract higher-level semantics.&lt;br /&gt;
* Google may not be pure keyword search anymore. It essentially now uses AI to determine relevancy, but we still struggle with expressing what we want to Google.&lt;br /&gt;
* What about the adversarial aspect of content hosters, vying for attention?&lt;br /&gt;
* People do actively try to fool you.&lt;br /&gt;
* Compare to Google News, though that is very specific to that domain. Their vision is a semantic web, but they are incrementally building it.&lt;br /&gt;
* In a scary fashion, Google is one of the central points of failure of the web. Even scarier is less technically competent people who depend on Facebook for that.&lt;br /&gt;
* There is a semantic gap between how we express and query information, and how AI understands it.&lt;br /&gt;
* Can think of Facebook as a distributed human search infrastructure.&lt;br /&gt;
* The core service/function of an operating system is to locate information. &#039;&#039;&#039;Search is infrastructure.&#039;&#039;&#039;&lt;br /&gt;
* The problem is not purely technical. There are political and social aspects.&lt;br /&gt;
** Searching for a file on a local filesystem should have an unambiguous answer.&lt;br /&gt;
** Asking the web is a different thing. “What is the best chocolate bar?”&lt;br /&gt;
* Is the web a network database, as understood in COMP 3005, which we consider harmful?&lt;br /&gt;
* For two-way links, there is the problem of restructuring data and all the dependencies.&lt;br /&gt;
* Privacy issues when tracing paths across the web.&lt;br /&gt;
* What about the problem of information revocation?&lt;br /&gt;
* Need more augmented reality and distributed and micro payment systems.&lt;br /&gt;
* We need distributed, mutually untrusting social networks.&lt;br /&gt;
** Now we have the problem of storage and computation, but we also take away some of the monetizable aspects.&lt;br /&gt;
* Distribution is not free. It is very expensive in very funny ways.&lt;br /&gt;
* The dream of harvesting all the computational power of the internet is not new.&lt;br /&gt;
** Startups have come and gone many times over that problem.&lt;br /&gt;
* Google&#039;s indexers understand many documents on the web quite well. However, Google only &#039;&#039;&#039;presents&#039;&#039;&#039; a primitive keyword-like interface. It doesn&#039;t expose the ontology.&lt;br /&gt;
* Organising information does not necessarily mean applying an ontology to it.&lt;br /&gt;
* The organisational methods we now use don&#039;t use ontologies, but rather are supplemented by them.&lt;br /&gt;
&lt;br /&gt;
Adding a couple of related points Anil mentioned during the discussion:&lt;br /&gt;
Distributed key management is a holy grail that no one has ever managed to get working. Nowadays, databases have become important building blocks of the distributed operating system; Anil stressed that databases can in fact be considered an OS service these days. The question “How do you navigate the complex information space?” has remained a prominent one that the Web has always faced.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=19039</id>
		<title>DistOS 2014W Lecture 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=19039"/>
		<updated>2014-04-21T10:02:44Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Group 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &#039;&#039;&#039;the point form notes for this lecture could be turned into full sentences/paragraphs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The Early Web (Jan. 23)==&lt;br /&gt;
&lt;br /&gt;
* [https://archive.org/details/02Kahle000673 Berners-Lee et al., &amp;quot;World-Wide Web: The Information Universe&amp;quot; (1992)], pp. 52-58&lt;br /&gt;
* [http://www.youtube.com/watch?v=72nfrhXroo8 Alex Wright, &amp;quot;The Web That Wasn&#039;t&amp;quot; (2007)], Google Tech Talk&lt;br /&gt;
&lt;br /&gt;
== Group Discussion on &amp;quot;The Early Web&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
Questions to discuss:&lt;br /&gt;
&lt;br /&gt;
# How do you think the web might have turned out, had it not developed into its present form? &lt;br /&gt;
# What kind of infrastructure changes would you like to make? &lt;br /&gt;
&lt;br /&gt;
=== Group 1 ===&lt;br /&gt;
: Relatively satisfied with the present structure of the web; the suggested changes fall into the areas below: &lt;br /&gt;
* Make use of the greater potential of Protocols &lt;br /&gt;
* More communication and interaction capabilities.&lt;br /&gt;
* Implementation changes to the present payment systems, e.g., &amp;quot;Micro-computation&amp;quot; (a discussion we will return to in future classes) and cryptographic currencies.&lt;br /&gt;
* Augmented reality.&lt;br /&gt;
* More towards individual privacy. &lt;br /&gt;
&lt;br /&gt;
=== Group 2 ===&lt;br /&gt;
==== Problem of unstructured information ====&lt;br /&gt;
A large portion of the web serves content that is overwhelmingly concerned with presentation rather than with structuring content. Tim Berners-Lee himself bemoaned the death of the semantic web. His original vision of it was as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Code from Wikipedia&#039;s article on the semantic web, except for the block quoting form, which this MediaWiki instance doesn&#039;t seem to support. --&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web&amp;amp;nbsp;– the content, links, and transactions between people and computers. A &amp;quot;Semantic Web&amp;quot;, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The &amp;quot;intelligent agents&amp;quot; people have touted for ages will finally materialize.&amp;lt;ref&amp;gt;{{cite book |last=Berners-Lee |first=Tim |authorlink=Tim Berners-Lee |coauthors=Fischetti, Mark |title=Weaving the Web |publisher=HarperSanFrancisco |year=1999 |pages=chapter 12 |isbn=978-0-06-251587-2 |nopp=true }}&amp;lt;/ref&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For this vision to be realized, information arguably needs to be structured, maybe even classified. The idea of a universal information classification system has been floated, but the modern web is mostly developed by software developers and the like, not by librarians and other classification professionals.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- TODO: Yahoo blurb. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also, how does one differentiate satire from fact?&lt;br /&gt;
&lt;br /&gt;
==== Valuation and deduplication of information ====&lt;br /&gt;
Another problem with the current web is the duplication of information. Redundancy that increases the availability of information is not in itself harmful, but what about ad-hoc duplication of the information itself?&lt;br /&gt;
&lt;br /&gt;
One then comes to the problem of assigning a value to the information found therein. How does one rate information, and according to what criteria? How does one authenticate the information? Often, popularity is used as an indicator of veracity, almost in a sophistic manner: see the excessive reliance on Google&#039;s page ranking for research, or on Reddit scores for news consumption.&lt;br /&gt;
&lt;br /&gt;
=== On the current infrastructure ===&lt;br /&gt;
The current &amp;lt;em&amp;gt;internet&amp;lt;/em&amp;gt; infrastructure should remain as is, at least in countries with more than a modicum of freedom of access to information. Centralized control over access to information is a terrible power; see China and parts of the Middle East. On that note, what can be said of popular sites, such as Google or Wikipedia, that serve as the main entry point for many access patterns?&lt;br /&gt;
&lt;br /&gt;
The problem, if any, in the current web infrastructure is of the web itself, not the internet.&lt;br /&gt;
&lt;br /&gt;
=== Group 3 ===&lt;br /&gt;
* What we want to keep &lt;br /&gt;
** Linking mechanisms&lt;br /&gt;
** Minimum permissions to publish&lt;br /&gt;
* What we don&#039;t like&lt;br /&gt;
** Relying on one source for a document &lt;br /&gt;
** Privacy links for security&lt;br /&gt;
* Proposal &lt;br /&gt;
** Peer-to-peer, distributed mechanisms for document distribution&lt;br /&gt;
** Reverse links with caching (distributed cache) that doesn&#039;t compromise anything&lt;br /&gt;
** More availability for user - what happens when system fails? &lt;br /&gt;
** Key management to be considered - Is it good to have centralized or distributed mechanism?&lt;br /&gt;
** Make information centric as opposed to host centric&lt;br /&gt;
&lt;br /&gt;
=== Group 4 ===&lt;br /&gt;
* An idea of web searching for us &lt;br /&gt;
* A suggestion of a different web if it would have been implemented by &amp;quot;AI&amp;quot; people&lt;br /&gt;
** AI programs searching for data - A notion already being implemented by Google slowly.&lt;br /&gt;
* Generate report forums&lt;br /&gt;
* HTML equivalent is inspired by the AI communication&lt;br /&gt;
* Higher semantics apart from just indexing the data&lt;br /&gt;
** Problem : &amp;quot;How to bridge the semantic gap?&amp;quot;&lt;br /&gt;
** Search for more data patterns&lt;br /&gt;
&lt;br /&gt;
== Group design exercise — The web that could be ==&lt;br /&gt;
&lt;br /&gt;
* “The web that wasn&#039;t” mentioned the moans of librarians.&lt;br /&gt;
* A universal classification system is needed.&lt;br /&gt;
* The training overhead of classifiers (e.g., librarians) is high. See the master&#039;s that a librarian would need.&lt;br /&gt;
* More structured content, both classification, and organization&lt;br /&gt;
* Current indexing by crude brute-force searching for words, etc., rather than searching metadata&lt;br /&gt;
* Information doesn&#039;t have the same persistence, see bitrot and Vint Cerf&#039;s talk.&lt;br /&gt;
* Too concerned with presentation now.&lt;br /&gt;
* Tim Berner-Lees bemoaning the death of the semantic web.&lt;br /&gt;
* The problem of information duplication when information gets redistributed across the web. However, we do want redundancy.&lt;br /&gt;
* Too much developed by software developers&lt;br /&gt;
* Too reliant on Google for web structure&lt;br /&gt;
** See search-engine optimization&lt;br /&gt;
* Problem of authentication (of the information, not the presenter)&lt;br /&gt;
** Too dependent at times on the popularity of a site, almost in a sophistic manner.&lt;br /&gt;
** See Reddit&lt;br /&gt;
* How do you programmatically distinguish satire from fact&lt;br /&gt;
* The web&#039;s structure is also shaped by inbound links, but it would be nice to have a bit more of that&lt;br /&gt;
* Infrastructure doesn&#039;t need to change per se.&lt;br /&gt;
** The distributed architecture should still stay. Centralization of control of allowed information and access is terrible power. See China and the Middle-East.&lt;br /&gt;
** Information, for the most part, in itself, exists centrally (as per-page), though communities (to use a generic term) are distributed.&lt;br /&gt;
* Need more sophisticated natural language processing.&lt;br /&gt;
&lt;br /&gt;
== Class discussion ==&lt;br /&gt;
&lt;br /&gt;
Focusing on vision, not the mechanism.&lt;br /&gt;
&lt;br /&gt;
* Reverse linking&lt;br /&gt;
* Distributed content distribution (glorified cache)&lt;br /&gt;
** Both for privacy and redundancy reasons&lt;br /&gt;
** Centralized content certification was suggested, but it doesn&#039;t address the problems of root of trust and distributed consistency checking.&lt;br /&gt;
*** Distributed key management is a holy grail&lt;br /&gt;
*** What about detecting large-scale subversion attempts, like in China&lt;br /&gt;
* What is the new revenue model?&lt;br /&gt;
** What was TBL&#039;s revenue model (tongue-in-cheek, none)?&lt;br /&gt;
** Organisations like Google monetized the internet, and this mechanism could destroy their ability to do so.&lt;br /&gt;
* Search work is semi-distributed. Suggested letting the web do the work for you.&lt;br /&gt;
* Trying to structure content in a manner simultaneously palatable to both humans and machines.&lt;br /&gt;
* Using spare CPU time on servers for natural language processing (or other AI) of cached or locally available resources.&lt;br /&gt;
* Imagine a smushed Wolfram Alpha, Google, Wikipedia, and Watson, and then distributed over the net.&lt;br /&gt;
* The document was TBL&#039;s idea of the atom of content, whereas nowadays we really need something more granular.&lt;br /&gt;
* We want to extract higher-level semantics.&lt;br /&gt;
* Google may not be pure keyword search anymore. It essentially now uses AI to determine relevancy, but we still struggle with expressing what we want to Google.&lt;br /&gt;
* What about the adversarial aspect of content hosters, vying for attention?&lt;br /&gt;
* People do actively try to fool you.&lt;br /&gt;
* Compare to Google News, though that is very specific to that domain. Their vision is a semantic web, but they are incrementally building it.&lt;br /&gt;
* In a scary fashion, Google is one of the central points of failure of the web. Even scarier is less technically competent people who depend on Facebook for that.&lt;br /&gt;
* There is a semantic gap between how we express and query information, and how AI understands it.&lt;br /&gt;
* Can think of Facebook as a distributed human search infrastructure.&lt;br /&gt;
* A core service of an operating system is locating information. &#039;&#039;&#039;Search is infrastructure.&#039;&#039;&#039;&lt;br /&gt;
* The problem is not purely technical. There are political and social aspects.&lt;br /&gt;
** Searching for a file on a local filesystem should have an unambiguous answer.&lt;br /&gt;
** Asking the web is a different thing. “What is the best chocolate bar?”&lt;br /&gt;
* Is the web a network database, as understood in COMP 3005, which we consider harmful?&lt;br /&gt;
* For two-way links, there is the problem of restructuring data and all the dependencies.&lt;br /&gt;
* Privacy issues when tracing paths across the web.&lt;br /&gt;
* What about the problem of information revocation?&lt;br /&gt;
* Need more augmented reality and distributed and micro payment systems.&lt;br /&gt;
* We need distributed, mutually untrusting social networks.&lt;br /&gt;
** Now we have the problem of storage and computation, but we also take away some of the monetizable aspects.&lt;br /&gt;
* Distribution is not free. It is very expensive in very funny ways.&lt;br /&gt;
* The dream of harvesting all the computational power of the internet is not new.&lt;br /&gt;
** Startups have come and gone many times over that problem.&lt;br /&gt;
* Google&#039;s indexers understand many documents on the web quite well. However, Google only &#039;&#039;&#039;presents&#039;&#039;&#039; a primitive keyword-like interface. It doesn&#039;t expose the ontology.&lt;br /&gt;
* Organising information does not necessarily mean applying an ontology to it.&lt;br /&gt;
* The organisational methods we now use don&#039;t use ontologies, but rather are supplemented by them.&lt;br /&gt;
&lt;br /&gt;
Adding a couple of related points Anil mentioned during the discussion:&lt;br /&gt;
Distributed key management is a holy grail that no one has ever managed to get working. Nowadays, databases have become important building blocks of the distributed operating system; Anil stressed that databases can in fact be considered an OS service these days. The question “How do you navigate the complex information space?” has remained a prominent one that the Web has always faced.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=19038</id>
		<title>DistOS 2014W Lecture 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=19038"/>
		<updated>2014-04-21T10:02:32Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Group 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &#039;&#039;&#039;the point form notes for this lecture could be turned into full sentences/paragraphs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The Early Web (Jan. 23)==&lt;br /&gt;
&lt;br /&gt;
* [https://archive.org/details/02Kahle000673 Berners-Lee et al., &amp;quot;World-Wide Web: The Information Universe&amp;quot; (1992)], pp. 52-58&lt;br /&gt;
* [http://www.youtube.com/watch?v=72nfrhXroo8 Alex Wright, &amp;quot;The Web That Wasn&#039;t&amp;quot; (2007)], Google Tech Talk&lt;br /&gt;
&lt;br /&gt;
== Group Discussion on &amp;quot;The Early Web&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
Questions to discuss:&lt;br /&gt;
&lt;br /&gt;
# How do you think the web might have turned out, had it not developed into its present form? &lt;br /&gt;
# What kind of infrastructure changes would you like to make? &lt;br /&gt;
&lt;br /&gt;
=== Group 1 ===&lt;br /&gt;
: Relatively satisfied with the present structure of the web; the suggested changes fall into the areas below: &lt;br /&gt;
* Make use of the greater potential of Protocols &lt;br /&gt;
* More communication and interaction capabilities.&lt;br /&gt;
* Implementation changes to the present payment systems, e.g., &amp;quot;Micro-computation&amp;quot; (a discussion we will return to in future classes) and cryptographic currencies.&lt;br /&gt;
* Augmented reality.&lt;br /&gt;
* More towards individual privacy. &lt;br /&gt;
&lt;br /&gt;
=== Group 2 ===&lt;br /&gt;
==== Problem of unstructured information ====&lt;br /&gt;
A large portion of the web serves content that is overwhelmingly concerned with presentation rather than with structuring content. Tim Berners-Lee himself bemoaned the death of the semantic web. His original vision of it was as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Code from Wikipedia&#039;s article on the semantic web, except for the block quoting form, which this MediaWiki instance doesn&#039;t seem to support. --&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web&amp;amp;nbsp;– the content, links, and transactions between people and computers. A &amp;quot;Semantic Web&amp;quot;, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The &amp;quot;intelligent agents&amp;quot; people have touted for ages will finally materialize.&amp;lt;ref&amp;gt;{{cite book |last=Berners-Lee |first=Tim |authorlink=Tim Berners-Lee |coauthors=Fischetti, Mark |title=Weaving the Web |publisher=HarperSanFrancisco |year=1999 |pages=chapter 12 |isbn=978-0-06-251587-2 |nopp=true }}&amp;lt;/ref&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For this vision to be realized, information arguably needs to be structured, maybe even classified. The idea of a universal information classification system has been floated, but the modern web is mostly developed by software developers and the like, not by librarians and other classification professionals.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- TODO: Yahoo blurb. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also, how does one differentiate satire from fact?&lt;br /&gt;
&lt;br /&gt;
==== Valuation and deduplication of information ====&lt;br /&gt;
Another problem common with the current web is the duplication of information. Redundancy that increases the availability of information is not in itself harmful, but what of the ad-hoc duplication of the information itself?&lt;br /&gt;
&lt;br /&gt;
One then comes to the problem of assigning a value to the information found therein. How does one rate information, and according to what criteria? How does one authenticate the information? Often, popularity is used as an indicator of veracity, almost in a sophistic manner. See the excessive reliance on Google&#039;s page ranking for research, or on Reddit scores for news consumption.&lt;br /&gt;
&lt;br /&gt;
=== On the current infrastructure ===&lt;br /&gt;
The current &amp;lt;em&amp;gt;internet&amp;lt;/em&amp;gt; infrastructure should remain as is, at least in countries with more than a modicum of freedom of access to information. Centralization of control over access to information is a terrible power; see China and parts of the Middle East. On that note, what can be said of popular sites, such as Google or Wikipedia, that serve as the main entry point for many access patterns?&lt;br /&gt;
&lt;br /&gt;
The problem, if any, in the current web infrastructure is of the web itself, not the internet.&lt;br /&gt;
&lt;br /&gt;
=== Group 3 ===&lt;br /&gt;
* What we want to keep &lt;br /&gt;
** Linking mechanisms&lt;br /&gt;
** Minimum permissions to publish&lt;br /&gt;
* What we don&#039;t like&lt;br /&gt;
** Relying on one source for document &lt;br /&gt;
** Privacy links for security&lt;br /&gt;
* Proposal &lt;br /&gt;
** Peer-to-peer, distributed mechanisms for documents&lt;br /&gt;
** Reverse links with caching (distributed cache) that doesn&#039;t compromise anything&lt;br /&gt;
** More availability for user - what happens when system fails? &lt;br /&gt;
** Key management to be considered - Is it good to have centralized or distributed mechanism?&lt;br /&gt;
** Make information centric as opposed to host centric.&lt;br /&gt;
&lt;br /&gt;
=== Group 4 ===&lt;br /&gt;
* An idea of web searching for us &lt;br /&gt;
* A suggestion of a different web if it would have been implemented by &amp;quot;AI&amp;quot; people&lt;br /&gt;
** AI programs searching for data - A notion already being implemented by Google slowly.&lt;br /&gt;
* Generate report forums&lt;br /&gt;
* HTML equivalent is inspired by the AI communication&lt;br /&gt;
* Higher semantics apart from just indexing the data&lt;br /&gt;
** Problem: &amp;quot;How to bridge the semantic gap?&amp;quot;&lt;br /&gt;
** Search for more data patterns&lt;br /&gt;
&lt;br /&gt;
== Group design exercise — The web that could be ==&lt;br /&gt;
&lt;br /&gt;
* “The web that wasn&#039;t” mentioned the moans of librarians.&lt;br /&gt;
* A universal classification system is needed.&lt;br /&gt;
* The training overhead of classifiers (e.g., librarians) is high. See the master&#039;s degree that a librarian would need.&lt;br /&gt;
* More structured content, in both classification and organization&lt;br /&gt;
* Current indexing by crude brute-force searching for words, etc., rather than searching metadata&lt;br /&gt;
* Information doesn&#039;t have the same persistence, see bitrot and Vint Cerf&#039;s talk.&lt;br /&gt;
* Too concerned with presentation now.&lt;br /&gt;
* Tim Berners-Lee bemoaning the death of the semantic web.&lt;br /&gt;
* The problem of information duplication when information gets redistributed across the web. However, we do want redundancy.&lt;br /&gt;
* Too much developed by software developers&lt;br /&gt;
* Too reliant on Google for web structure&lt;br /&gt;
** See search-engine optimization&lt;br /&gt;
* Problem of authentication (of the information, not the presenter)&lt;br /&gt;
** Too dependent at times on the popularity of a site, almost in a sophistic manner.&lt;br /&gt;
** See Reddit&lt;br /&gt;
* How do you programmatically distinguish satire from fact&lt;br /&gt;
* The web&#039;s structure is also “shaped by inbound links but would be nice a bit more”&lt;br /&gt;
* Infrastructure doesn&#039;t need to change per se.&lt;br /&gt;
** The distributed architecture should still stay. Centralization of control of allowed information and access is terrible power. See China and the Middle-East.&lt;br /&gt;
** Information, for the most part, in itself, exists centrally (as per-page), though communities (to use a generic term) are distributed.&lt;br /&gt;
* Need more sophisticated natural language processing.&lt;br /&gt;
&lt;br /&gt;
== Class discussion ==&lt;br /&gt;
&lt;br /&gt;
Focusing on vision, not the mechanism.&lt;br /&gt;
&lt;br /&gt;
* Reverse linking&lt;br /&gt;
* Distributed content distribution (glorified cache)&lt;br /&gt;
** Both for privacy and redundancy reasons&lt;br /&gt;
** Suggested centralized content certification, but doesn&#039;t address the problem of root of trust and distributed consistency checking.&lt;br /&gt;
*** Distributed key management is a holy grail&lt;br /&gt;
*** What about detecting large-scale subversion attempts, like in China&lt;br /&gt;
* What is the new revenue model?&lt;br /&gt;
** What was TBL&#039;s revenue model (tongue-in-cheek, none)?&lt;br /&gt;
** Organisations like Google monetized the internet, and this mechanism could destroy their ability to do so.&lt;br /&gt;
* Search work is semi-distributed. Suggested letting the web do the work for you.&lt;br /&gt;
* Trying to structure content in a manner simultaneously palatable to both humans and machines.&lt;br /&gt;
* Using spare CPU time on servers for natural language processing (or other AI) of cached or locally available resources.&lt;br /&gt;
* Imagine a smushed Wolfram Alpha, Google, Wikipedia, and Watson, and then distributed over the net.&lt;br /&gt;
* The document was TBL&#039;s idea of the atom of content, whereas nowadays we really need something more granular.&lt;br /&gt;
* We want to extract higher-level semantics.&lt;br /&gt;
* Google may not be pure keyword search anymore. It essentially now uses AI to determine relevancy, but we still struggle with expressing what we want to Google.&lt;br /&gt;
* What about the adversarial aspect of content hosters, vying for attention?&lt;br /&gt;
* People do actively try to fool you.&lt;br /&gt;
* Compare to Google News, though that is very specific to that domain. Their vision is a semantic web, but they are incrementally building it.&lt;br /&gt;
* In a scary fashion, Google is one of the central points of failure of the web. Even scarier are the less technically competent people who depend on Facebook for that.&lt;br /&gt;
* There is a semantic gap between how we express and query information, and how AI understands it.&lt;br /&gt;
* Can think of Facebook as a distributed human search infrastructure.&lt;br /&gt;
* A core service of an operating system is locating information. &#039;&#039;&#039;Search is infrastructure.&#039;&#039;&#039;&lt;br /&gt;
* The problem is not purely technical. There are political and social aspects.&lt;br /&gt;
** Searching for a file on a local filesystem should have an unambiguous answer.&lt;br /&gt;
** Asking the web is a different thing. “What is the best chocolate bar?”&lt;br /&gt;
* Is the web a network database, as understood in COMP 3005, which we consider harmful?&lt;br /&gt;
* For two-way links, there is the problem of restructuring data and all the dependencies.&lt;br /&gt;
* Privacy issues when tracing paths across the web.&lt;br /&gt;
* What about the problem of information revocation?&lt;br /&gt;
* Need more augmented reality and distributed and micro payment systems.&lt;br /&gt;
* We need distributed, mutually untrusting social networks.&lt;br /&gt;
** Now we have the problem of storage and computation, but it also takes away some of the monetizable aspect.&lt;br /&gt;
* Distribution is not free. It is very expensive in very funny ways.&lt;br /&gt;
* The dream of harvesting all the computational power of the internet is not new.&lt;br /&gt;
** Startups have come and gone many times over that problem.&lt;br /&gt;
* Google&#039;s indexers understand quite well many documents on the web. However, Google only &#039;&#039;&#039;presents&#039;&#039;&#039; a primitive keyword-like interface. It doesn&#039;t expose the ontology.&lt;br /&gt;
* Organising information does not necessarily mean applying an ontology to it.&lt;br /&gt;
* The organisational methods we now use don&#039;t use ontologies, but rather are supplemented by them.&lt;br /&gt;
&lt;br /&gt;
A couple of related points Anil mentioned during the discussion:&lt;br /&gt;
Distributed key management is a holy grail that no one has ever managed to get working. Nowadays, databases have become important building blocks of the distributed operating system; Anil stressed that databases can in fact be considered an OS service these days. The question “How do you navigate the complex information space?” has remained a prominent question that the Web has always faced.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=19037</id>
		<title>DistOS 2014W Lecture 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=19037"/>
		<updated>2014-04-21T09:58:31Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Class discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &#039;&#039;&#039;the point form notes for this lecture could be turned into full sentences/paragraphs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The Early Web (Jan. 23)==&lt;br /&gt;
&lt;br /&gt;
* [https://archive.org/details/02Kahle000673 Berners-Lee et al., &amp;quot;World-Wide Web: The Information Universe&amp;quot; (1992)], pp. 52-58&lt;br /&gt;
* [http://www.youtube.com/watch?v=72nfrhXroo8 Alex Wright, &amp;quot;The Web That Wasn&#039;t&amp;quot; (2007)], Google Tech Talk&lt;br /&gt;
&lt;br /&gt;
== Group Discussion on &amp;quot;The Early Web&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
Questions to discuss:&lt;br /&gt;
&lt;br /&gt;
# How do you think the web would have been if not like the present way? &lt;br /&gt;
# What kind of infrastructure changes would you like to make? &lt;br /&gt;
&lt;br /&gt;
=== Group 1 ===&lt;br /&gt;
: Relatively satisfied with the present structure of the web; the changes suggested are in the areas below: &lt;br /&gt;
* Make use of the greater potential of protocols &lt;br /&gt;
* More communication and interaction capabilities.&lt;br /&gt;
* Implementation changes in the present payment systems, for example the usage of &amp;quot;micro-computation&amp;quot;, a discussion we will get back to in future classes. Also, cryptographic currencies.&lt;br /&gt;
* Augmented reality.&lt;br /&gt;
* More towards individual privacy. &lt;br /&gt;
&lt;br /&gt;
=== Group 2 ===&lt;br /&gt;
==== Problem of unstructured information ====&lt;br /&gt;
A large portion of the web serves content that is overwhelmingly concerned with presentation rather than with structuring content. Tim Berners-Lee himself bemoaned the death of the semantic web. His original vision of it was as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Code from Wikipedia&#039;s article on the semantic web, except for the block quoting form, which this MediaWiki instance doesn&#039;t seem to support. --&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web&amp;amp;nbsp;– the content, links, and transactions between people and computers. A &amp;quot;Semantic Web&amp;quot;, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The &amp;quot;intelligent agents&amp;quot; people have touted for ages will finally materialize.&amp;lt;ref&amp;gt;{{cite book |last=Berners-Lee |first=Tim |authorlink=Tim Berners-Lee |coauthors=Fischetti, Mark |title=Weaving the Web |publisher=HarperSanFrancisco |year=1999 |pages=chapter 12 |isbn=978-0-06-251587-2 |nopp=true }}&amp;lt;/ref&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For this vision to come true, information arguably needs to be structured, maybe even classified. The idea of a universal information classification system has been floated. The modern web, however, is mostly developed by software developers and the like, not librarians.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- TODO: Yahoo blurb. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also, how does one differentiate satire from fact?&lt;br /&gt;
&lt;br /&gt;
==== Valuation and deduplication of information ====&lt;br /&gt;
Another problem common with the current web is the duplication of information. Redundancy that increases the availability of information is not in itself harmful, but what of the ad-hoc duplication of the information itself?&lt;br /&gt;
&lt;br /&gt;
One then comes to the problem of assigning a value to the information found therein. How does one rate information, and according to what criteria? How does one authenticate the information? Often, popularity is used as an indicator of veracity, almost in a sophistic manner. See the excessive reliance on Google&#039;s page ranking for research, or on Reddit scores for news consumption.&lt;br /&gt;
&lt;br /&gt;
=== On the current infrastructure ===&lt;br /&gt;
The current &amp;lt;em&amp;gt;internet&amp;lt;/em&amp;gt; infrastructure should remain as is, at least in countries with more than a modicum of freedom of access to information. Centralization of control over access to information is a terrible power; see China and parts of the Middle East. On that note, what can be said of popular sites, such as Google or Wikipedia, that serve as the main entry point for many access patterns?&lt;br /&gt;
&lt;br /&gt;
The problem, if any, in the current web infrastructure is of the web itself, not the internet.&lt;br /&gt;
&lt;br /&gt;
=== Group 3 ===&lt;br /&gt;
* What we want to keep &lt;br /&gt;
** Linking mechanisms&lt;br /&gt;
** Minimum permissions to publish&lt;br /&gt;
* What we don&#039;t like&lt;br /&gt;
** Relying on one source for document &lt;br /&gt;
** Privacy links for security&lt;br /&gt;
* Proposal &lt;br /&gt;
** Peer-to-peer, distributed mechanisms for documents&lt;br /&gt;
** Reverse links with caching - distributed cache&lt;br /&gt;
** More availability for user - what happens when system fails? &lt;br /&gt;
** Key management to be considered - Is it good to have centralized or distributed mechanism? &lt;br /&gt;
&lt;br /&gt;
=== Group 4 ===&lt;br /&gt;
* An idea of web searching for us &lt;br /&gt;
* A suggestion of a different web if it would have been implemented by &amp;quot;AI&amp;quot; people&lt;br /&gt;
** AI programs searching for data - A notion already being implemented by Google slowly.&lt;br /&gt;
* Generate report forums&lt;br /&gt;
* HTML equivalent is inspired by the AI communication&lt;br /&gt;
* Higher semantics apart from just indexing the data&lt;br /&gt;
** Problem: &amp;quot;How to bridge the semantic gap?&amp;quot;&lt;br /&gt;
** Search for more data patterns&lt;br /&gt;
&lt;br /&gt;
== Group design exercise — The web that could be ==&lt;br /&gt;
&lt;br /&gt;
* “The web that wasn&#039;t” mentioned the moans of librarians.&lt;br /&gt;
* A universal classification system is needed.&lt;br /&gt;
* The training overhead of classifiers (e.g., librarians) is high. See the master&#039;s degree that a librarian would need.&lt;br /&gt;
* More structured content, in both classification and organization&lt;br /&gt;
* Current indexing by crude brute-force searching for words, etc., rather than searching metadata&lt;br /&gt;
* Information doesn&#039;t have the same persistence, see bitrot and Vint Cerf&#039;s talk.&lt;br /&gt;
* Too concerned with presentation now.&lt;br /&gt;
* Tim Berners-Lee bemoaning the death of the semantic web.&lt;br /&gt;
* The problem of information duplication when information gets redistributed across the web. However, we do want redundancy.&lt;br /&gt;
* Too much developed by software developers&lt;br /&gt;
* Too reliant on Google for web structure&lt;br /&gt;
** See search-engine optimization&lt;br /&gt;
* Problem of authentication (of the information, not the presenter)&lt;br /&gt;
** Too dependent at times on the popularity of a site, almost in a sophistic manner.&lt;br /&gt;
** See Reddit&lt;br /&gt;
* How do you programmatically distinguish satire from fact&lt;br /&gt;
* The web&#039;s structure is also “shaped by inbound links but would be nice a bit more”&lt;br /&gt;
* Infrastructure doesn&#039;t need to change per se.&lt;br /&gt;
** The distributed architecture should still stay. Centralization of control of allowed information and access is terrible power. See China and the Middle-East.&lt;br /&gt;
** Information, for the most part, in itself, exists centrally (as per-page), though communities (to use a generic term) are distributed.&lt;br /&gt;
* Need more sophisticated natural language processing.&lt;br /&gt;
&lt;br /&gt;
== Class discussion ==&lt;br /&gt;
&lt;br /&gt;
Focusing on vision, not the mechanism.&lt;br /&gt;
&lt;br /&gt;
* Reverse linking&lt;br /&gt;
* Distributed content distribution (glorified cache)&lt;br /&gt;
** Both for privacy and redundancy reasons&lt;br /&gt;
** Suggested centralized content certification, but doesn&#039;t address the problem of root of trust and distributed consistency checking.&lt;br /&gt;
*** Distributed key management is a holy grail&lt;br /&gt;
*** What about detecting large-scale subversion attempts, like in China&lt;br /&gt;
* What is the new revenue model?&lt;br /&gt;
** What was TBL&#039;s revenue model (tongue-in-cheek, none)?&lt;br /&gt;
** Organisations like Google monetized the internet, and this mechanism could destroy their ability to do so.&lt;br /&gt;
* Search work is semi-distributed. Suggested letting the web do the work for you.&lt;br /&gt;
* Trying to structure content in a manner simultaneously palatable to both humans and machines.&lt;br /&gt;
* Using spare CPU time on servers for natural language processing (or other AI) of cached or locally available resources.&lt;br /&gt;
* Imagine a smushed Wolfram Alpha, Google, Wikipedia, and Watson, and then distributed over the net.&lt;br /&gt;
* The document was TBL&#039;s idea of the atom of content, whereas nowadays we really need something more granular.&lt;br /&gt;
* We want to extract higher-level semantics.&lt;br /&gt;
* Google may not be pure keyword search anymore. It essentially now uses AI to determine relevancy, but we still struggle with expressing what we want to Google.&lt;br /&gt;
* What about the adversarial aspect of content hosters, vying for attention?&lt;br /&gt;
* People do actively try to fool you.&lt;br /&gt;
* Compare to Google News, though that is very specific to that domain. Their vision is a semantic web, but they are incrementally building it.&lt;br /&gt;
* In a scary fashion, Google is one of the central points of failure of the web. Even scarier are the less technically competent people who depend on Facebook for that.&lt;br /&gt;
* There is a semantic gap between how we express and query information, and how AI understands it.&lt;br /&gt;
* Can think of Facebook as a distributed human search infrastructure.&lt;br /&gt;
* A core service of an operating system is locating information. &#039;&#039;&#039;Search is infrastructure.&#039;&#039;&#039;&lt;br /&gt;
* The problem is not purely technical. There are political and social aspects.&lt;br /&gt;
** Searching for a file on a local filesystem should have an unambiguous answer.&lt;br /&gt;
** Asking the web is a different thing. “What is the best chocolate bar?”&lt;br /&gt;
* Is the web a network database, as understood in COMP 3005, which we consider harmful?&lt;br /&gt;
* For two-way links, there is the problem of restructuring data and all the dependencies.&lt;br /&gt;
* Privacy issues when tracing paths across the web.&lt;br /&gt;
* What about the problem of information revocation?&lt;br /&gt;
* Need more augmented reality and distributed and micro payment systems.&lt;br /&gt;
* We need distributed, mutually untrusting social networks.&lt;br /&gt;
** Now we have the problem of storage and computation, but it also takes away some of the monetizable aspect.&lt;br /&gt;
* Distribution is not free. It is very expensive in very funny ways.&lt;br /&gt;
* The dream of harvesting all the computational power of the internet is not new.&lt;br /&gt;
** Startups have come and gone many times over that problem.&lt;br /&gt;
* Google&#039;s indexers understand quite well many documents on the web. However, Google only &#039;&#039;&#039;presents&#039;&#039;&#039; a primitive keyword-like interface. It doesn&#039;t expose the ontology.&lt;br /&gt;
* Organising information does not necessarily mean applying an ontology to it.&lt;br /&gt;
* The organisational methods we now use don&#039;t use ontologies, but rather are supplemented by them.&lt;br /&gt;
&lt;br /&gt;
A couple of related points Anil mentioned during the discussion:&lt;br /&gt;
Distributed key management is a holy grail that no one has ever managed to get working. Nowadays, databases have become important building blocks of the distributed operating system; Anil stressed that databases can in fact be considered an OS service these days. The question “How do you navigate the complex information space?” has remained a prominent question that the Web has always faced.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19036</id>
		<title>DistOS 2014W Lecture 20</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=19036"/>
		<updated>2014-04-21T09:49:48Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Back to Cassandra */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cassandra ==&lt;br /&gt;
&lt;br /&gt;
Cassandra is essentially running a BigTable interface on top of a Dynamo infrastructure.  BigTable uses GFS&#039; built-in replication and Chubby for locking.  Cassandra uses gossip algorithms (similar to Dynamo): [http://dl.acm.org/citation.cfm?id=1529983 Scuttlebutt].  &lt;br /&gt;
&lt;br /&gt;
=== A brief look at Open Source ===&lt;br /&gt;
&lt;br /&gt;
Initially, Anil talked about Google&#039;s versus Facebook&#039;s approach to technologies. &lt;br /&gt;
* Google developed its technology internally and used it for competitive advantage. &lt;br /&gt;
* Facebook developed its technology in an open source manner; they needed to create an open source community to keep up.&lt;br /&gt;
* He also talked a little bit about licences. With GPLv3 you have to provide the source code with the binary. The AGPL additionally requires that source be offered to users of a network service.&lt;br /&gt;
&lt;br /&gt;
While discussing HBase versus Cassandra, we asked why two projects with the same notion are supported by Apache as a community. For any tool in CS, particularly software tools, it is actually important to have more than one good implementation; the only time that doesn&#039;t happen is because of market realities. &lt;br /&gt;
&lt;br /&gt;
Hadoop is a set of technologies that represent the open source equivalent of&lt;br /&gt;
Google&#039;s infrastructure&lt;br /&gt;
* Cassandra -&amp;gt; ???&lt;br /&gt;
* HBase -&amp;gt; BigTable&lt;br /&gt;
* HDFS -&amp;gt; GFS&lt;br /&gt;
* Zookeeper -&amp;gt; Chubby&lt;br /&gt;
&lt;br /&gt;
=== Back to Cassandra ===&lt;br /&gt;
&lt;br /&gt;
* Cassandra basically takes a key-value store system like Dynamo and extends it to look like BigTable.&lt;br /&gt;
* It is not just a key-value store; it is a multi-dimensional map. You can look up different columns, etc. The data is more structured than in a key-value store.&lt;br /&gt;
* In a key-value store, you can only look up the key. Cassandra is much richer than this.&lt;br /&gt;
* A fundamental difference in Cassandra is that adding columns is trivial. &lt;br /&gt;
&lt;br /&gt;
Bigtable vs. Cassandra:&lt;br /&gt;
* Bigtable and Cassandra expose similar APIs.&lt;br /&gt;
* Cassandra seems to be lighter weight.&lt;br /&gt;
* Bigtable depends on GFS; Cassandra depends on the server&#039;s file system. Anil feels a Cassandra cluster is easy to set up. &lt;br /&gt;
* Bigtable is designed for stream-oriented batch processing. Cassandra is for handling online/realtime/high-speed workloads.&lt;br /&gt;
&lt;br /&gt;
Schema design is explained via the inbox example, but it does not make clear what the table will look like. Anil thinks they store a lot of data with the messages, which makes the table crappy.&lt;br /&gt;
	&lt;br /&gt;
Apache Zookeeper is used for distributed configuration. It will also bootstrap and configure a new node. It is similar to Chubby. Zookeeper is for node level information. The Gossip protocol is more about key partitioning information and distributing that information amongst nodes. &lt;br /&gt;
&lt;br /&gt;
Cassandra uses a modified version of the Accrual Failure Detector. The idea of accrual failure detection is that the failure detection module emits a value which represents a suspicion level for each monitored node. The value of phi is expressed on a scale that is dynamically adjusted to reflect network and load conditions at the monitored nodes.&lt;br /&gt;
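As a rough illustration of the accrual idea (not Cassandra's actual implementation, which estimates the heartbeat inter-arrival distribution more carefully), phi can be sketched like this; the class and method names are only illustrative:

```python
import math

class PhiAccrualDetector:
    """Sketch of an accrual failure detector: instead of a boolean
    alive/dead verdict, it emits a suspicion level (phi) that grows
    the longer a heartbeat is overdue relative to past behaviour."""

    def __init__(self):
        self.intervals = []       # observed gaps between heartbeats
        self.last_heartbeat = None

    def heartbeat(self, now):
        if self.last_heartbeat is not None:
            self.intervals.append(now - self.last_heartbeat)
        self.last_heartbeat = now

    def phi(self, now):
        # Model inter-arrival times as exponential with the observed mean;
        # phi = -log10(probability that the next heartbeat is still coming),
        # so phi rises smoothly as the silence gets more suspicious.
        if not self.intervals:
            return 0.0
        mean = sum(self.intervals) / len(self.intervals)
        elapsed = now - self.last_heartbeat
        p_later = math.exp(-elapsed / mean)
        return -math.log10(p_later)
```

A monitoring loop would compare phi against a threshold; a higher threshold trades slower detection for fewer false positives, which is exactly the dynamic adjustment described above.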
&lt;br /&gt;
Files are written to disk in a sequential way and are never mutated. This way, reading a file does not require locks. Garbage collection takes care of deletion.&lt;br /&gt;
&lt;br /&gt;
Cassandra writes in an immutable way, like functional programming. There is no assignment in functional programming; it tries to eliminate side effects. Data is just bound: you associate a name with a value. &lt;br /&gt;
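A toy sketch of this append-only, never-mutate write path (names and structure are illustrative, not Cassandra's actual SSTable format):

```python
class AppendOnlyStore:
    """Miniature version of the write path described above: entries are
    appended sequentially and never mutated in place, so reads need no
    locks; deletes are tombstones cleaned up later by compaction (the
    'garbage collection' mentioned above)."""

    def __init__(self):
        self.log = []            # stands in for sequential on-disk segments

    def put(self, key, value):
        self.log.append((key, value))    # never overwrite, only append

    def delete(self, key):
        self.log.append((key, None))     # tombstone, not in-place removal

    def get(self, key):
        # Latest entry wins; scanning backwards needs no coordination
        # with writers because existing entries never change.
        for k, v in reversed(self.log):
            if k == key:
                return v
        return None

    def compact(self):
        # 'Garbage collection': rewrite the log keeping only live data.
        latest = {}
        for k, v in self.log:
            latest[k] = v
        self.log = [(k, v) for k, v in latest.items() if v is not None]
```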
&lt;br /&gt;
&lt;br /&gt;
Cassandra - &lt;br /&gt;
* Uses consistent hashing (like most DHTs)&lt;br /&gt;
* Lighter weight &lt;br /&gt;
* Almost all of the readings are part of Apache&lt;br /&gt;
* More designed for online updates and interactive, lower-latency use &lt;br /&gt;
* Once they write to disk they only read back&lt;br /&gt;
* Scalable multi master database with no single point of failure&lt;br /&gt;
* Reason for not giving out the complete detail on the table schema&lt;br /&gt;
* Probably not just inbox search&lt;br /&gt;
* All data in one row of a table &lt;br /&gt;
* It&#039;s not just a key-value store mapping keys to big blobs of data. &lt;br /&gt;
* Gossip-based protocol - Scuttlebutt. Every node is aware of every other.&lt;br /&gt;
* Fixed circular ring &lt;br /&gt;
* Consistency issues are not addressed at all. Writes are done in an immutable way and never changed. &lt;br /&gt;
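The consistent hashing mentioned in the list above can be sketched as follows (a minimal illustration of the ring idea common to most DHTs, not Cassandra's actual partitioner; virtual nodes and replication are omitted):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Sketch of the fixed circular ring: nodes and keys hash onto the
    same ring, and each key belongs to the first node clockwise from
    its position, so adding or removing one node only remaps the keys
    in that node's arc rather than rehashing everything."""

    def __init__(self, nodes=()):
        self.ring = []                   # sorted (position, node) pairs
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(item):
        # Any uniform hash works; md5 is just a convenient stand-in.
        return int(hashlib.md5(item.encode()).hexdigest(), 16)

    def add_node(self, node):
        bisect.insort(self.ring, (self._hash(node), node))

    def remove_node(self, node):
        self.ring.remove((self._hash(node), node))

    def lookup(self, key):
        # First node at or after the key's position, wrapping around.
        pos = self._hash(key)
        i = bisect.bisect_left(self.ring, (pos, ''))
        return self.ring[i % len(self.ring)][1]
```

Note the key property: removing any node other than a key's owner never changes where that key lives.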
&lt;br /&gt;
Older style network protocol - token rings&lt;br /&gt;
What sort of computational systems avoid changing data?&lt;br /&gt;
Systems talking about implementing functional like semantics.&lt;br /&gt;
&lt;br /&gt;
== Comet ==&lt;br /&gt;
&lt;br /&gt;
The major idea behind Comet is triggers/callbacks.  There is an extensive literature on extensible operating systems, basically adding code to the operating system to better suit a particular application.  &amp;quot;Generally, extensible systems suck.&amp;quot; -[[User:Soma]] This was popular before operating systems were open source.&lt;br /&gt;
&lt;br /&gt;
[https://www.usenix.org/conference/osdi10/comet-active-distributed-key-value-store The presentation video of Comet]&lt;br /&gt;
&lt;br /&gt;
Comet seeks to greatly expand the application space for key-value storage systems through application-specific customization. A Comet storage object is a &amp;lt;key, value&amp;gt; pair. Each Comet node stores a collection of active storage objects (ASOs) that consist of a key, a value, and a set of handlers. Comet handlers run as a result of timers or storage operations, such as get or put, allowing an ASO to take dynamic, application-specific actions to customize its behaviour. Handlers are written in a simple sandboxed extension language, providing safety and isolation. An ASO can modify its environment, monitor its execution, and make dynamic decisions about its state.&lt;br /&gt;
&lt;br /&gt;
The researchers try to provide the ability to extend a DHT without requiring a substantial investment of effort to modify its implementation. They implement isolation and safety by restricting system access, restricting resource consumption, and restricting within-Comet communication.&lt;br /&gt;
&lt;br /&gt;
* Provides callbacks (akin to database triggers)&lt;br /&gt;
* Provides DHT platform that is extensible at the application level&lt;br /&gt;
* Uses Lua&lt;br /&gt;
* Provided extensibility in an untrusted environment. Dynamo, by contrast, was extensible but only in a trusted environment.&lt;br /&gt;
* Why do we care? We don&#039;t really. Why would you want this extensibility? You wouldn&#039;t; it isn&#039;t worth the cost. Current systems already allow for tunability.&lt;br /&gt;
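The trigger/callback idea behind an ASO can be illustrated with a toy object. Comet's real handlers are sandboxed Lua run by the DHT node; plain Python callbacks and these method names are only stand-ins for the concept:

```python
class ActiveStorageObject:
    """Toy ASO: a key/value pair bundled with handlers that fire on
    storage operations such as get and put, letting the object take
    application-specific actions (the 'database trigger' idea)."""

    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.handlers = {}        # operation name -> callback

    def on(self, op, handler):
        self.handlers[op] = handler

    def get(self):
        if 'get' in self.handlers:
            self.handlers['get'](self)   # application-specific action
        return self.value

    def put(self, value):
        self.value = value
        if 'put' in self.handlers:
            self.handlers['put'](self)

# Example: an object that counts its own accesses, something a plain
# key-value store cannot do without modifying the store itself.
aso = ActiveStorageObject('msg:1', 'hello')
hits = []
aso.on('get', lambda obj: hits.append(obj.key))
aso.get()
```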
&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* If someone wants to understand consistent hashing in detail, here is a blog post that explains it really well; the blog has other great posts in the field of distributed systems as well:&lt;br /&gt;
http://loveforprogramming.quora.com/Distributed-Systems-Part-1-A-peek-into-consistent-hashing&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19021</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19021"/>
		<updated>2014-04-19T05:14:06Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* 7 Dwarfs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / to parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances with games primarily&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
* In general you can model modern programs (Java, C, etc.) as finite state machines, which are not parallelizable&lt;br /&gt;
* Today we deal with processor limitations by using &amp;quot;experts&amp;quot; to build the system, which results in a very specialized solution, usually in the cloud&lt;br /&gt;
* The authors have identified the problem but not really the process&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra (hard to parallelize)&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo (parallelizable)&lt;br /&gt;
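Monte Carlo sits at the parallelizable end of the list because its samples are independent. A minimal sketch (the function names are illustrative), estimating pi from shards that share no state:

```python
import random

def pi_samples(n, seed):
    """One independent shard of a Monte Carlo estimate of pi: count
    random points landing inside the unit quarter-circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def estimate_pi(total, shards=4):
    # Shards share no state, so they could run on separate cores or
    # machines and be summed afterwards: the embarrassingly parallel
    # shape that makes Monte Carlo the easiest dwarf to scale out.
    per = total // shards
    hits = sum(pi_samples(per, seed) for seed in range(shards))
    return 4.0 * hits / (per * shards)
```

Contrast this with the finite state machines at the other end of the extended list, where each step depends on the previous state and there is nothing independent to farm out.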
&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines (hardest to parallelize)&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Pretty impressive on getting everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce &lt;br /&gt;
* Programs that run on distributed operating systems - applications that can be expected to be massively parallel - what sort of computational model is needed - Abstractions needed on top of the stack. &lt;br /&gt;
* Predictions about the processing power&lt;br /&gt;
* GPUs do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster over the past years; single-thread performance has stalled. &lt;br /&gt;
* Games are among the few mainstream applications that can&#039;t get by on a single thread&lt;br /&gt;
* Low power &lt;br /&gt;
* Being able to run a smartphone with hundreds of low-power cores - stalled by sequential processing&lt;br /&gt;
* What do we need the additional processing power for? Games, games, games&lt;br /&gt;
* Doomsday of the IT industry &lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;br /&gt;
* Linux is a very general operating system now, but when it started it was hard-coded to one processor (the 80386) and specialized to run on one box. Now it runs everywhere: it has abstractions dealing with the various aspects of hardware and architecture - multiple layers of abstraction, because they proved useful.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19020</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19020"/>
		<updated>2014-04-19T05:13:28Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* 7 Dwarfs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / to parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances primarily through games&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
* In general you can model modern programs (Java, C, etc.) as finite state machines, which are not parallelizable&lt;br /&gt;
* Today we deal with processor limitations by using &amp;quot;experts&amp;quot; to build the system, which results in a very specialized solution, usually in the cloud&lt;br /&gt;
* The authors have identified the problem, but not really a process for solving it&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra&lt;br /&gt;
** Hard to parallelize&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo&lt;br /&gt;
** Parallelizable&lt;br /&gt;
&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines (hardest to parallelize)&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Pretty impressive on getting everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce &lt;br /&gt;
* Programs that run on distributed operating systems - applications that can be expected to be massively parallel - what sort of computational model is needed - Abstractions needed on top of the stack. &lt;br /&gt;
* Predictions about the processing power&lt;br /&gt;
* GPUs do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster over the past years; single-thread performance has stalled. &lt;br /&gt;
* Games are among the few mainstream applications that can&#039;t get by on a single thread&lt;br /&gt;
* Low power &lt;br /&gt;
* Being able to run a smartphone with hundreds of low-power cores - stalled by sequential processing&lt;br /&gt;
* What do we need the additional processing power for? Games, games, games&lt;br /&gt;
* Doomsday of the IT industry &lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;br /&gt;
* Linux is a very general operating system now, but when it started it was hard-coded to one processor (the 80386) and specialized to run on one box. Now it runs everywhere: it has abstractions dealing with the various aspects of hardware and architecture - multiple layers of abstraction, because they proved useful.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19019</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19019"/>
		<updated>2014-04-19T05:12:50Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Extended Dwarfs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / to parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances primarily through games&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
* In general you can model modern programs (Java, C, etc.) as finite state machines, which are not parallelizable&lt;br /&gt;
* Today we deal with processor limitations by using &amp;quot;experts&amp;quot; to build the system, which results in a very specialized solution, usually in the cloud&lt;br /&gt;
* The authors have identified the problem, but not really a process for solving it&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra&lt;br /&gt;
** Hard to parallelize&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines (hardest to parallelize)&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Pretty impressive on getting everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce &lt;br /&gt;
* Programs that run on distributed operating systems - applications that can be expected to be massively parallel - what sort of computational model is needed - Abstractions needed on top of the stack. &lt;br /&gt;
* Predictions about the processing power&lt;br /&gt;
* GPUs do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster over the past years; single-thread performance has stalled. &lt;br /&gt;
* Games are among the few mainstream applications that can&#039;t get by on a single thread&lt;br /&gt;
* Low power &lt;br /&gt;
* Being able to run a smartphone with hundreds of low-power cores - stalled by sequential processing&lt;br /&gt;
* What do we need the additional processing power for? Games, games, games&lt;br /&gt;
* Doomsday of the IT industry &lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;br /&gt;
* Linux is a very general operating system now, but when it started it was hard-coded to one processor (the 80386) and specialized to run on one box. Now it runs everywhere: it has abstractions dealing with the various aspects of hardware and architecture - multiple layers of abstraction, because they proved useful.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15445</id>
		<title>COMP 3000 2011 Report: PoseidonLinux</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15445"/>
		<updated>2011-12-07T01:35:04Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Initialization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:PoseidonLogo2.png‎]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Part I=&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
The distribution is named Poseidon Linux, and was created by a team of Brazilian scientists, most of whom are oceanographers and marine biologists. The development team consists of five people, with contributions from quite a few others. &lt;br /&gt;
&lt;br /&gt;
The target audience for this distribution is the scientific community, and many of the programs that come pre-installed are intended for academic and scientific use. It includes a lot of specialized software that isn’t available in the Ubuntu/Debian repositories, and thereby provides a useful collection of programs. The specialized programs pertain to subjects such as math and statistics, computer-aided design, multi-dimensional graphical visualization, chemistry, and bioinformatics. I obtained the operating system by downloading it from the Poseidon Linux homepage at: https://sites.google.com/site/poseidonlinux/download  &lt;br /&gt;
As of now, the most recent version of the operating system is Poseidon Linux 4.0; it is only available as a 32-bit build at the moment, though a 64-bit version is promised for the future. An older version of the operating system, Poseidon Linux 3.2, is also available for download. The image file for Poseidon Linux is around 3.7 GB and, when fully installed, the system requires at least 9.8 GB of hard drive space. Poseidon Linux was originally derived from Kurumin Linux; after Kurumin was officially discontinued on January 29, 2009, Poseidon became based on Ubuntu, with the first Ubuntu-based release being Poseidon 3.0. Poseidon 3.0 was based on Ubuntu 8.04 LTS, whereas version 4.0 was based on Ubuntu 10.04.&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseidon_install_1.jpg|400px|right]][[File:Poseiden_install_2.jpg|300px|left]]&lt;br /&gt;
I installed Poseidon Linux using VirtualBox. When setting up the new virtual machine, I allocated 4 GB of RAM, set the file type of the new virtual disk to .VDI, and set the virtual disk file to use dynamically allocated space. After creating the virtual machine and specifying its settings, I ran the machine, selected the image file containing Poseidon, and began the installation (screenshots below). A minor issue I came across during installation was my hard drive not having enough space left for Poseidon to install. This problem was quickly fixed after I reluctantly deleted a few legally downloaded movies on my hard drive to free up enough space for the installation. &lt;br /&gt;
&lt;br /&gt;
During the installation, the user is asked to input information, and thereby allowed to customize several things. These include the current date and time, as well as the time zone and location in which the user resides, the user&#039;s keyboard layout, and the user&#039;s preferred username and password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseiden_4.jpg|400px|left|howboutNO?]]&amp;lt;div style=&amp;quot;text-align: left;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 1)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;[[File:Poseiden_3.jpg|400px||right|Pictured: not a space simulator]]&amp;lt;div style=&amp;quot;text-align: right;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 2)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
Due to the subject specialization of most of the programs, and the knowledge required to use them to their respective potentials as intended, I’m unable to adequately review the programs and give a detailed analysis of how useful or well-made they are &#039;&#039;(see fig. 1)&#039;&#039;. In retrospect, it was unwise to choose a distribution based on the coolest-sounding name rather than practical usage. Aside from the inherent complexity of many of the programs that came with Poseidon, many of the listed programs simply do not execute when selected. The most noticeable disappointment among the inactive/broken/not-yet-implemented programs was the OpenUniverse Space Simulator, which by merit of name alone was obviously intended to be the high point of Poseidon’s entire existence &#039;&#039;(see fig. 2)&#039;&#039;. Other non-functioning programs include Stellarium, presumably another failed space simulator, and the PyMOL Molecular Graphics System, a program listed under Bioinformatics. Aside from the programs targeting Poseidon’s main audience, there are programming IDEs like Eclipse and Qt Creator, audio/video/image editing programs like Audacity, Pitivi Video Editor, and GIMP, office applications that include an assortment of LibreOffice programs, and 3D graphics modeling programs like Blender. &lt;br /&gt;
&lt;br /&gt;
The first working program I tested was GPeriodic, a periodic table of the elements that allows the user to select any element on the table to view more detailed information about it. The reason I chose GPeriodic as one of the programs to test was to see how many features it had, in order to get a feel for how in-depth the programs provided by Poseidon are. GPeriodic proved to be full of information pertaining to each element, organized in an ordered and easily accessible manner. Such a program is of obvious value to anyone who is using Poseidon for scientific purposes, particularly those doing chemistry. The second program I tested was called fityk, a data analysis program with a horribly chosen name. Since it claimed to do data analysis and had numbers everywhere, I figured it was legit. It also allows the user to select from a long list of function types, from quadratic to exponential decay. My reason for choosing to test fityk was that it was one of the few remaining programs that seemed to be reasonably understandable without needing too much prerequisite knowledge. Beyond this, the rest of the programs under Applications =&amp;gt; Poseidon become even more obscure and complicated, though through no fault of their own, considering that Poseidon is advertised to have a rather specific purpose, geared toward the scientific community.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
Given that several programs simply don’t work, it is obvious that Poseidon falls somewhat short of its design goals. These errors will presumably be fixed in future updates/versions; for the time being, there are many other working programs that are quite unique and should be very useful to users who are looking for programs designed for very specific uses in the areas of mathematics/statistics, chemistry, bioinformatics, and GIS/CAD (computer-aided design). The non-working programs indicate that Poseidon is still in an unfinished state, which leaves a slightly disappointing impression; aside from that, Poseidon has a nice visual presentation, with an appealing default desktop background and a clean interface. The list of programs Poseidon features gives anyone who knows how to use them a wide variety of tools at their disposal.&lt;br /&gt;
&lt;br /&gt;
To Poseidon’s target audience, who can look past superficial flaws like the absence of promised space simulators and a few missing programs, Poseidon has enough math- and science-related programs to provide at least some level of usefulness to anyone who would require such programs. Poseidon delivers on its promise as a distribution designed for academic and scientific use.&lt;br /&gt;
&lt;br /&gt;
=Part II=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
&lt;br /&gt;
[[File:Synaptic.jpg‎|400px|right]]&lt;br /&gt;
Poseidon’s packaging format is deb, the Debian software package format. A Debian package contains two tar archives (compressed with gzip, bzip2, or LZMA): one holds control information, while the other holds the actual data.&lt;br /&gt;
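To make that two-archive layout concrete, here is a sketch that builds a toy, empty-payload .deb in memory (the ar container plus the standard-library tarfile module) and then lists its members the way the ar tool would. The member contents are placeholders, not a real package:&lt;br /&gt;

```python
import io
import tarfile

def empty_tgz():
    """A gzipped tar archive with no members (stand-in for real contents)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz"):
        pass
    return buf.getvalue()

def ar_member(name, data):
    """One ar archive member: a 60-byte text header, the data, 2-byte alignment."""
    fields = "{:16}{:12}{:6}{:6}{:8}{:10}".format(
        name, "0", "0", "0", "100644", str(len(data)))
    header = fields.encode("ascii") + b"\x60\n"   # terminator: backtick, newline
    return header + data + (b"\n" if len(data) % 2 else b"")

# A .deb is an ar archive: a version marker, a control tarball, a data tarball.
AR_MAGIC = bytes([0x21, 0x3C]) + b"arch" + bytes([0x3E, 0x0A])  # the 8-byte ar magic
deb = AR_MAGIC
deb += ar_member("debian-binary", b"2.0\n")
deb += ar_member("control.tar.gz", empty_tgz())
deb += ar_member("data.tar.gz", empty_tgz())

# Walk the archive and list member names, as dpkg or the ar tool would.
names = []
offset = len(AR_MAGIC)
while len(deb) > offset:
    name = deb[offset:offset + 16].decode("ascii").strip()
    size = int(deb[offset + 48:offset + 58])     # size field of the header
    names.append(name)
    offset += 60 + size + (size % 2)             # data is padded to even length
print(names)  # ['debian-binary', 'control.tar.gz', 'data.tar.gz']
```

Real packages carry maintainer scripts and metadata in the control tarball and the files to install in the data tarball; the ordering of the three members is fixed.&lt;br /&gt;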
&lt;br /&gt;
The Synaptic Package Manager is a package management system that comes with Poseidon Linux. Synaptic offers a relatively easy-to-use graphical interface, with a clear layout that is simple to understand. The left panel is the package browser, which displays package categories, and the larger panel on the right displays the packages included in the highlighted category. When a package is selected, the bottom panel displays a text description of it. Also included is the version of each package that is currently installed on the system, as well as the newest version available, so that the user can keep track of how up to date their software is. &lt;br /&gt;
&lt;br /&gt;
You add packages in Synaptic by going to File, and then Add Downloaded Packages. Of course, if the package you want to add is not already on your system, you must first download it. To remove packages, right-click on a package in the larger right panel and select Mark for Removal/Complete Removal to check the box next to it, and then click the Apply icon at the top to carry out the removal. The software catalog for Poseidon is very extensible thanks to Synaptic, which allows the user to add any additional software packages, as well as remove any unwanted ones. Synaptic’s inclusion of “installed version” and “latest version” information makes it even easier for the user to keep their software updated.&lt;br /&gt;
&lt;br /&gt;
Those who dislike using GUIs have the option of adding or removing packages from the command line using a tool called apt-get. To remove a package, simply type &#039;&#039;&#039;&#039;&#039;apt-get remove [packagename]&#039;&#039;&#039;&#039;&#039;, and to add a package, type &#039;&#039;&#039;&#039;&#039;apt-get install [packagename]&#039;&#039;&#039;&#039;&#039;. A damaged installed package can even be reinstalled by using &#039;&#039;&#039;&#039;&#039;apt-get --reinstall install [packagename]&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Major Package Version==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux kernel&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.6.32&lt;br /&gt;
*Latest Version:		3.1.4&lt;br /&gt;
*Up-to-Date: 			No, the kernel is not up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			Kernel is the main component of the operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libc6&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	2.11.1-0ubuntu7.8		(released Feb. 1st, 2011)&lt;br /&gt;
*Latest Version:	2.11.1-0ubuntu7.8	&lt;br /&gt;
*Up-to-Date:		This package is up to date. &lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose: 		The system&#039;s C library, which nearly all programs depend on.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bash&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	4.1-2ubuntu3			(released April 19th, 2010)&lt;br /&gt;
*Latest Version:	4.1-2ubuntu3&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Allows user to input commands without using GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Firefox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	5.0+build1+nobinonly-0ubuntu0.10.04.1~mfs1	&lt;br /&gt;
*Latest Version:	8.0.1&lt;br /&gt;
*Up-to-Date: 		This package is not up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Web browser gives the user access to the world wide web.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Thunderbird&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1	(released April 24th, 2011)&lt;br /&gt;
*Latest Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Email client that also has RSS feeds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;gtk2-engines-pixbuf&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Latest Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Up-to-Date: 			This package is up to date&lt;br /&gt;
*Modifications:			This package contains the pixbuf theme engine.&lt;br /&gt;
*Purpose:			Gtk+ is a multi-platform toolkit for constructing GUIs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;dpkg&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Latest Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Handles both the installation and removal of Debian software packages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;busybox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1:1.13.3-1ubuntu11&lt;br /&gt;
*Latest Version:	1:1.13.3-1ubuntu11	&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Combines small versions of many common UNIX utilities into a single executable file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libreoffice&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.13.2&lt;br /&gt;
*Latest Version:	3.4.4&lt;br /&gt;
*Up-to-Date: 		This package is out of date. &lt;br /&gt;
*Modifications:		The option exists to extend the functionality of LibreOffice by installing additional packages.&lt;br /&gt;
*Purpose:		A free alternative to Microsoft Office, useful and relevant considering Poseidon is intended for specific math/science requirements.&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux follows the usual Linux startup sequence, in which the BIOS runs first. The BIOS (basic input/output system) checks that the hardware and peripherals of the computer are functioning together, and then loads the Master Boot Record, the first 512 bytes of a data storage device, which holds the boot record of an operating system. The MBR then starts the boot loader, the program that loads the operating system; in the case of Poseidon Linux, the boot loader is GRUB. GRUB then loads the kernel. Once the kernel is loaded, it starts Upstart, a replacement for the older System V init. This is because Poseidon Linux is based on Ubuntu, and Ubuntu has used Upstart since the release of Ubuntu 6.10.&lt;br /&gt;
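The 512-byte MBR can be recognized by its final two bytes, the 0x55 0xAA boot signature that the BIOS checks before handing control to the boot code. A small sketch using a dummy, all-zero sector rather than a real disk:&lt;br /&gt;

```python
def looks_like_mbr(sector):
    """BIOS-style sanity check: a 512-byte sector ending in the 0x55 0xAA signature."""
    return len(sector) == 512 and sector[510:512] == b"\x55\xaa"

# Dummy sector: 446 bytes of boot-code area, a 64-byte partition table
# (all zeroed here), and the mandatory 2-byte boot signature.
sector = bytes(446) + bytes(64) + b"\x55\xaa"

print(looks_like_mbr(sector))      # True
print(looks_like_mbr(bytes(512)))  # False: no signature
```

If the signature is absent, the BIOS reports the device as non-bootable instead of jumping into the boot code.&lt;br /&gt;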
&lt;br /&gt;
Because Poseidon Linux is based on Ubuntu, instead of reading /etc/inittab and entering a run level, it uses Upstart, which is event-driven: processes start when a condition is met or an event is triggered.&lt;br /&gt;
&lt;br /&gt;
Some major jobs that are run include acpid.conf, cron.conf, dbus.conf, hostname.conf, hwclock.conf, network-manager.conf, and udev.conf.&lt;br /&gt;
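Each of those .conf files is an Upstart job definition living under /etc/init/, declaring the events that start and stop it rather than a run-level entry. A hypothetical job, just to show the shape (the daemon and its path are invented):&lt;br /&gt;

```
# /etc/init/example-daemon.conf -- hypothetical Upstart job
description "example daemon"

start on (filesystem and started dbus)   # begin once these events have fired
stop on runlevel [06]                    # stop on halt or reboot

respawn                                  # restart the process if it dies
exec /usr/sbin/example-daemon
```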
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1)	&amp;quot;Poseidon Linux.&amp;quot; Wikipedia, the Free Encyclopedia. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
2)	&amp;quot;DistroWatch.com: Poseidon Linux.&amp;quot; DistroWatch.com. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
3)	Poseidon Linux homepage. &amp;lt;https://sites.google.com/site/poseidonlinux/&amp;gt;. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
4)	&amp;quot;First Look at Poseidon Linux, the Linux for Scientists.&amp;quot; Linux.com. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
5)	&amp;quot;Deb (file format).&amp;quot; Wikipedia, the Free Encyclopedia. Web. 16 Nov. 2011. &amp;lt;http://en.wikipedia.org/wiki/Deb_(file_format)&amp;gt;.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15444</id>
		<title>COMP 3000 2011 Report: PoseidonLinux</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15444"/>
		<updated>2011-12-06T22:21:42Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Initialization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:PoseidonLogo2.png‎]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Part I=&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
The distribution is named Poseidon Linux, and was created by a team of Brazilian scientists, most of whom are oceanographers and marine biologists. The development team consists of five people, with contributions from quite a few others. &lt;br /&gt;
&lt;br /&gt;
The target audience for this distribution is the scientific community, and many of the programs that come pre-installed are intended for academic and scientific use. It includes a lot of specialized software that isn’t available in the Ubuntu/Debian repositories, and thereby provides a useful collection of programs. The specialized programs pertain to subjects such as math and statistics, computer-aided design, multi-dimensional graphical visualization, chemistry, and bioinformatics. I obtained the operating system by downloading it from the Poseidon Linux homepage at: https://sites.google.com/site/poseidonlinux/download  &lt;br /&gt;
As of now, the most recent version of the operating system is Poseidon Linux 4.0; it is only available as a 32-bit build at the moment, though a 64-bit version is promised for the future. An older version of the operating system, Poseidon Linux 3.2, is also available for download. The image file for Poseidon Linux is around 3.7 GB and, when fully installed, the system requires at least 9.8 GB of hard drive space. Poseidon Linux was originally derived from Kurumin Linux; after Kurumin was officially discontinued on January 29, 2009, Poseidon became based on Ubuntu, with the first Ubuntu-based release being Poseidon 3.0. Poseidon 3.0 was based on Ubuntu 8.04 LTS, whereas version 4.0 was based on Ubuntu 10.04.&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseidon_install_1.jpg|400px|right]][[File:Poseiden_install_2.jpg|300px|left]]&lt;br /&gt;
I installed Poseidon Linux using VirtualBox. When setting up the new virtual machine, I allocated 4 GB of RAM, set the file type of the new virtual disk to .VDI, and set the virtual disk file to use dynamically allocated space. After creating the virtual machine and specifying its settings, I ran the machine, selected the image file containing Poseidon, and began the installation (screenshots below). A minor issue I came across during installation was my hard drive not having enough space left for Poseidon to install. This problem was quickly fixed after I reluctantly deleted a few legally downloaded movies on my hard drive to free up enough space for the installation. &lt;br /&gt;
&lt;br /&gt;
During the installation, the user is asked to input information, and thereby allowed to customize several things. These include the current date and time, as well as the time zone and location in which the user resides, the user&#039;s keyboard layout, and the user&#039;s preferred username and password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseiden_4.jpg|400px|left|howboutNO?]]&amp;lt;div style=&amp;quot;text-align: left;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 1)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;[[File:Poseiden_3.jpg|400px||right|Pictured: not a space simulator]]&amp;lt;div style=&amp;quot;text-align: right;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 2)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
Due to the subject specialization of most of the programs, and the knowledge required to use them to their respective potentials as intended, I’m unable to adequately review the programs and give a detailed analysis of how useful or well-made they are &#039;&#039;(see fig. 1)&#039;&#039;. In retrospect, it was unwise to choose a distribution based on the coolest-sounding name rather than practical usage. Aside from the inherent complexity of many of the programs that came with Poseidon, many of the listed programs simply do not execute when selected. The most noticeable disappointment among the inactive/broken/not-yet-implemented programs was the OpenUniverse Space Simulator, which by merit of name alone was obviously intended to be the high point of Poseidon’s entire existence &#039;&#039;(see fig. 2)&#039;&#039;. Other non-functioning programs include Stellarium, presumably another failed space simulator, and the PyMOL Molecular Graphics System, a program listed under Bioinformatics. Aside from the programs targeting Poseidon’s main audience, there are programming IDEs like Eclipse and Qt Creator, audio/video/image editing programs like Audacity, Pitivi Video Editor, and GIMP, office applications that include an assortment of LibreOffice programs, and 3D graphics modeling programs like Blender. &lt;br /&gt;
&lt;br /&gt;
The first working program I tested was GPeriodic, a periodic table of the elements that allows the user to select any element on the table to view more detailed information about it. The reason I chose GPeriodic as one of the programs to test was to see how many features it had, in order to get a feel for how in-depth the programs provided by Poseidon are. GPeriodic proved to be full of information pertaining to each element, organized in an ordered and easily accessible manner. Such a program is of obvious value to anyone who is using Poseidon for scientific purposes, particularly those doing chemistry. The second program I tested was called fityk, a data analysis program with a horribly chosen name. Since it claimed to do data analysis and had numbers everywhere, I figured it was legit. It also allows the user to select from a long list of function types, from quadratic to exponential decay. My reason for choosing to test fityk was that it was one of the few remaining programs that seemed to be reasonably understandable without needing too much prerequisite knowledge. Beyond this, the rest of the programs under Applications =&amp;gt; Poseidon become even more obscure and complicated, though through no fault of their own, considering that Poseidon is advertised to have a rather specific purpose, geared toward the scientific community.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
Given that several programs simply don’t work, it is obvious that Poseidon falls somewhat short of its design goals. These errors will presumably be fixed in future updates/versions; for the time being, there are many other working programs that are quite unique and should be very useful to users who are looking for programs designed for very specific uses in the areas of mathematics/statistics, chemistry, bioinformatics, and GIS/CAD (computer-aided design). The non-working programs indicate that Poseidon is still in an unfinished state, which leaves a slightly disappointing impression; aside from that, Poseidon has a nice visual presentation, with an appealing default desktop background and a clean interface. The list of programs Poseidon features gives anyone who knows how to use them a wide variety of tools at their disposal.&lt;br /&gt;
&lt;br /&gt;
To Poseidon’s target audience, who can look past superficial flaws like the absence of promised space simulators and a few missing programs, Poseidon has enough math- and science-related programs to provide at least some level of usefulness to anyone who would require such programs. Poseidon delivers on its promise as a distribution designed for academic and scientific use.&lt;br /&gt;
&lt;br /&gt;
=Part II=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
&lt;br /&gt;
[[File:Synaptic.jpg‎|400px|right]]&lt;br /&gt;
Poseidon’s packaging format is deb, the Debian software package format. A deb package is an ar archive containing two tar archives, each compressed with gzip, bzip2, or LZMA: one holds control information, while the other holds the actual data.&lt;br /&gt;
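This layout can be sketched by hand with standard tools (ar and tar). The package contents and file names below are made up for the demonstration; real packages are produced with dpkg-deb rather than like this.&lt;br /&gt;

```shell
# Build a minimal deb-style archive by hand to show its layout.
# (Illustrative only -- real packages are produced by dpkg-deb.)
mkdir -p demo/control demo/data
echo "Package: demo"  > demo/control/control   # control information
echo "hello, world"   > demo/data/readme.txt   # actual data
echo "2.0"            > debian-binary          # deb format version marker

tar -czf control.tar.gz -C demo/control control
tar -czf data.tar.gz    -C demo/data    readme.txt

# A .deb is an ar archive holding these members, in this order:
ar rc demo.deb debian-binary control.tar.gz data.tar.gz
ar t demo.deb
```

Listing the resulting archive with ar t shows the three members (debian-binary, control.tar.gz, data.tar.gz) that make up a deb package.&lt;br /&gt;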
&lt;br /&gt;
The Synaptic Package Manager is the graphical package manager that comes with Poseidon Linux. Synaptic offers a relatively easy-to-use interface with a clear, simple layout. The left panel is the package browser, which displays package categories, and the larger panel on the right displays the packages included in the highlighted category. When a package is selected, the bottom panel displays a text description of it. Synaptic also shows the version of each package currently installed on the system, as well as the newest available version, so that the user can keep track of how up to date their software is. &lt;br /&gt;
&lt;br /&gt;
You add packages in Synaptic by going to File, then Add Downloaded Packages. Of course, if the package you want to add is not already on your system, you must first download it. To remove packages, right-click a package in the larger right panel and select Mark for Removal or Mark for Complete Removal to check the box next to it, then click the Apply icon at the top to carry out the removal. The software catalog for Poseidon is very extensible thanks to Synaptic, which lets the user add additional software packages and remove unwanted ones. Synaptic’s “installed version” and “latest version” columns make it even easier for the user to keep software up to date.&lt;br /&gt;
&lt;br /&gt;
Those who dislike GUIs can add or remove packages from the command line using a tool called apt-get, run as root (or via sudo). To remove a package, simply type &#039;&#039;&#039;&#039;&#039;apt-get remove [packagename]&#039;&#039;&#039;&#039;&#039;, and to add a package, type &#039;&#039;&#039;&#039;&#039;apt-get install [packagename]&#039;&#039;&#039;&#039;&#039;. A damaged package can even be reinstalled using &#039;&#039;&#039;&#039;&#039;apt-get --reinstall install [packagename]&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux kernel&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.6.32&lt;br /&gt;
*Latest Version:		3.1.4&lt;br /&gt;
*Up-to-Date: 			No, the kernel is not up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			The kernel is the core component of the operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libc6&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	2.11.1-0ubuntu7.8		(released Feb. 1st, 2011)&lt;br /&gt;
*Latest Version:	2.11.1-0ubuntu7.8	&lt;br /&gt;
*Up-to-Date:		This package is up to date. &lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose: 		Provides the standard C library that nearly all programs depend on.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bash&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	4.1-2ubuntu3			(released April 19th, 2010)&lt;br /&gt;
*Latest Version:	4.1-2ubuntu3&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Shell that lets the user enter commands without using a GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Firefox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	5.0+build1+nobinonly-0ubuntu0.10.04.1~mfs1	&lt;br /&gt;
*Latest Version:	8.0.1&lt;br /&gt;
*Up-to-Date: 		This package is not up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Web browser that gives the user access to the World Wide Web.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Thunderbird&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1	(released April 24th, 2011)&lt;br /&gt;
*Latest Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Email client that also has RSS feeds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;gtk2-engines-pixbuf&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Latest Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Up-to-Date: 			This package is up to date.&lt;br /&gt;
*Modifications:			This package contains the pixbuf theme engine.&lt;br /&gt;
*Purpose:			Gtk+ is a multi-platform toolkit for constructing GUIs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;dpkg&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Latest Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Handles both the installation and removal of Debian software packages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;busybox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1:1.13.3-1ubuntu11&lt;br /&gt;
*Latest Version:	1:1.13.3-1ubuntu11	&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Combines small versions of many common UNIX utilities into a single executable file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libreoffice&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.13.2&lt;br /&gt;
*Latest Version:	3.4.4&lt;br /&gt;
*Up-to-Date: 		This package is out of date. &lt;br /&gt;
*Modifications:		The option exists to extend the functionality of LibreOffice by installing additional packages.&lt;br /&gt;
*Purpose:		A free alternative to Microsoft Office, useful and relevant considering Poseidon is intended for specific math/science requirements.&lt;br /&gt;
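The up-to-date judgments above amount to comparing version strings. A version-aware comparison can be sketched with GNU sort’s -V option; the kernel numbers from the table are reused here.&lt;br /&gt;

```shell
# Decide whether an installed version is current by sorting the two
# version strings with version-aware sort (GNU coreutils `sort -V`).
installed="2.6.32"   # kernel version from the table above
latest="3.1.4"

newest=$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | tail -n 1)
if [ "$newest" = "$installed" ]; then
    echo "up to date"
else
    echo "out of date"   # 2.6.32 sorts before 3.1.4, so this branch runs
fi
```

Note that full Debian version strings (with epochs such as 1:1.13.3-1ubuntu11) are properly compared with dpkg --compare-versions, which understands the complete format.&lt;br /&gt;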
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux follows the usual Linux startup sequence, in which the BIOS runs first. The BIOS (basic input/output system) checks that the computer’s hardware and peripherals are functioning, then loads the Master Boot Record (MBR), the first 512 bytes of the boot device, which holds the first stage of the boot loader. The MBR starts the boot loader, the program that loads the operating system; in the case of Poseidon Linux, the boot loader is GRUB. GRUB loads the kernel, and once the kernel has initialized, it starts Upstart, a replacement for the older System V init, as the first userspace process. This is because Poseidon Linux is based on Ubuntu, and Ubuntu has used Upstart since the release of Ubuntu 6.10.&lt;br /&gt;
&lt;br /&gt;
Because Poseidon Linux is based on Ubuntu, it does not read /etc/inittab to pick a run level; Upstart is event-driven, meaning that processes are started when a particular condition is met or an event is triggered.&lt;br /&gt;
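As an illustration of that event-driven style, an Upstart job file declares the events that start and stop it rather than a runlevel entry. The job below is hypothetical (made-up name and daemon path), but the stanzas follow Upstart’s job syntax.&lt;br /&gt;

```
# /etc/init/example.conf -- hypothetical Upstart job (illustrative only)
description "example daemon"

# Start once the filesystem is ready and loopback networking is up;
# stop when entering the halt, single-user, or reboot runlevels.
start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [016]

respawn
exec /usr/local/bin/example-daemon
```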
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1)	&amp;quot;Poseidon Linux.&amp;quot; Wikipedia, the Free Encyclopedia. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
2)	&amp;quot;DistroWatch.com: Poseidon Linux.&amp;quot; DistroWatch.com: Put the Fun Back into Computing. Use Linux, BSD. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
3)	Poseidon Linux Homepage. https://sites.google.com/site/poseidonlinux/. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
4)	&amp;quot;First Look at Poseidon Linux, the Linux For Scientists | Linux.com.&amp;quot; Linux.com | The Source for Linux Information. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
5)      &amp;quot;Deb (file Format).&amp;quot; Wikipedia, the Free Encyclopedia. Web. 16 Nov. 2011. &amp;lt;http://en.wikipedia.org/wiki/Deb_(file_format)&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
6)       EasyBib: Free Bibliography Maker - MLA, APA, Chicago Citation Styles. Web. 20 Oct. 2011.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15443</id>
		<title>COMP 3000 2011 Report: PoseidonLinux</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15443"/>
		<updated>2011-12-06T21:23:47Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Basic Operation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:PoseidonLogo2.png‎]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Part I=&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
The distribution is named Poseidon Linux, and was created by a team of Brazilian scientists, most of whom are oceanographers and marine biologists. The development team consists of five people, with contributions from quite a few others. &lt;br /&gt;
&lt;br /&gt;
The target audience for this distribution is the scientific community, and many of the programs that come pre-installed are intended for academic and scientific use. It includes many specialized programs that aren’t available in the Ubuntu/Debian repositories, and thereby provides a useful collection of software. The specialized programs pertain to subjects such as math and statistics, computer-aided design, multi-dimensional graphical visualization, chemistry, and bioinformatics. I obtained the operating system by downloading it from the Poseidon Linux homepage at: https://sites.google.com/site/poseidonlinux/download &lt;br /&gt;
As of now, the most recent version of the operating system is Poseidon Linux 4.0; it is only available as a 32-bit build at the moment, with a 64-bit version promised for the future. An older version, Poseidon Linux 3.2, is also available for download. The image file for Poseidon Linux is around 3.7 GB, and a full installation requires at least 9.8 GB of hard drive space. Poseidon Linux was originally derived from Kurumin Linux; after Kurumin was officially discontinued on January 29, 2009, Poseidon became based on Ubuntu, with the first Ubuntu-based release being Poseidon 3.0. Poseidon 3.0 was based on Ubuntu 8.04 LTS, whereas version 4.0 was based on Ubuntu 10.04.&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseidon_install_1.jpg|400px|right]][[File:Poseiden_install_2.jpg|300px|left]]&lt;br /&gt;
I installed Poseidon Linux using VirtualBox. When setting up the new virtual machine, I allocated 4 GB of RAM, set the file type of the new virtual disk to .VDI, and set the virtual disk file to use dynamically allocated space. After creating the virtual machine and specifying its settings, I ran the machine, selected the image file containing Poseidon, and began the installation (screenshots below). A minor issue I came across during installation was my hard drive not having enough space left for Poseidon to install. This problem was quickly fixed after I reluctantly deleted a few legally downloaded movies from my hard drive to free up enough space for the installation. &lt;br /&gt;
&lt;br /&gt;
During the installation, the user is asked to input information, and thereby allowed to customize several things. These include the current date and time, as well as the time zone and location in which the user resides, the user&#039;s keyboard layout, and the user&#039;s preferred username and password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseiden_4.jpg|400px|left|howboutNO?]]&amp;lt;div style=&amp;quot;text-align: left;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 1)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;[[File:Poseiden_3.jpg|400px||right|Pictured: not a space simulator]]&amp;lt;div style=&amp;quot;text-align: right;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 2)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
Due to the subject specialization of most of the programs and the knowledge required to use them to their full potential as intended, I’m unable to adequately review the programs and give a detailed analysis of how useful or well made they are &#039;&#039;(see fig. 1)&#039;&#039;. In retrospect, it was unwise to choose a distribution based on the coolest-sounding name rather than practical usage. Aside from the inherent complexity of many of the programs that come with Poseidon, many of the listed programs simply do not launch when selected. The most noticeable disappointment among the inactive/broken/not-yet-implemented programs was the OpenUniverse Space Simulator, which by merit of name alone was obviously intended to be the high point of Poseidon’s entire existence &#039;&#039;(see fig. 2)&#039;&#039;. Other non-functioning programs include Stellarium, presumably another failed space simulator, and the PyMOL Molecular Graphics System, a program listed under Bioinformatics. Aside from the programs targeting Poseidon’s main audience, there are programming IDEs like Eclipse and Qt Creator, audio/video/image editing programs like Audacity, Pitivi Video Editor, and GIMP, office applications that include an assortment of LibreOffice programs, and 3D graphics modeling programs like Blender. &lt;br /&gt;
&lt;br /&gt;
The first working program I tested was GPeriodic, a periodic table of the elements that lets the user select any element to view more detailed information about it. I chose GPeriodic as one of the programs to test to see how many features it had and to get a feel for how in-depth the programs provided by Poseidon are. GPeriodic proved to be full of information pertaining to each element, organized in an ordered and easily accessible manner. Such a program is of obvious value to anyone using Poseidon for scientific purposes, particularly those doing chemistry. The second program I tested was fityk, a data analysis program with a horribly chosen name. Since it claimed to do data analysis and had numbers everywhere, I figured it was legit. It also allows the user to select from a long list of function types, from quadratic to exponential decay. My reason for choosing to test fityk was that it was one of the few remaining programs that seemed reasonably understandable without too much prerequisite knowledge. After this, the rest of the programs under Applications =&amp;gt; Poseidon become even more obscure and complicated, though through no fault of their own, considering that Poseidon is advertised as having a rather specific purpose geared toward the scientific community.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
Given that several programs simply don’t work, Poseidon clearly falls somewhat short of its design goals. These errors will presumably be fixed in future versions; for the time being, there are many other working programs that are quite unique and likely very useful to users looking for tools designed for very specific uses in the areas of mathematics/statistics, chemistry, bioinformatics, and GIS/CAD (computer-aided design). The non-working programs indicate that Poseidon is still in an unfinished state, which leaves a slightly disappointing impression; aside from that, Poseidon has a nice visual presentation, with an appealing default desktop background and a clean interface. The list of programs Poseidon features gives anyone who knows how to use them a wide variety of tools at their disposal.&lt;br /&gt;
&lt;br /&gt;
To Poseidon’s target audience, who can look past superficial flaws like the absence of the promised space simulators and a few missing programs, Poseidon offers enough math- and science-related programs to be at least somewhat useful to anyone who requires such tools. Poseidon delivers on its promise as a distribution designed for academic and scientific use.&lt;br /&gt;
&lt;br /&gt;
=Part II=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
&lt;br /&gt;
[[File:Synaptic.jpg‎|400px|right]]&lt;br /&gt;
Poseidon’s packaging format is deb, the Debian software package format. A deb package is an ar archive containing two tar archives, each compressed with gzip, bzip2, or LZMA: one holds control information, while the other holds the actual data.&lt;br /&gt;
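This layout can be sketched by hand with standard tools (ar and tar). The package contents and file names below are made up for the demonstration; real packages are produced with dpkg-deb rather than like this.&lt;br /&gt;

```shell
# Build a minimal deb-style archive by hand to show its layout.
# (Illustrative only -- real packages are produced by dpkg-deb.)
mkdir -p demo/control demo/data
echo "Package: demo"  > demo/control/control   # control information
echo "hello, world"   > demo/data/readme.txt   # actual data
echo "2.0"            > debian-binary          # deb format version marker

tar -czf control.tar.gz -C demo/control control
tar -czf data.tar.gz    -C demo/data    readme.txt

# A .deb is an ar archive holding these members, in this order:
ar rc demo.deb debian-binary control.tar.gz data.tar.gz
ar t demo.deb
```

Listing the resulting archive with ar t shows the three members (debian-binary, control.tar.gz, data.tar.gz) that make up a deb package.&lt;br /&gt;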
&lt;br /&gt;
The Synaptic Package Manager is the graphical package manager that comes with Poseidon Linux. Synaptic offers a relatively easy-to-use interface with a clear, simple layout. The left panel is the package browser, which displays package categories, and the larger panel on the right displays the packages included in the highlighted category. When a package is selected, the bottom panel displays a text description of it. Synaptic also shows the version of each package currently installed on the system, as well as the newest available version, so that the user can keep track of how up to date their software is. &lt;br /&gt;
&lt;br /&gt;
You add packages in Synaptic by going to File, then Add Downloaded Packages. Of course, if the package you want to add is not already on your system, you must first download it. To remove packages, right-click a package in the larger right panel and select Mark for Removal or Mark for Complete Removal to check the box next to it, then click the Apply icon at the top to carry out the removal. The software catalog for Poseidon is very extensible thanks to Synaptic, which lets the user add additional software packages and remove unwanted ones. Synaptic’s “installed version” and “latest version” columns make it even easier for the user to keep software up to date.&lt;br /&gt;
&lt;br /&gt;
Those who dislike GUIs can add or remove packages from the command line using a tool called apt-get, run as root (or via sudo). To remove a package, simply type &#039;&#039;&#039;&#039;&#039;apt-get remove [packagename]&#039;&#039;&#039;&#039;&#039;, and to add a package, type &#039;&#039;&#039;&#039;&#039;apt-get install [packagename]&#039;&#039;&#039;&#039;&#039;. A damaged package can even be reinstalled using &#039;&#039;&#039;&#039;&#039;apt-get --reinstall install [packagename]&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux kernel&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.6.32&lt;br /&gt;
*Latest Version:		3.1.4&lt;br /&gt;
*Up-to-Date: 			No, the kernel is not up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			The kernel is the core component of the operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libc6&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	2.11.1-0ubuntu7.8		(released Feb. 1st, 2011)&lt;br /&gt;
*Latest Version:	2.11.1-0ubuntu7.8	&lt;br /&gt;
*Up-to-Date:		This package is up to date. &lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose: 		Provides the standard C library that nearly all programs depend on.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bash&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	4.1-2ubuntu3			(released April 19th, 2010)&lt;br /&gt;
*Latest Version:	4.1-2ubuntu3&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Shell that lets the user enter commands without using a GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Firefox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	5.0+build1+nobinonly-0ubuntu0.10.04.1~mfs1	&lt;br /&gt;
*Latest Version:	8.0.1&lt;br /&gt;
*Up-to-Date: 		This package is not up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Web browser that gives the user access to the World Wide Web.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Thunderbird&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1	(released April 24th, 2011)&lt;br /&gt;
*Latest Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Email client that also has RSS feeds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;gtk2-engines-pixbuf&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Latest Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Up-to-Date: 			This package is up to date.&lt;br /&gt;
*Modifications:			This package contains the pixbuf theme engine.&lt;br /&gt;
*Purpose:			Gtk+ is a multi-platform toolkit for constructing GUIs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;dpkg&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Latest Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Handles both the installation and removal of Debian software packages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;busybox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1:1.13.3-1ubuntu11&lt;br /&gt;
*Latest Version:	1:1.13.3-1ubuntu11	&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Combines small versions of many common UNIX utilities into a single executable file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libreoffice&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.13.2&lt;br /&gt;
*Latest Version:	3.4.4&lt;br /&gt;
*Up-to-Date: 		This package is out of date. &lt;br /&gt;
*Modifications:		The option exists to extend the functionality of LibreOffice by installing additional packages.&lt;br /&gt;
*Purpose:		A free alternative to Microsoft Office, useful and relevant considering Poseidon is intended for specific math/science requirements.&lt;br /&gt;
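The up-to-date judgments above amount to comparing version strings. A version-aware comparison can be sketched with GNU sort’s -V option; the kernel numbers from the table are reused here.&lt;br /&gt;

```shell
# Decide whether an installed version is current by sorting the two
# version strings with version-aware sort (GNU coreutils `sort -V`).
installed="2.6.32"   # kernel version from the table above
latest="3.1.4"

newest=$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | tail -n 1)
if [ "$newest" = "$installed" ]; then
    echo "up to date"
else
    echo "out of date"   # 2.6.32 sorts before 3.1.4, so this branch runs
fi
```

Note that full Debian version strings (with epochs such as 1:1.13.3-1ubuntu11) are properly compared with dpkg --compare-versions, which understands the complete format.&lt;br /&gt;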
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux follows the usual Linux startup sequence, in which the BIOS runs first. The BIOS (basic input/output system) checks that the computer’s hardware and peripherals are functioning, then loads the Master Boot Record (MBR), the first 512 bytes of the boot device, which holds the first stage of the boot loader. The MBR starts the boot loader, the program that loads the operating system; in the case of Poseidon Linux, the boot loader is GRUB. GRUB loads the kernel, and once the kernel has initialized, it starts Upstart, a replacement for the older System V init, as the first userspace process. This is because Poseidon Linux is based on Ubuntu, and Ubuntu has used Upstart since the release of Ubuntu 6.10.&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux is based on Ubuntu, so instead of reading /etc/inittab it obtains the run level another way: the default run level for Poseidon is 2, a value set in /etc/init/rc-sysinit.conf.&lt;br /&gt;
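For reference, the relevant line of that job file looks like the excerpt below (based on Ubuntu 10.04-era Upstart; the surrounding contents vary by release), and the runlevel command reports the current value on a running system.&lt;br /&gt;

```
# /etc/init/rc-sysinit.conf (excerpt -- exact contents vary by release)
env DEFAULT_RUNLEVEL=2
```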
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1)	&amp;quot;Poseidon Linux.&amp;quot; Wikipedia, the Free Encyclopedia. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
2)	&amp;quot;DistroWatch.com: Poseidon Linux.&amp;quot; DistroWatch.com: Put the Fun Back into Computing. Use Linux, BSD. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
3)	Poseidon Linux Homepage. https://sites.google.com/site/poseidonlinux/. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
4)	&amp;quot;First Look at Poseidon Linux, the Linux For Scientists | Linux.com.&amp;quot; Linux.com | The Source for Linux Information. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
5)      &amp;quot;Deb (file Format).&amp;quot; Wikipedia, the Free Encyclopedia. Web. 16 Nov. 2011. &amp;lt;http://en.wikipedia.org/wiki/Deb_(file_format)&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
6)       EasyBib: Free Bibliography Maker - MLA, APA, Chicago Citation Styles. Web. 20 Oct. 2011.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15442</id>
		<title>COMP 3000 2011 Report: PoseidonLinux</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15442"/>
		<updated>2011-12-06T21:16:48Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Basic Operation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:PoseidonLogo2.png‎]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Part I=&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
The distribution is named Poseidon Linux, and was created by a team of Brazilian scientists, most of whom are oceanographers and marine biologists. The development team consists of five people, with contributions from quite a few others. &lt;br /&gt;
&lt;br /&gt;
The target audience for this distribution is the scientific community, and many of the programs that come pre-installed are intended for academic and scientific use. It includes many specialized programs that aren’t available in the Ubuntu/Debian repositories, and thereby provides a useful collection of software. The specialized programs pertain to subjects such as math and statistics, computer-aided design, multi-dimensional graphical visualization, chemistry, and bioinformatics. I obtained the operating system by downloading it from the Poseidon Linux homepage at: https://sites.google.com/site/poseidonlinux/download &lt;br /&gt;
As of now, the most recent version of the operating system is Poseidon Linux 4.0; it is only available as a 32-bit build at the moment, with a 64-bit version promised for the future. An older version, Poseidon Linux 3.2, is also available for download. The image file for Poseidon Linux is around 3.7 GB, and a full installation requires at least 9.8 GB of hard drive space. Poseidon Linux was originally derived from Kurumin Linux; after Kurumin was officially discontinued on January 29, 2009, Poseidon became based on Ubuntu, with the first Ubuntu-based release being Poseidon 3.0. Poseidon 3.0 was based on Ubuntu 8.04 LTS, whereas version 4.0 was based on Ubuntu 10.04.&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseidon_install_1.jpg|400px|right]][[File:Poseiden_install_2.jpg|300px|left]]&lt;br /&gt;
I installed Poseidon Linux using VirtualBox. When setting up the new virtual machine, I allocated 4 GB of RAM, set the file type of the new virtual disk to .VDI, and set the virtual disk file to use dynamically allocated space. After creating the virtual machine and specifying its settings, I ran the machine, selected the image file containing Poseidon, and began the installation (screenshots below). A minor issue I came across during installation was my hard drive not having enough space left for Poseidon to install. This problem was quickly fixed after I reluctantly deleted a few legally downloaded movies from my hard drive to free up enough space for the installation. &lt;br /&gt;
&lt;br /&gt;
During the installation, the user is asked to input information, and thereby allowed to customize several things. These include the current date and time, as well as the time zone and location in which the user resides, the user&#039;s keyboard layout, and the user&#039;s preferred username and password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseiden_4.jpg|400px|left|howboutNO?]]&amp;lt;div style=&amp;quot;text-align: left;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 1)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;[[File:Poseiden_3.jpg|400px||right|Pictured: not a space simulator]]&amp;lt;div style=&amp;quot;text-align: right;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 2)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
Due to the subject specialization of most of the programs and the knowledge required to use them to their full potential as intended, I’m unable to adequately review the programs and give a detailed analysis of how useful or well made they are &#039;&#039;(see fig. 1)&#039;&#039;. In retrospect, it was unwise to choose a distribution based on the coolest-sounding name rather than practical usage. Aside from the inherent complexity of many of the programs that come with Poseidon, many of the listed programs simply do not launch when selected. The most noticeable disappointment among the inactive/broken/not-yet-implemented programs was the OpenUniverse Space Simulator, which by merit of name alone was obviously intended to be the high point of Poseidon’s entire existence &#039;&#039;(see fig. 2)&#039;&#039;. Other non-functioning programs include Stellarium, presumably another failed space simulator, and the PyMOL Molecular Graphics System, a program listed under Bioinformatics. Aside from the programs targeting Poseidon’s main audience, there are programming IDEs like Eclipse and Qt Creator, audio/video/image editing programs like Audacity, Pitivi Video Editor, and GIMP, office applications that include an assortment of LibreOffice programs, and 3D graphics modeling programs like Blender. &lt;br /&gt;
&lt;br /&gt;
The first working program I tested was GPeriodic, a periodic table of the elements that lets the user select any element to view more detailed information about it. I chose GPeriodic as one of the programs to test to see how many features it had and to get a feel for how in-depth the programs provided by Poseidon are. GPeriodic proved to be full of information pertaining to each element, organized in an ordered and easily accessible manner. Such a program is of obvious value to anyone using Poseidon for scientific purposes, particularly those doing chemistry. The second program I tested was fityk, a data analysis program with a poorly chosen name. Since it claimed to do data analysis and had numbers everywhere, I figured it was legit. It also allows the user to select from a long list of function types, from quadratic to exponential decay. After this, the rest of the programs under Applications =&amp;gt; Poseidon become even more obscure and complicated, though through no fault of their own, considering that Poseidon is advertised as having a rather specific purpose geared toward the scientific community.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
Given that several programs simply don’t work, Poseidon clearly falls somewhat short of its design goals. These errors will presumably be fixed in future versions; for the time being, there are many other working programs that are quite unique and likely very useful to users looking for tools designed for very specific uses in the areas of mathematics/statistics, chemistry, bioinformatics, and GIS/CAD (computer-aided design). The non-working programs indicate that Poseidon is still in an unfinished state, which leaves a slightly disappointing impression; aside from that, Poseidon has a nice visual presentation, with an appealing default desktop background and a clean interface. The list of programs Poseidon features gives anyone who knows how to use them a wide variety of tools at their disposal.&lt;br /&gt;
&lt;br /&gt;
For Poseidon’s target audience, who can look past superficial flaws like the absence of the promised space simulators and a few broken programs, Poseidon has enough math- and science-related software to be useful to anyone who requires such programs. Poseidon delivers on its promise as a distribution designed for academic and scientific use.&lt;br /&gt;
&lt;br /&gt;
=Part II=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
&lt;br /&gt;
[[File:Synaptic.jpg‎|400px|right]]&lt;br /&gt;
Poseidon’s packaging format is deb, the Debian software package format. A Debian package is an ar archive containing two tar archives, compressed with gzip, bzip2, or LZMA: one holds control information, while the other holds the actual data.&lt;br /&gt;
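As a rough illustration of that layout, the sketch below assembles and lists a minimal deb-style ar archive by hand. The member names (debian-binary, control.tar.gz, data.tar.gz) are the real ones used by .deb files, but the control and data archives here are empty placeholders, so dpkg itself would reject this file; it only demonstrates the container format, and assumes the ar (binutils) and tar tools are present.

```shell
# Assemble a minimal deb-style ar archive to show its three members.
mkdir -p ctrl data
echo "2.0" > debian-binary            # deb format version marker
tar -czf control.tar.gz -C ctrl .     # control information archive (empty here)
tar -czf data.tar.gz -C data .        # file-system data archive (empty here)
ar rc demo.deb debian-binary control.tar.gz data.tar.gz
ar t demo.deb                         # list the archive members
```

A real package's control archive would hold the package metadata and maintainer scripts, and the data archive the installed files.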
&lt;br /&gt;
The Synaptic Package Manager is the package management front end that comes with Poseidon Linux. Synaptic offers a relatively easy-to-use graphical interface with a clear, simple layout. The left panel is the package browser, which displays package categories; the larger panel on the right displays the packages in the highlighted category. When a package is selected, the bottom panel displays a text description of it. Synaptic also shows the version of each package currently installed on the system, as well as the newest version available, so the user can keep track of whether their software is up to date. &lt;br /&gt;
&lt;br /&gt;
You add packages in Synaptic by going to File, then Add Downloaded Packages; of course, if the package you want is not already on your system, you must first download it. To remove packages, right-click a package in the right panel and select Mark for Removal (or Mark for Complete Removal) to check the box next to it, then click the Apply icon at the top to carry out the removal. Synaptic makes Poseidon’s software catalog very extensible, letting the user add any additional packages and remove any unwanted ones, and its “installed version” and “latest version” columns make it even easier to keep software updated.&lt;br /&gt;
&lt;br /&gt;
Those who dislike GUIs have the option of adding or removing packages from the command line using a tool called apt-get. To remove a package, simply type &#039;&#039;&#039;&#039;&#039;apt-get remove [packagename]&#039;&#039;&#039;&#039;&#039;, and to add a package, type &#039;&#039;&#039;&#039;&#039;apt-get install [packagename]&#039;&#039;&#039;&#039;&#039;. A damaged package can even be reinstalled using &#039;&#039;&#039;&#039;&#039;apt-get --reinstall install [packagename]&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
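These commands can be tried out safely with apt-get’s simulation mode. The sketch below uses the placeholder package name hello (not from the report) and the -s/--simulate flag, which prints the actions without performing them; the || true guards let it run even on systems whose package lists aren’t initialized.

```shell
# Simulated apt-get workflow; -s prints actions without performing them.
# "hello" is a placeholder package name. Failures (e.g. no package lists)
# are ignored so the demonstration itself never errors out.
if command -v apt-get >/dev/null 2>&1; then
    apt-get -s install hello || true
    apt-get -s remove hello || true
    apt-get -s --reinstall install hello || true
    status=simulated
else
    # Not a Debian-family system; nothing to demonstrate.
    status=missing
fi
echo "$status"
```

Dropping the -s performs the operations for real, which normally requires root privileges.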
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Major Package Version==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux kernel&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.6.32&lt;br /&gt;
*Latest Version:		3.1.4&lt;br /&gt;
*Up-to-Date: 			No, the kernel is not up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			The kernel is the main component of the operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libc6&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	2.11.1-0ubuntu7.8		(released Feb. 1st, 2011)&lt;br /&gt;
*Latest Version:	2.11.1-0ubuntu7.8	&lt;br /&gt;
*Up-to-Date:		This package is up to date. &lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose: 		The system’s C library, required by nearly all programs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bash&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	4.1-2ubuntu3			(released April 19th, 2010)&lt;br /&gt;
*Latest Version:	4.1-2ubuntu3&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Allows the user to input commands without using a GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Firefox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	5.0+build1+nobinonly-0ubuntu0.10.04.1~mfs1	&lt;br /&gt;
*Latest Version:	8.0.1&lt;br /&gt;
*Up-to-Date: 		This package is not up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		A web browser that gives the user access to the World Wide Web.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Thunderbird&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1	(released April 24th, 2011)&lt;br /&gt;
*Latest Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		An email client that also supports RSS feeds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;gtk2-engines-pixbuf&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Latest Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Up-to-Date: 			This package is up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			Provides the pixbuf theme engine for GTK+, a multi-platform toolkit for constructing GUIs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;dpkg&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Latest Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Handles both the installation and removal of Debian software packages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;busybox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1:1.13.3-1ubuntu11&lt;br /&gt;
*Latest Version:	1:1.13.3-1ubuntu11	&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Combines small versions of many common UNIX utilities into a single executable file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libreoffice&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.13.2&lt;br /&gt;
*Latest Version:	3.4.4&lt;br /&gt;
*Up-to-Date: 		This package is out of date. &lt;br /&gt;
*Modifications:		The option exists to extend the functionality of LibreOffice by installing additional packages.&lt;br /&gt;
*Purpose:		A free alternative to Microsoft Office, useful and relevant considering Poseidon is intended for specific math/science requirements.&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux follows the usual Linux startup sequence, in which the BIOS runs first. The BIOS (basic input/output system) checks that the computer’s hardware and peripherals are functioning, then loads the Master Boot Record (MBR), the first 512 bytes of the boot device, which stores the boot record of the operating system. The MBR then starts the boot loader, a program that loads the operating system; in the case of Poseidon Linux, the boot loader is GRUB. Once the operating system is loaded, the kernel starts Upstart, a replacement for the older System V init. This is because Poseidon Linux is based on Ubuntu, and Ubuntu has used Upstart since Ubuntu 6.10.&lt;br /&gt;
&lt;br /&gt;
Because Poseidon Linux is based on Ubuntu, it does not use /etc/inittab; it gets the run level another way. The default run level for Poseidon is 2, set in /etc/init/rc-sysinit.conf.&lt;br /&gt;
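For reference, on Ubuntu 10.04-era systems the stanza involved looks roughly like the excerpt below; this is paraphrased from that release’s Upstart job and the exact wording may differ.

```
# /etc/init/rc-sysinit.conf (excerpt, Upstart job syntax; paraphrased)
start on filesystem and net-device-up IFACE=lo

# Fallback run level, used unless a runlevel= kernel argument or a
# legacy /etc/inittab entry overrides it.
env DEFAULT_RUNLEVEL=2
```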
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1)	&amp;quot;Poseidon Linux.&amp;quot; Wikipedia, the Free Encyclopedia. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
2)	&amp;quot;DistroWatch.com: Poseidon Linux.&amp;quot; DistroWatch.com: Put the Fun Back into Computing. Use Linux, BSD. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
3)	Poseidon Linux homepage. &amp;lt;https://sites.google.com/site/poseidonlinux/&amp;gt;. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
4)	&amp;quot;First Look at Poseidon Linux, the Linux For Scientists | Linux.com.&amp;quot; Linux.com | The Source for Linux Information. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
5)      &amp;quot;Deb (file Format).&amp;quot; Wikipedia, the Free Encyclopedia. Web. 16 Nov. 2011. &amp;lt;http://en.wikipedia.org/wiki/Deb_(file_format)&amp;gt;.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15439</id>
		<title>COMP 3000 2011 Report: PoseidonLinux</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15439"/>
		<updated>2011-12-06T20:54:17Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Major Package Version */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:PoseidonLogo2.png‎]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Part I=&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
The distribution is named Poseidon Linux, and was created by a team of Brazilian scientists, most of whom are oceanographers and marine biologists. The development team consists of five people, with contributions from quite a few others. &lt;br /&gt;
&lt;br /&gt;
The target audience for this distribution is the scientific community, and many of the pre-installed programs are intended for academic and scientific use. It includes much specialized software that isn’t available in the Ubuntu/Debian repositories, thereby providing a useful collection of programs. The specialized programs cover subjects such as math and statistics, computer-aided design, multi-dimensional graphical visualization, chemistry, and bioinformatics. I obtained the operating system by downloading it from the Poseidon Linux homepage at https://sites.google.com/site/poseidonlinux/download &lt;br /&gt;
As of this writing, the most recent version of the operating system is Poseidon Linux 4.0; it is only available as a 32-bit build at the moment, though a 64-bit version is promised for the future. An older version, Poseidon Linux 3.2, is also available for download. The image file for Poseidon Linux is around 3.7 GB, and a full installation requires at least 9.8 GB of hard drive space. Poseidon Linux was originally derived from Kurumin Linux; after Kurumin was officially discontinued on January 29, 2009, Poseidon became based on Ubuntu, the first Ubuntu-based release being Poseidon 3.0. Poseidon 3.0 was based on Ubuntu 8.04 LTS, whereas version 4.0 is based on Ubuntu 10.04.&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseidon_install_1.jpg|400px|right]][[File:Poseiden_install_2.jpg|300px|left]]&lt;br /&gt;
I installed Poseidon Linux using VirtualBox. When setting up the new virtual machine, I allocated 4 GB of RAM, set the file type of the new virtual disk to VDI, and set the virtual disk file to use dynamically allocated space. After creating the virtual machine and specifying its settings, I ran the machine, selected the image file containing Poseidon, and began the installation (screenshots below). A minor issue I came across during installation was my hard drive not having enough space left for Poseidon to install. This problem was quickly fixed after I reluctantly deleted a few legally downloaded movies to free up enough space for the installation. &lt;br /&gt;
&lt;br /&gt;
During the installation, the user is asked to input information and is thereby allowed to customize several things: the current date and time, the time zone and location in which the user resides, the keyboard layout, and the preferred username and password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseiden_4.jpg|400px|left|howboutNO?]]&amp;lt;div style=&amp;quot;text-align: left;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 1)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;[[File:Poseiden_3.jpg|400px||right|Pictured: not a space simulator]]&amp;lt;div style=&amp;quot;text-align: right;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 2)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
Due to the subject specialization of most of the programs, and the knowledge required to use them to their intended potential, I’m unable to adequately review the programs and give a detailed analysis of how useful or well made they are &#039;&#039;(see fig. 1)&#039;&#039;. In retrospect, it was unwise to choose a distribution based on the coolest-sounding name rather than practical usage. Aside from the inherent complexity of many of the programs that come with Poseidon, many of the listed programs simply do not execute when selected. The most noticeable disappointment among the inactive/broken/not-yet-implemented programs was the OpenUniverse Space Simulator, which by merit of name alone was obviously intended to be the high point of Poseidon’s entire existence &#039;&#039;(see fig. 2)&#039;&#039;. Other non-functioning programs include Stellarium, presumably another failed space simulator, and the PyMOL Molecular Graphics System, a program listed under Bioinformatics. Aside from the programs targeting Poseidon’s main audience, there are programming IDEs like Eclipse and Qt Creator, audio/video/image editing programs like Audacity, Pitivi Video Editor, and GIMP, office applications including an assortment of LibreOffice programs, and 3D graphics modeling programs like Blender. &lt;br /&gt;
&lt;br /&gt;
The first working program I tested was GPeriodic, a periodic table of the elements that lets the user select any element to view more detailed information about it. Such a program is of obvious value to anyone using Poseidon for scientific purposes, particularly those doing chemistry. The second program I tested was fityk, a data analysis program with a poorly chosen name. Since it claimed to do data analysis and had numbers everywhere, I figured it was legit. It also allows the user to select from a long list of function types, from quadratic to exponential decay. Beyond these, the rest of the programs under Applications =&amp;gt; Poseidon become even more obscure and complicated, though through no fault of their own, considering that Poseidon is advertised as having a rather specific purpose, geared toward the scientific community.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
Given that several programs simply don’t work, Poseidon clearly falls somewhat short of its design goals. These errors will presumably be fixed in future versions; for the time being, there are still many working programs that are unique and likely very useful to users looking for tools designed for specific tasks in mathematics/statistics, chemistry, bioinformatics, and GIS/CAD (computer-aided design). The non-working programs indicate that Poseidon is still in an unfinished state, which leaves a slightly disappointing impression; aside from that, Poseidon has a nice visual presentation, with an appealing default desktop background and a clean interface. The list of programs Poseidon features gives anyone who knows how to use them a wide variety of tools at their disposal.&lt;br /&gt;
&lt;br /&gt;
For Poseidon’s target audience, who can look past superficial flaws like the absence of the promised space simulators and a few broken programs, Poseidon has enough math- and science-related software to be useful to anyone who requires such programs. Poseidon delivers on its promise as a distribution designed for academic and scientific use.&lt;br /&gt;
&lt;br /&gt;
=Part II=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
&lt;br /&gt;
[[File:Synaptic.jpg‎|400px|right]]&lt;br /&gt;
Poseidon’s packaging format is deb, the Debian software package format. A Debian package is an ar archive containing two tar archives, compressed with gzip, bzip2, or LZMA: one holds control information, while the other holds the actual data.&lt;br /&gt;
&lt;br /&gt;
The Synaptic Package Manager is the package management front end that comes with Poseidon Linux. Synaptic offers a relatively easy-to-use graphical interface with a clear, simple layout. The left panel is the package browser, which displays package categories; the larger panel on the right displays the packages in the highlighted category. When a package is selected, the bottom panel displays a text description of it. Synaptic also shows the version of each package currently installed on the system, as well as the newest version available, so the user can keep track of whether their software is up to date. &lt;br /&gt;
&lt;br /&gt;
You add packages in Synaptic by going to File, then Add Downloaded Packages; of course, if the package you want is not already on your system, you must first download it. To remove packages, right-click a package in the right panel and select Mark for Removal (or Mark for Complete Removal) to check the box next to it, then click the Apply icon at the top to carry out the removal. Synaptic makes Poseidon’s software catalog very extensible, letting the user add any additional packages and remove any unwanted ones, and its “installed version” and “latest version” columns make it even easier to keep software updated.&lt;br /&gt;
&lt;br /&gt;
Those who dislike GUIs have the option of adding or removing packages from the command line using a tool called apt-get. To remove a package, simply type &#039;&#039;&#039;&#039;&#039;apt-get remove [packagename]&#039;&#039;&#039;&#039;&#039;, and to add a package, type &#039;&#039;&#039;&#039;&#039;apt-get install [packagename]&#039;&#039;&#039;&#039;&#039;. A damaged package can even be reinstalled using &#039;&#039;&#039;&#039;&#039;apt-get --reinstall install [packagename]&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Major Package Version==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux kernel&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.6.32&lt;br /&gt;
*Latest Version:		3.1.4&lt;br /&gt;
*Up-to-Date: 			No, the kernel is not up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			The kernel is the main component of the operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libc6&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	2.11.1-0ubuntu7.8		(released Feb. 1st, 2011)&lt;br /&gt;
*Latest Version:	2.11.1-0ubuntu7.8	&lt;br /&gt;
*Up-to-Date:		This package is up to date. &lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose: 		The system’s C library, required by nearly all programs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bash&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	4.1-2ubuntu3			(released April 19th, 2010)&lt;br /&gt;
*Latest Version:	4.1-2ubuntu3&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Allows the user to input commands without using a GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Firefox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	5.0+build1+nobinonly-0ubuntu0.10.04.1~mfs1	&lt;br /&gt;
*Latest Version:	8.0.1&lt;br /&gt;
*Up-to-Date: 		This package is not up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		A web browser that gives the user access to the World Wide Web.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Thunderbird&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1	(released April 24th, 2011)&lt;br /&gt;
*Latest Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		An email client that also supports RSS feeds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;gtk2-engines-pixbuf&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Latest Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Up-to-Date: 			This package is up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			Provides the pixbuf theme engine for GTK+, a multi-platform toolkit for constructing GUIs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;dpkg&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Latest Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Handles both the installation and removal of Debian software packages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;busybox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1:1.13.3-1ubuntu11&lt;br /&gt;
*Latest Version:	1:1.13.3-1ubuntu11	&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Combines small versions of many common UNIX utilities into a single executable file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libreoffice&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.13.2&lt;br /&gt;
*Latest Version:	3.4.4&lt;br /&gt;
*Up-to-Date: 		This package is out of date. &lt;br /&gt;
*Modifications:		The option exists to extend the functionality of LibreOffice by installing additional packages.&lt;br /&gt;
*Purpose:		A free alternative to Microsoft Office, useful and relevant considering Poseidon is intended for specific math/science requirements.&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux follows the usual Linux startup sequence, in which the BIOS runs first. The BIOS (basic input/output system) checks that the computer’s hardware and peripherals are functioning, then loads the Master Boot Record (MBR), the first 512 bytes of the boot device, which stores the boot record of the operating system. The MBR then starts the boot loader, a program that loads the operating system; in the case of Poseidon Linux, the boot loader is GRUB. Once the operating system is loaded, the kernel starts Upstart, a replacement for the older System V init. This is because Poseidon Linux is based on Ubuntu, and Ubuntu has used Upstart since Ubuntu 6.10.&lt;br /&gt;
&lt;br /&gt;
Because Poseidon Linux is based on Ubuntu, it does not use /etc/inittab; it gets the run level another way. The default run level for Poseidon is 2, set in /etc/init/rc-sysinit.conf.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1)	&amp;quot;Poseidon Linux.&amp;quot; Wikipedia, the Free Encyclopedia. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
2)	&amp;quot;DistroWatch.com: Poseidon Linux.&amp;quot; DistroWatch.com: Put the Fun Back into Computing. Use Linux, BSD. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
3)	Poseidon Linux homepage. &amp;lt;https://sites.google.com/site/poseidonlinux/&amp;gt;. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
4)	&amp;quot;First Look at Poseidon Linux, the Linux For Scientists | Linux.com.&amp;quot; Linux.com | The Source for Linux Information. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
5)      &amp;quot;Deb (file Format).&amp;quot; Wikipedia, the Free Encyclopedia. Web. 16 Nov. 2011. &amp;lt;http://en.wikipedia.org/wiki/Deb_(file_format)&amp;gt;.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15437</id>
		<title>COMP 3000 2011 Report: PoseidonLinux</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15437"/>
		<updated>2011-12-06T20:16:04Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Initialization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:PoseidonLogo2.png‎]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Part I=&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
The distribution is named Poseidon Linux, and was created by a team of Brazilian scientists, most of whom are oceanographers and marine biologists. The development team consists of five people, with contributions from quite a few others. &lt;br /&gt;
&lt;br /&gt;
The target audience for this distribution is the scientific community, and many of the pre-installed programs are intended for academic and scientific use. It includes much specialized software that isn’t available in the Ubuntu/Debian repositories, thereby providing a useful collection of programs. The specialized programs cover subjects such as math and statistics, computer-aided design, multi-dimensional graphical visualization, chemistry, and bioinformatics. I obtained the operating system by downloading it from the Poseidon Linux homepage at https://sites.google.com/site/poseidonlinux/download &lt;br /&gt;
As of this writing, the most recent version of the operating system is Poseidon Linux 4.0; it is only available as a 32-bit build at the moment, though a 64-bit version is promised for the future. An older version, Poseidon Linux 3.2, is also available for download. The image file for Poseidon Linux is around 3.7 GB, and a full installation requires at least 9.8 GB of hard drive space. Poseidon Linux was originally derived from Kurumin Linux; after Kurumin was officially discontinued on January 29, 2009, Poseidon became based on Ubuntu, the first Ubuntu-based release being Poseidon 3.0. Poseidon 3.0 was based on Ubuntu 8.04 LTS, whereas version 4.0 is based on Ubuntu 10.04.&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseidon_install_1.jpg|400px|right]][[File:Poseiden_install_2.jpg|300px|left]]&lt;br /&gt;
I installed Poseidon Linux using VirtualBox. When setting up the new virtual machine, I allocated 4 GB of RAM, set the file type of the new virtual disk to VDI, and set the virtual disk file to use dynamically allocated space. After creating the virtual machine and specifying its settings, I ran the machine, selected the image file containing Poseidon, and began the installation (screenshots below). A minor issue I came across during installation was my hard drive not having enough space left for Poseidon to install. This problem was quickly fixed after I reluctantly deleted a few legally downloaded movies to free up enough space for the installation. &lt;br /&gt;
&lt;br /&gt;
During the installation, the user is asked to input information and is thereby allowed to customize several things: the current date and time, the time zone and location in which the user resides, the keyboard layout, and the preferred username and password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseiden_4.jpg|400px|left|howboutNO?]]&amp;lt;div style=&amp;quot;text-align: left;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 1)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;[[File:Poseiden_3.jpg|400px||right|Pictured: not a space simulator]]&amp;lt;div style=&amp;quot;text-align: right;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 2)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
Due to the subject specialization of most of the programs, and the knowledge required to use them to their intended potential, I’m unable to adequately review the programs and give a detailed analysis of how useful or well made they are &#039;&#039;(see fig. 1)&#039;&#039;. In retrospect, it was unwise to choose a distribution based on the coolest-sounding name rather than practical usage. Aside from the inherent complexity of many of the programs that come with Poseidon, many of the listed programs simply do not execute when selected. The most noticeable disappointment among the inactive/broken/not-yet-implemented programs was the OpenUniverse Space Simulator, which by merit of name alone was obviously intended to be the high point of Poseidon’s entire existence &#039;&#039;(see fig. 2)&#039;&#039;. Other non-functioning programs include Stellarium, presumably another failed space simulator, and the PyMOL Molecular Graphics System, a program listed under Bioinformatics. Aside from the programs targeting Poseidon’s main audience, there are programming IDEs like Eclipse and Qt Creator, audio/video/image editing programs like Audacity, Pitivi Video Editor, and GIMP, office applications including an assortment of LibreOffice programs, and 3D graphics modeling programs like Blender. &lt;br /&gt;
&lt;br /&gt;
The first working program I tested was GPeriodic, a periodic table of the elements that lets the user select any element to view more detailed information about it. Such a program is of obvious value to anyone using Poseidon for scientific purposes, particularly those doing chemistry. The second program I tested was fityk, a data analysis program with a poorly chosen name. Since it claimed to do data analysis and had numbers everywhere, I figured it was legit. It also allows the user to select from a long list of function types, from quadratic to exponential decay. Beyond these, the rest of the programs under Applications =&amp;gt; Poseidon become even more obscure and complicated, though through no fault of their own, considering that Poseidon is advertised as having a rather specific purpose, geared toward the scientific community.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
Given that several programs simply don’t work, Poseidon clearly falls somewhat short of its design goals. These errors will presumably be fixed in future versions; for the time being, there are still many working programs that are unique and likely very useful to users looking for tools designed for specific tasks in mathematics/statistics, chemistry, bioinformatics, and GIS/CAD (computer-aided design). The non-working programs indicate that Poseidon is still in an unfinished state, which leaves a slightly disappointing impression; aside from that, Poseidon has a nice visual presentation, with an appealing default desktop background and a clean interface. The list of programs Poseidon features gives anyone who knows how to use them a wide variety of tools at their disposal.&lt;br /&gt;
&lt;br /&gt;
For Poseidon’s target audience, who can look past superficial flaws like the absence of the promised space simulators and a few broken programs, Poseidon has enough math- and science-related software to be useful to anyone who requires such programs. Poseidon delivers on its promise as a distribution designed for academic and scientific use.&lt;br /&gt;
&lt;br /&gt;
=Part II=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
&lt;br /&gt;
[[File:Synaptic.jpg‎|400px|right]]&lt;br /&gt;
Poseidon’s packaging format is deb, the Debian software package format. A Debian package is an archive containing two tar archives (compressed with gzip, bzip2, or LZMA): one holds control information, while the other holds the actual data.&lt;br /&gt;
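As a sketch of the layout just described, one can build and inspect a toy .deb by hand, assuming binutils ar and GNU tar are available; the package name "demo" is made up for illustration:

```shell
# A .deb is an ar archive with three members: a format-version file
# ("debian-binary"), a control tarball, and a data tarball.
mkdir -p pkg/usr/share/doc/demo
printf 'Package: demo\nVersion: 1.0\n' > pkg/control
echo 'hello' > pkg/usr/share/doc/demo/README
tar czf control.tar.gz -C pkg control        # control information
tar czf data.tar.gz -C pkg usr               # actual data
echo '2.0' > debian-binary                   # deb format version
ar rc demo.deb debian-binary control.tar.gz data.tar.gz
ar t demo.deb                                # lists the three members
```

Running `ar t` on any real Ubuntu .deb shows the same three-member structure.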
&lt;br /&gt;
The Synaptic Package Manager is the package management front end that comes with Poseidon Linux. Synaptic offers a relatively easy-to-use graphical interface with a clear, simple layout. The left panel is the package browser, which displays package categories, and the larger panel on the right lists the packages in the highlighted category. When a package is selected, the bottom panel displays a text description of it. Synaptic also shows the version of each package currently installed on the system, as well as the newest version available, so the user can keep track of whether the software is up to date. &lt;br /&gt;
&lt;br /&gt;
You add packages in Synaptic by going to File and then Add Downloaded Packages; if the package you want is not already on your system, you must of course download it first. To remove a package, right-click it in the larger right panel, select Mark for Removal or Mark for Complete Removal to check the box next to it, and then click the Apply icon at the top to carry out the removal. Synaptic makes Poseidon’s software catalog very extensible, since it lets the user add additional software packages and remove unwanted ones, and its “installed version” and “latest version” information makes it even easier to keep software updated.&lt;br /&gt;
&lt;br /&gt;
Those who dislike GUIs can add or remove packages from the command line using a tool called apt-get. To remove a package, simply type &#039;&#039;&#039;&#039;&#039;apt-get remove [packagename]&#039;&#039;&#039;&#039;&#039;, and to add a package, type &#039;&#039;&#039;&#039;&#039;apt-get install [packagename]&#039;&#039;&#039;&#039;&#039;. A damaged package can even be reinstalled using &#039;&#039;&#039;&#039;&#039;apt-get --reinstall install [packagename]&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Major Package Version==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux kernel&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.6.32&lt;br /&gt;
*Latest Version:		3.1.1&lt;br /&gt;
*Up-to-Date: 			No, the kernel is not up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			Kernel is the main component of the operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libc6&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	2.11.1-0ubuntu7.8		(released Feb. 1st, 2011)&lt;br /&gt;
*Latest Version:	2.11.1-0ubuntu7.8	&lt;br /&gt;
*Up-to-Date:		This package is up to date. &lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose: 		Provides the C library that nearly all programs depend on.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bash&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	4.1-2ubuntu3			(released April 19th, 2010)&lt;br /&gt;
*Latest Version:	        4.1-2ubuntu3&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Allows user to input commands without using GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Firefox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	5.0+build1+nobinonly-0ubuntu0.10.04.1~mfs1	&lt;br /&gt;
*Latest Version:	8.0.1&lt;br /&gt;
*Up-to-Date: 		This package is not up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Web browser gives the user access to the world wide web.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Thunderbird&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1	(released April 24th, 2011)&lt;br /&gt;
*Latest Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Email client that also has RSS feeds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;gtk2-engines-pixbuf&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Latest Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Up-to-Date: 			This package is up to date.&lt;br /&gt;
*Modifications:			This package contains the pixbuf theme engine.&lt;br /&gt;
*Purpose:			Gtk+ is a multi-platform toolkit for constructing GUIs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;dpkg&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Latest Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Handles both the installation and removal of Debian software packages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;busybox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1:1.13.3-1ubuntu11&lt;br /&gt;
*Latest Version:	1:1.13.3-1ubuntu11	&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Combines small versions of many common UNIX utilities into a single executable file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libreoffice&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.13.2&lt;br /&gt;
*Latest Version:	3.4.4&lt;br /&gt;
*Up-to-Date: 		This package is out of date. &lt;br /&gt;
*Modifications:		The option exists to extend the functionality of LibreOffice by installing additional packages.&lt;br /&gt;
*Purpose:		A free alternative to Microsoft Office, useful and relevant considering Poseidon is intended for specific math/science requirements.&lt;br /&gt;
&lt;br /&gt;
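The "Up-to-Date" entries in the list above boil down to a version comparison. As a rough sketch (real dpkg uses its own comparison rules; GNU sort -V is a reasonable approximation), using the kernel version strings from the list:

```shell
# Compare an installed version against the latest available version.
# Version strings are the kernel entries from the list above.
installed="2.6.32"
latest="3.1.1"
# sort -V orders version strings numerically component by component.
newest=$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | tail -n 1)
if [ "$newest" = "$installed" ]; then
  echo "up to date"
else
  echo "out of date"   # prints this: 2.6.32 sorts before 3.1.1
fi
```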
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux follows the usual Linux startup sequence, in which the BIOS runs first. The BIOS (basic input/output system) checks that the computer’s hardware and peripherals are functioning, then loads the Master Boot Record (MBR), the first 512 bytes of the boot storage device, which holds the operating system’s boot record. The MBR in turn starts the boot loader, the program that loads the operating system; in Poseidon’s case, this is GRUB. Once GRUB has loaded the kernel, the kernel starts Upstart, a replacement for the older System V init. Poseidon uses Upstart because it is based on Ubuntu, and Ubuntu has used Upstart since the release of Ubuntu 6.10.&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux is based on Ubuntu, so instead of reading /etc/inittab, it determines the run level another way: the default run level for Poseidon is 2, which is set in /etc/init/rc-sysinit.conf.&lt;br /&gt;
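A quick way to confirm the default run level is to grep that setting. Sketched here against a simulated copy of the file, since this is not written on an Ubuntu 10.04 system; on a real install you would grep /etc/init/rc-sysinit.conf directly:

```shell
# Simulate the relevant line of /etc/init/rc-sysinit.conf as it
# appears on Upstart-era Ubuntu releases.
cat > rc-sysinit.conf <<'EOF'
env DEFAULT_RUNLEVEL=2
EOF
# Extract the configured default runlevel.
grep -o 'DEFAULT_RUNLEVEL=[0-9]*' rc-sysinit.conf   # -> DEFAULT_RUNLEVEL=2
```

At runtime, the `runlevel` command reports the current and previous run levels.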
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1)	&amp;quot;Poseidon Linux.&amp;quot; Wikipedia, the Free Encyclopedia. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
2)	&amp;quot;DistroWatch.com: Poseidon Linux.&amp;quot; DistroWatch.com: Put the Fun Back into Computing. Use Linux, BSD. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
3)	Poseidon Linux homepage. https://sites.google.com/site/poseidonlinux/. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
4)	&amp;quot;First Look at Poseidon Linux, the Linux For Scientists | Linux.com.&amp;quot; Linux.com | The Source for Linux Information. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
5)      &amp;quot;Deb (file Format).&amp;quot; Wikipedia, the Free Encyclopedia. Web. 16 Nov. 2011. &amp;lt;http://en.wikipedia.org/wiki/Deb_(file_format)&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
6)       EasyBib: Free Bibliography Maker - MLA, APA, Chicago Citation Styles. Web. 20 Oct. 2011.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15433</id>
		<title>COMP 3000 2011 Report: PoseidonLinux</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15433"/>
		<updated>2011-12-06T19:48:16Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Basic Operation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:PoseidonLogo2.png‎]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Part I=&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
The distribution is named Poseidon Linux, and was created by a team of Brazilian scientists, most of whom are oceanographers and marine biologists. The development team consists of five people, with contributions from quite a few others. &lt;br /&gt;
&lt;br /&gt;
The target audience for this distribution is the scientific community, and many of the programs that come pre-installed are intended for academic and scientific use. It includes many specialized programs that aren’t available in the Ubuntu/Debian repositories, and thereby provides a useful collection. The specialized programs pertain to subjects such as math and statistics, computer-aided design, multi-dimensional graphical visualization, chemistry, and bioinformatics. I obtained the operating system by downloading it from the Poseidon Linux homepage at https://sites.google.com/site/poseidonlinux/download &lt;br /&gt;
As of now, the most recent version of the operating system is Poseidon Linux 4.0; it is only available as a 32-bit build at the moment, though a 64-bit version is promised for the future. An older version, Poseidon Linux 3.2, is also available for download. The image file for Poseidon Linux is around 3.7 GB, and a full installation requires at least 9.8 GB of hard drive space. Poseidon Linux was originally derived from Kurumin Linux; after Kurumin was officially discontinued on January 29, 2009, Poseidon became based on Ubuntu, with the first Ubuntu-based release being Poseidon 3.0. Poseidon 3.0 was based on Ubuntu 8.04 LTS, whereas version 4.0 is based on Ubuntu 10.04.&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseidon_install_1.jpg|400px|right]][[File:Poseiden_install_2.jpg|300px|left]]&lt;br /&gt;
I installed Poseidon Linux using VirtualBox. When setting up the new virtual machine, I allocated 4 GB of RAM, set the file type of the new virtual disk to VDI, and set the virtual disk file to use dynamically allocated space. After creating the virtual machine and specifying its settings, I ran the machine, selected the image file containing Poseidon, and began the installation (screenshots below). A minor issue I came across during installation was that my hard drive did not have enough space left for Poseidon to install. This was quickly fixed after I reluctantly deleted a few legally downloaded movies to free up enough space for the installation. &lt;br /&gt;
&lt;br /&gt;
During the installation, the user is asked to input information and is thereby allowed to customize several things: the current date and time, the time zone and location in which the user resides, the user&#039;s keyboard layout, and the user&#039;s preferred username and password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseiden_4.jpg|400px|left|howboutNO?]]&amp;lt;div style=&amp;quot;text-align: left;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 1)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;[[File:Poseiden_3.jpg|400px||right|Pictured: not a space simulator]]&amp;lt;div style=&amp;quot;text-align: right;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 2)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
Due to the subject specialization of most of the programs and the knowledge required to use them to their full potential, I’m unable to adequately review the programs or give a detailed analysis of how useful or well made they are &#039;&#039;(see fig. 1)&#039;&#039;. In retrospect, it was unwise to choose a distribution based on the coolest-sounding name rather than on practical usage. Aside from the inherent complexity of many of the programs that come with Poseidon, many of the listed programs simply do not launch when selected. The most noticeable disappointment among the broken or not-yet-implemented programs was the OpenUniverse Space Simulator, which by merit of name alone was obviously intended to be the high point of Poseidon’s entire existence &#039;&#039;(see fig. 2)&#039;&#039;. Other non-functioning programs include Stellarium, presumably another failed space simulator, and the PyMOL Molecular Graphics System, listed under Bioinformatics. Aside from the programs targeting Poseidon’s main audience, there are programming IDEs like Eclipse and Qt Creator; audio, video, and image editing programs like Audacity, Pitivi Video Editor, and GIMP; office applications, including an assortment of LibreOffice programs; and 3D graphics modeling programs like Blender. &lt;br /&gt;
&lt;br /&gt;
The first working program I tested was GPeriodic, a periodic table of the elements that lets the user select any element to view more detailed information about it. Such a program is of obvious value to anyone using Poseidon for scientific purposes, particularly chemists. The second program I tested was fityk, a data analysis program with a poorly chosen name. Since it claimed to do data analysis and had numbers everywhere, I figured it was legit. It also allows the user to select from a long list of function types, from quadratic to exponential decay. Beyond these, the rest of the programs under Applications =&amp;gt; Poseidon become even more obscure and specialized, though through no fault of their own, considering that Poseidon is advertised for a rather specific purpose geared toward the scientific community.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
Given that several programs simply don’t work, Poseidon clearly falls somewhat short of its design goals. These errors will presumably be fixed in future releases; in the meantime, many of the working programs are quite unique and should be very useful to users looking for tools designed for specific tasks in mathematics/statistics, chemistry, bioinformatics, and GIS/CAD (computer-aided design). The non-working programs indicate that Poseidon is still in an unfinished state, which leaves a slightly disappointing impression. Aside from that, Poseidon has a nice visual presentation, with an appealing default desktop background and a clean interface, and its selection of programs gives anyone who knows how to use them a wide variety of tools at their disposal.&lt;br /&gt;
&lt;br /&gt;
For members of Poseidon’s target audience who can look past superficial flaws, such as the absence of the promised space simulators and a few broken programs, Poseidon has enough math- and science-related software to be useful to anyone who requires such programs. Poseidon delivers on its promise as a distribution designed for academic and scientific use.&lt;br /&gt;
&lt;br /&gt;
=Part II=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
&lt;br /&gt;
[[File:Synaptic.jpg‎|400px|right]]&lt;br /&gt;
Poseidon’s packaging format is deb, the Debian software package format. A Debian package is an archive containing two tar archives (compressed with gzip, bzip2, or LZMA): one holds control information, while the other holds the actual data.&lt;br /&gt;
&lt;br /&gt;
The Synaptic Package Manager is the package management front end that comes with Poseidon Linux. Synaptic offers a relatively easy-to-use graphical interface with a clear, simple layout. The left panel is the package browser, which displays package categories, and the larger panel on the right lists the packages in the highlighted category. When a package is selected, the bottom panel displays a text description of it. Synaptic also shows the version of each package currently installed on the system, as well as the newest version available, so the user can keep track of whether the software is up to date. &lt;br /&gt;
&lt;br /&gt;
You add packages in Synaptic by going to File and then Add Downloaded Packages; if the package you want is not already on your system, you must of course download it first. To remove a package, right-click it in the larger right panel, select Mark for Removal or Mark for Complete Removal to check the box next to it, and then click the Apply icon at the top to carry out the removal. Synaptic makes Poseidon’s software catalog very extensible, since it lets the user add additional software packages and remove unwanted ones, and its “installed version” and “latest version” information makes it even easier to keep software updated.&lt;br /&gt;
&lt;br /&gt;
Those who dislike GUIs can add or remove packages from the command line using a tool called apt-get. To remove a package, simply type &#039;&#039;&#039;&#039;&#039;apt-get remove [packagename]&#039;&#039;&#039;&#039;&#039;, and to add a package, type &#039;&#039;&#039;&#039;&#039;apt-get install [packagename]&#039;&#039;&#039;&#039;&#039;. A damaged package can even be reinstalled using &#039;&#039;&#039;&#039;&#039;apt-get --reinstall install [packagename]&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Major Package Version==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux kernel&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.6.32&lt;br /&gt;
*Latest Version:		3.1.1&lt;br /&gt;
*Up-to-Date: 			No, the kernel is not up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			Kernel is the main component of the operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libc6&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	2.11.1-0ubuntu7.8		(released Feb. 1st, 2011)&lt;br /&gt;
*Latest Version:	2.11.1-0ubuntu7.8	&lt;br /&gt;
*Up-to-Date:		This package is up to date. &lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose: 		Provides the C library that nearly all programs depend on.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bash&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	4.1-2ubuntu3			(released April 19th, 2010)&lt;br /&gt;
*Latest Version:	        4.1-2ubuntu3&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Allows user to input commands without using GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Firefox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	5.0+build1+nobinonly-0ubuntu0.10.04.1~mfs1	&lt;br /&gt;
*Latest Version:	8.0.1&lt;br /&gt;
*Up-to-Date: 		This package is not up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Web browser gives the user access to the world wide web.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Thunderbird&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1	(released April 24th, 2011)&lt;br /&gt;
*Latest Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Email client that also has RSS feeds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;gtk2-engines-pixbuf&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Latest Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Up-to-Date: 			This package is up to date.&lt;br /&gt;
*Modifications:			This package contains the pixbuf theme engine.&lt;br /&gt;
*Purpose:			Gtk+ is a multi-platform toolkit for constructing GUIs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;dpkg&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Latest Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Handles both the installation and removal of Debian software packages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;busybox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1:1.13.3-1ubuntu11&lt;br /&gt;
*Latest Version:	1:1.13.3-1ubuntu11	&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Combines small versions of many common UNIX utilities into a single executable file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libreoffice&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.13.2&lt;br /&gt;
*Latest Version:	3.4.4&lt;br /&gt;
*Up-to-Date: 		This package is out of date. &lt;br /&gt;
*Modifications:		The option exists to extend the functionality of LibreOffice by installing additional packages.&lt;br /&gt;
*Purpose:		A free alternative to Microsoft Office, useful and relevant considering Poseidon is intended for specific math/science requirements.&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux follows the usual Linux startup sequence, in which the BIOS runs first. The BIOS (basic input/output system) checks that the computer’s hardware and peripherals are functioning, then loads the Master Boot Record (MBR), the first 512 bytes of the boot storage device, which holds the operating system’s boot record. The MBR in turn starts the boot loader, the program that loads the operating system; in Poseidon’s case, this is GRUB. Once GRUB has loaded the kernel, the kernel executes init, the last step of the boot procedure.&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux is based on Ubuntu, so instead of reading /etc/inittab, it determines the run level another way: the default run level for Poseidon is 2, which is set in /etc/init/rc-sysinit.conf.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1)	&amp;quot;Poseidon Linux.&amp;quot; Wikipedia, the Free Encyclopedia. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
2)	&amp;quot;DistroWatch.com: Poseidon Linux.&amp;quot; DistroWatch.com: Put the Fun Back into Computing. Use Linux, BSD. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
3)	Poseidon Linux homepage. https://sites.google.com/site/poseidonlinux/. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
4)	&amp;quot;First Look at Poseidon Linux, the Linux For Scientists | Linux.com.&amp;quot; Linux.com | The Source for Linux Information. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
5)      &amp;quot;Deb (file Format).&amp;quot; Wikipedia, the Free Encyclopedia. Web. 16 Nov. 2011. &amp;lt;http://en.wikipedia.org/wiki/Deb_(file_format)&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
6)       EasyBib: Free Bibliography Maker - MLA, APA, Chicago Citation Styles. Web. 20 Oct. 2011.&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15430</id>
		<title>COMP 3000 2011 Report: PoseidonLinux</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15430"/>
		<updated>2011-12-06T13:02:29Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Initialization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:PoseidonLogo2.png‎]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Part I=&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
The distribution is named Poseidon Linux, and was created by a team of Brazilian scientists, most of whom are oceanographers and marine biologists. The development team consists of five people, with contributions from quite a few others. &lt;br /&gt;
&lt;br /&gt;
The target audience for this distribution is the scientific community, and many of the programs that come pre-installed are intended for academic and scientific use. It includes many specialized programs that aren’t available in the Ubuntu/Debian repositories, and thereby provides a useful collection. The specialized programs pertain to subjects such as math and statistics, computer-aided design, multi-dimensional graphical visualization, chemistry, and bioinformatics. I obtained the operating system by downloading it from the Poseidon Linux homepage at https://sites.google.com/site/poseidonlinux/download &lt;br /&gt;
As of now, the most recent version of the operating system is Poseidon Linux 4.0; it is only available as a 32-bit build at the moment, though a 64-bit version is promised for the future. An older version, Poseidon Linux 3.2, is also available for download. The image file for Poseidon Linux is around 3.7 GB, and a full installation requires at least 9.8 GB of hard drive space. Poseidon Linux was originally derived from Kurumin Linux; after Kurumin was officially discontinued on January 29, 2009, Poseidon became based on Ubuntu, with the first Ubuntu-based release being Poseidon 3.0. Poseidon 3.0 was based on Ubuntu 8.04 LTS, whereas version 4.0 is based on Ubuntu 10.04.&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseidon_install_1.jpg|400px|right]][[File:Poseiden_install_2.jpg|300px|left]]&lt;br /&gt;
I installed Poseidon Linux using VirtualBox. When setting up the new virtual machine, I allocated 4 GB of RAM, set the file type of the new virtual disk to VDI, and set the virtual disk file to use dynamically allocated space. After creating the virtual machine and specifying its settings, I ran the machine, selected the image file containing Poseidon, and began the installation (screenshots below). A minor issue I came across during installation was that my hard drive did not have enough space left for Poseidon to install. This was quickly fixed after I reluctantly deleted a few legally downloaded movies to free up enough space for the installation. &lt;br /&gt;
&lt;br /&gt;
During the installation, the user is asked to input information and is thereby allowed to customize several things: the current date and time, the time zone and location in which the user resides, the user&#039;s keyboard layout, and the user&#039;s preferred username and password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseiden_4.jpg|400px|left|howboutNO?]][[File:Poseiden_3.jpg|400px||right|Pictured: not a space simulator]]&lt;br /&gt;
Due to the subject specialization of most of the programs and the knowledge required to use them to their full potential, I’m unable to adequately review the programs or give a detailed analysis of how useful or well made they are &#039;&#039;(see fig. 1)&#039;&#039;. In retrospect, it was unwise to choose a distribution based on the coolest-sounding name rather than on practical usage. Aside from the inherent complexity of many of the programs that come with Poseidon, many of the listed programs simply do not launch when selected. The most noticeable disappointment among the broken or not-yet-implemented programs was the OpenUniverse Space Simulator, which by merit of name alone was obviously intended to be the high point of Poseidon’s entire existence &#039;&#039;(see fig. 2)&#039;&#039;. Other non-functioning programs include Stellarium, presumably another failed space simulator, and the PyMOL Molecular Graphics System, listed under Bioinformatics. Aside from the programs targeting Poseidon’s main audience, there are programming IDEs like Eclipse and Qt Creator; audio, video, and image editing programs like Audacity, Pitivi Video Editor, and GIMP; office applications, including an assortment of LibreOffice programs; and 3D graphics modeling programs like Blender. &lt;br /&gt;
&lt;br /&gt;
The first working program I tested was GPeriodic, a periodic table of the elements that lets the user select any element to view more detailed information about it. Such a program is of obvious value to anyone using Poseidon for scientific purposes, particularly chemists. The second program I tested was fityk, a data analysis program with a poorly chosen name. Since it claimed to do data analysis and had numbers everywhere, I figured it was legit. It also allows the user to select from a long list of function types, from quadratic to exponential decay. Beyond these, the rest of the programs under Applications =&amp;gt; Poseidon become even more obscure and specialized, though through no fault of their own, considering that Poseidon is advertised for a rather specific purpose geared toward the scientific community.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;text-align: left;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 1)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;text-align: right;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 2)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
Given that several programs simply don’t work, Poseidon clearly falls somewhat short of its design goals. These errors will presumably be fixed in future releases; in the meantime, many of the working programs are quite unique and should be very useful to users looking for tools designed for specific tasks in mathematics/statistics, chemistry, bioinformatics, and GIS/CAD (computer-aided design). The non-working programs indicate that Poseidon is still in an unfinished state, which leaves a slightly disappointing impression. Aside from that, Poseidon has a nice visual presentation, with an appealing default desktop background and a clean interface, and its selection of programs gives anyone who knows how to use them a wide variety of tools at their disposal.&lt;br /&gt;
&lt;br /&gt;
For members of Poseidon’s target audience who can look past superficial flaws, such as the absence of the promised space simulators and a few broken programs, Poseidon has enough math- and science-related software to be useful to anyone who requires such programs. Poseidon delivers on its promise as a distribution designed for academic and scientific use.&lt;br /&gt;
&lt;br /&gt;
=Part II=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
&lt;br /&gt;
[[File:Synaptic.jpg‎|400px|right]]&lt;br /&gt;
Poseidon’s packaging format is deb, the Debian software package format. A Debian package is an archive containing two tar archives (compressed with gzip, bzip2, or LZMA): one holds control information, while the other holds the actual data.&lt;br /&gt;
&lt;br /&gt;
The Synaptic Package Manager is the package management front end that comes with Poseidon Linux. Synaptic offers a relatively easy-to-use graphical interface with a clear, simple layout. The left panel is the package browser, which displays package categories, and the larger panel on the right displays the packages included in the highlighted category. When a package is selected, the bottom panel displays a text description of it. Also shown are the version of each package currently installed on the system and the newest version available, so the user can keep track of how up to date their software is. &lt;br /&gt;
&lt;br /&gt;
Packages are added in Synaptic by going to File, then Add Downloaded Packages; of course, if the package you want is not already on your system, you must first download it. To remove packages, right-click a package in the larger right panel and select Mark for Removal or Mark for Complete Removal to check the box next to it, then click the Apply icon at the top to carry out the removal. Synaptic makes Poseidon’s software catalog very extensible, since the user can add any additional software packages and remove any unwanted ones, and its “installed version” and “latest version” columns make it even easier to keep software updated.&lt;br /&gt;
&lt;br /&gt;
Those who dislike using GUIs can add or remove packages from the command line using a tool called apt-get. To remove a package, simply type &#039;&#039;&#039;&#039;&#039;apt-get remove [packagename]&#039;&#039;&#039;&#039;&#039;, and to add a package, type &#039;&#039;&#039;&#039;&#039;apt-get install [packagename]&#039;&#039;&#039;&#039;&#039;. A damaged installed package can even be reinstalled using &#039;&#039;&#039;&#039;&#039;apt-get --reinstall install [packagename]&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux kernel&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.6.32&lt;br /&gt;
*Latest Version:		3.1.1&lt;br /&gt;
*Up-to-Date: 			No, the kernel is not up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			Kernel is the main component of the operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libc6&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	2.11.1-0ubuntu7.8		(released Feb. 1st, 2011)&lt;br /&gt;
*Latest Version:	2.11.1-0ubuntu7.8	&lt;br /&gt;
*Up-to-Date:		This package is up to date. &lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose: 		The GNU C library; nearly every program on the system depends on it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bash&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	4.1-2ubuntu3			(released April 19th, 2010)&lt;br /&gt;
*Latest Version:	        4.1-2ubuntu3&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Allows the user to input commands without using a GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Firefox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	5.0+build1+nobinonly-0ubuntu0.10.04.1~mfs1	&lt;br /&gt;
*Latest Version:	8.0.1&lt;br /&gt;
*Up-to-Date: 		This package is not up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Web browser gives the user access to the world wide web.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Thunderbird&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1	(released April 24th, 2011)&lt;br /&gt;
*Latest Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Email client that also has RSS feeds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;gtk2-engines-pixbuf&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Latest Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Up-to-Date: 			This package is up to date.&lt;br /&gt;
*Modifications:			This package contains the pixbuf theme engine.&lt;br /&gt;
*Purpose:			Gtk+ is a multi-platform toolkit for constructing GUIs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;dpkg&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Latest Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Handles both the installation and removal of Debian software packages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;busybox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1:1.13.3-1ubuntu11&lt;br /&gt;
*Latest Version:	1:1.13.3-1ubuntu11	&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Combines small versions of many common UNIX utilities into a single executable file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libreoffice&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.13.2&lt;br /&gt;
*Latest Version:	3.4.4&lt;br /&gt;
*Up-to-Date: 		This package is out of date. &lt;br /&gt;
*Modifications:		The option exists to extend the functionality of LibreOffice by installing additional packages.&lt;br /&gt;
*Purpose:		A free alternative to Microsoft Office, useful and relevant considering Poseidon is intended for specific math/science requirements.&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux follows the usual Linux startup sequence, in which the BIOS runs first. The BIOS (basic input/output system) checks that the computer’s hardware and peripherals are functioning together, and loads the Master Boot Record, the first 512 bytes of a storage device, which stores the boot record of the operating system. The MBR then starts the boot loader, the program that loads the operating system; in the case of Poseidon Linux, the boot loader is GRUB. Once the operating system is loaded, the kernel executes init, which is the last step of the boot procedure.&lt;br /&gt;
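To make the 512-byte figure concrete, here is a small sketch that uses a scratch file as a stand-in for a disk (the names disk.img, mbr.bin, and the BOOTCODE marker are illustrative; reading a real disk’s MBR, e.g. dd if=/dev/sda bs=512 count=1, requires root):&lt;br /&gt;

```shell
# Create a 1 MiB scratch file standing in for a disk.
dd if=/dev/zero of=disk.img bs=1024 count=1024 status=none
# Pretend some boot code sits at the very start of the "disk".
printf 'BOOTCODE' | dd of=disk.img conv=notrunc status=none
# The "MBR" is simply the first 512 bytes of the device.
dd if=disk.img of=mbr.bin bs=512 count=1 status=none
wc -c mbr.bin    # 512 bytes
```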
&lt;br /&gt;
Poseidon Linux is based on Ubuntu, so instead of using /etc/inittab, it gets the run level through another method. The default run level for Poseidon is 2; this value is set in /etc/init/rc-sysinit.conf.&lt;br /&gt;
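The relevant line looks roughly like the following (sketched from memory of the Upstart job on Ubuntu 10.04-era systems; exact contents vary by release):&lt;br /&gt;

```shell
# /etc/init/rc-sysinit.conf (excerpt, sketched; an Upstart job,
# not a shell script - exact contents vary by release)

env DEFAULT_RUNLEVEL=2

# Upstart switches to this runlevel at the end of system
# initialization unless the kernel command line overrides it.
```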
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1)	&amp;quot;Poseidon Linux.&amp;quot; Wikipedia, the Free Encyclopedia. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
2)	&amp;quot;DistroWatch.com: Poseidon Linux.&amp;quot; DistroWatch.com: Put the Fun Back into Computing. Use Linux, BSD. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
3)	&amp;quot;Poseidon Linux Homepage.&amp;quot; Web. 20 Oct. 2011. &amp;lt;https://sites.google.com/site/poseidonlinux/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
4)	&amp;quot;First Look at Poseidon Linux, the Linux For Scientists | Linux.com.&amp;quot; Linux.com | The Source for Linux Information. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
5)      &amp;quot;Deb (file Format).&amp;quot; Wikipedia, the Free Encyclopedia. Web. 16 Nov. 2011. &amp;lt;http://en.wikipedia.org/wiki/Deb_(file_format)&amp;gt;.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15429</id>
		<title>COMP 3000 2011 Report: PoseidonLinux</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_PoseidonLinux&amp;diff=15429"/>
		<updated>2011-12-06T12:55:56Z</updated>

		<summary type="html">&lt;p&gt;36chambers: /* Initialization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:PoseidonLogo2.png‎]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Part I=&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
The distribution is named Poseidon Linux, and was created by a team of Brazilian scientists, most of whom are oceanographers and marine biologists. The development team consists of five people, with contributions from quite a few others. &lt;br /&gt;
&lt;br /&gt;
The target audience for this distribution is the scientific community, and many of the programs that come pre-installed are intended for academic and scientific use. It includes a good deal of specialized software that isn’t available in the Ubuntu/Debian repositories, and thereby provides a useful collection of programs pertaining to subjects such as math and statistics, computer-aided design, multi-dimensional graphical visualization, chemistry, and bioinformatics. I obtained the operating system by downloading it from the Poseidon Linux homepage at: https://sites.google.com/site/poseidonlinux/download  &lt;br /&gt;
As of now, the most recent version of the operating system is Poseidon Linux 4.0; it is only available as a 32-bit build at the moment, though a 64-bit version is promised for the future. An older version, Poseidon Linux 3.2, is also available for download. The image file for Poseidon Linux is around 3.7 GB, and a full installation requires at least 9.8 GB of hard drive space. Poseidon Linux was originally derived from Kurumin Linux; after Kurumin was officially discontinued on January 29, 2009, Poseidon became based on Ubuntu, with the first Ubuntu-based release being Poseidon 3.0. Poseidon 3.0 was based on Ubuntu 8.04 LTS, whereas version 4.0 is based on Ubuntu 10.04.&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseidon_install_1.jpg|400px|right]][[File:Poseiden_install_2.jpg|300px|left]]&lt;br /&gt;
I installed Poseidon Linux using VirtualBox. When setting up the new virtual machine, I allocated 4 GB of RAM, set the file type of the new virtual disk to VDI, and set the virtual disk file to use dynamically allocated space. After creating the virtual machine and specifying its settings, I ran the machine, selected the image file containing Poseidon, and began the installation (screenshots below). A minor issue I came across during installation was my hard drive not having enough space left for Poseidon to install. This problem was quickly fixed after I reluctantly deleted a few legally downloaded movies from my hard drive to free up enough space for the installation. &lt;br /&gt;
&lt;br /&gt;
During the installation, the user is asked to input information, and thereby allowed to customize several things. These include the current date and time, as well as the time zone and location in which the user resides, the user&#039;s keyboard layout, and the user&#039;s preferred username and password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
[[File:Poseiden_4.jpg|400px|left|howboutNO?]][[File:Poseiden_3.jpg|400px||right|Pictured: not a space simulator]]&lt;br /&gt;
Due to the subject specialization of most of the programs and the knowledge required to use them to their full potential, I’m unable to adequately review the programs and give a detailed analysis of how useful or well made they are &#039;&#039;(see fig. 1)&#039;&#039;. In retrospect, it was unwise to choose a distribution based on the coolest-sounding name rather than practical usage. Aside from the inherent complexity of many of the programs that come with Poseidon, many of the listed programs simply do not execute when selected. The most noticeable disappointment among the inactive/broken/not-yet-implemented programs was the OpenUniverse Space Simulator, which by merit of name alone was obviously intended to be the high point of Poseidon’s entire existence &#039;&#039;(see fig. 2)&#039;&#039;. Other non-functioning programs include Stellarium, presumably another failed space simulator, and the PyMOL Molecular Graphics System, a program listed under Bioinformatics. Aside from the programs targeting Poseidon’s main audience, there are programming IDEs like Eclipse and Qt Creator, audio/video/image editing programs like Audacity, Pitivi Video Editor, and GIMP, office applications that include an assortment of LibreOffice programs, and 3D graphics modeling programs like Blender. &lt;br /&gt;
&lt;br /&gt;
The first working program I tested was GPeriodic, a periodic table of the elements that lets the user select any element on the table to view more detailed information about it. Such a program is of obvious value to anyone using Poseidon for scientific purposes, particularly chemistry. The second program I tested was fityk, a data analysis program with a poorly chosen name. Since it claimed to do data analysis and had numbers everywhere, I figured it was legit. It also allows the user to select from a long list of function types, from quadratic to exponential decay. Beyond this, the rest of the programs under Applications =&amp;gt; Poseidon become even more obscure and complicated, though through no fault of their own, considering that Poseidon is advertised for a rather specific purpose geared toward the scientific community.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;text-align: left;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 1)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;text-align: right;&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;&#039;(Fig. 2)&#039;&#039;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
Given that several programs simply don’t work, it is obvious that Poseidon falls somewhat short of its design goals. These errors will presumably be fixed in future releases; in the meantime, many of the working programs are quite unique and should be valuable to users looking for tools designed for very specific uses in mathematics/statistics, chemistry, bioinformatics, and GIS/CAD (computer-aided design). The non-working programs indicate that Poseidon is still in an unfinished state, which leaves a slightly disappointing impression. Aside from that, Poseidon has a nice visual presentation, with an appealing default desktop background and a clean interface, and the list of programs it features gives anyone who knows how to use them a wide variety of tools at their disposal.&lt;br /&gt;
&lt;br /&gt;
For Poseidon’s target audience, who can look past superficial flaws like the absence of the promised space simulators and a few missing programs, Poseidon offers enough math- and science-related software to be of at least some use to anyone who needs such programs. It delivers on its promise as a distribution designed for academic and scientific use.&lt;br /&gt;
&lt;br /&gt;
=Part II=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
&lt;br /&gt;
[[File:Synaptic.jpg‎|400px|right]]&lt;br /&gt;
Poseidon’s packaging format is deb, the Debian software package format. A Debian package is an archive containing two gzipped, bzipped, or lzma-compressed tar archives: one holds control information (metadata and maintainer scripts), while the other holds the actual data to be installed.&lt;br /&gt;
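As a rough sketch of that layout, a mock package with the same three members can be built and inspected with the standard ar and tar tools (the file name example.deb and the empty tar members are illustrative stand-ins, not a real installable package):&lt;br /&gt;

```shell
# A .deb is an ar archive with three members:
#   debian-binary, control.tar.gz, data.tar.gz
# Build a minimal mock with the same layout (illustration only).
echo "2.0" > debian-binary                  # format version marker
tar -czf control.tar.gz -T /dev/null        # empty stand-in for control info
tar -czf data.tar.gz -T /dev/null           # empty stand-in for the file data
ar rc example.deb debian-binary control.tar.gz data.tar.gz

ar t example.deb                            # lists the three members
tar -tzf control.tar.gz                     # would list the control files
```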
&lt;br /&gt;
The Synaptic Package Manager is the package management front end that comes with Poseidon Linux. Synaptic offers a relatively easy-to-use graphical interface with a clear, simple layout. The left panel is the package browser, which displays package categories, and the larger panel on the right displays the packages included in the highlighted category. When a package is selected, the bottom panel displays a text description of it. Also shown are the version of each package currently installed on the system and the newest version available, so the user can keep track of how up to date their software is. &lt;br /&gt;
&lt;br /&gt;
Packages are added in Synaptic by going to File, then Add Downloaded Packages; of course, if the package you want is not already on your system, you must first download it. To remove packages, right-click a package in the larger right panel and select Mark for Removal or Mark for Complete Removal to check the box next to it, then click the Apply icon at the top to carry out the removal. Synaptic makes Poseidon’s software catalog very extensible, since the user can add any additional software packages and remove any unwanted ones, and its “installed version” and “latest version” columns make it even easier to keep software updated.&lt;br /&gt;
&lt;br /&gt;
Those who dislike using GUIs can add or remove packages from the command line using a tool called apt-get. To remove a package, simply type &#039;&#039;&#039;&#039;&#039;apt-get remove [packagename]&#039;&#039;&#039;&#039;&#039;, and to add a package, type &#039;&#039;&#039;&#039;&#039;apt-get install [packagename]&#039;&#039;&#039;&#039;&#039;. A damaged installed package can even be reinstalled using &#039;&#039;&#039;&#039;&#039;apt-get --reinstall install [packagename]&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux kernel&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.6.32&lt;br /&gt;
*Latest Version:		3.1.1&lt;br /&gt;
*Up-to-Date: 			No, the kernel is not up to date.&lt;br /&gt;
*Modifications:			No apparent mods.&lt;br /&gt;
*Purpose:			Kernel is the main component of the operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libc6&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	2.11.1-0ubuntu7.8		(released Feb. 1st, 2011)&lt;br /&gt;
*Latest Version:	2.11.1-0ubuntu7.8	&lt;br /&gt;
*Up-to-Date:		This package is up to date. &lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose: 		The GNU C library; nearly every program on the system depends on it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bash&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	4.1-2ubuntu3			(released April 19th, 2010)&lt;br /&gt;
*Latest Version:	        4.1-2ubuntu3&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Allows the user to input commands without using a GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Firefox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	5.0+build1+nobinonly-0ubuntu0.10.04.1~mfs1	&lt;br /&gt;
*Latest Version:	8.0.1&lt;br /&gt;
*Up-to-Date: 		This package is not up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Web browser gives the user access to the world wide web.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Thunderbird&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1	(released April 24th, 2011)&lt;br /&gt;
*Latest Version:	3.1.10+build1+nobinonly-0ubuntu0.10.04.1&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods.&lt;br /&gt;
*Purpose:		Email client that also has RSS feeds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;gtk2-engines-pixbuf&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Latest Version:		2.20.1-0ubuntu2&lt;br /&gt;
*Up-to-Date: 			This package is up to date.&lt;br /&gt;
*Modifications:			This package contains the pixbuf theme engine.&lt;br /&gt;
*Purpose:			Gtk+ is a multi-platform toolkit for constructing GUIs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;dpkg&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Latest Version:	1.15.5.6ubuntu4.5&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Handles both the installation and removal of Debian software packages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;busybox&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	1:1.13.3-1ubuntu11&lt;br /&gt;
*Latest Version:	1:1.13.3-1ubuntu11	&lt;br /&gt;
*Up-to-Date: 		This package is up to date.&lt;br /&gt;
*Modifications:		No apparent mods. &lt;br /&gt;
*Purpose:		Combines small versions of many common UNIX utilities into a single executable file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;libreoffice&#039;&#039;&#039;&lt;br /&gt;
*Installed Version:	3.13.2&lt;br /&gt;
*Latest Version:	3.4.4&lt;br /&gt;
*Up-to-Date: 		This package is out of date. &lt;br /&gt;
*Modifications:		The option exists to extend the functionality of LibreOffice by installing additional packages.&lt;br /&gt;
*Purpose:		A free alternative to Microsoft Office, useful and relevant considering Poseidon is intended for specific math/science requirements.&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Poseidon Linux follows the usual Linux startup sequence, in which the BIOS runs first. The BIOS (basic input/output system) checks that the computer’s hardware and peripherals are functioning together, and loads the Master Boot Record, the first 512 bytes of a storage device, which stores the boot record of the operating system. The MBR then starts the boot loader, the program that loads the operating system; in the case of Poseidon Linux, the boot loader is GRUB. Once the operating system is loaded, the kernel executes init, which is the last step of the boot procedure.&lt;br /&gt;
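To make the 512-byte figure concrete, here is a small sketch that uses a scratch file as a stand-in for a disk (the names disk.img, mbr.bin, and the BOOTCODE marker are illustrative; reading a real disk’s MBR, e.g. dd if=/dev/sda bs=512 count=1, requires root):&lt;br /&gt;

```shell
# Create a 1 MiB scratch file standing in for a disk.
dd if=/dev/zero of=disk.img bs=1024 count=1024 status=none
# Pretend some boot code sits at the very start of the "disk".
printf 'BOOTCODE' | dd of=disk.img conv=notrunc status=none
# The "MBR" is simply the first 512 bytes of the device.
dd if=disk.img of=mbr.bin bs=512 count=1 status=none
wc -c mbr.bin    # 512 bytes
```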
&lt;br /&gt;
Poseidon Linux is based on Ubuntu, so instead of using /etc/inittab, it gets the run level through another method.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1)	&amp;quot;Poseidon Linux.&amp;quot; Wikipedia, the Free Encyclopedia. Web. 20 Oct. 2011. &lt;br /&gt;
&lt;br /&gt;
2)	&amp;quot;DistroWatch.com: Poseidon Linux.&amp;quot; DistroWatch.com: Put the Fun Back into Computing. Use Linux, BSD. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
3)	&amp;quot;Poseidon Linux Homepage.&amp;quot; Web. 20 Oct. 2011. &amp;lt;https://sites.google.com/site/poseidonlinux/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
4)	&amp;quot;First Look at Poseidon Linux, the Linux For Scientists | Linux.com.&amp;quot; Linux.com | The Source for Linux Information. Web. 20 Oct. 2011.&lt;br /&gt;
&lt;br /&gt;
5)      &amp;quot;Deb (file Format).&amp;quot; Wikipedia, the Free Encyclopedia. Web. 16 Nov. 2011. &amp;lt;http://en.wikipedia.org/wiki/Deb_(file_format)&amp;gt;.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>36chambers</name></author>
	</entry>
</feed>