<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=DistOS_2021F_2021-11-09</id>
	<title>DistOS 2021F 2021-11-09 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=DistOS_2021F_2021-11-09"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2021F_2021-11-09&amp;action=history"/>
	<updated>2026-05-12T18:52:28Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2021F_2021-11-09&amp;diff=23449&amp;oldid=prev</id>
		<title>Soma: Created page with &quot;==Notes==  &lt;pre&gt; Lecture 15 ----------  - experiences  - proposal  - midterm update  - participation  Spanner  - a big, distributed (semi-)relational database    - very consis...&quot;</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2021F_2021-11-09&amp;diff=23449&amp;oldid=prev"/>
		<updated>2021-11-11T21:48:02Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;==Notes==  &amp;lt;pre&amp;gt; Lecture 15 ----------  - experiences  - proposal  - midterm update  - participation  Spanner  - a big, distributed (semi-)relational database    - very consis...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==Notes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Lecture 15&lt;br /&gt;
----------&lt;br /&gt;
 - experiences&lt;br /&gt;
 - proposal&lt;br /&gt;
 - midterm update&lt;br /&gt;
 - participation&lt;br /&gt;
&lt;br /&gt;
Spanner&lt;br /&gt;
 - a big, distributed (semi-)relational database&lt;br /&gt;
   - very consistent&lt;br /&gt;
 - supported SQL&lt;br /&gt;
   - all of the query parts&lt;br /&gt;
   - management, maybe not so much?&lt;br /&gt;
 - big deal, because of usability&lt;br /&gt;
   - developers know SQL&lt;br /&gt;
   - want transactions, helpful for consistency&lt;br /&gt;
     across tables&lt;br /&gt;
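&lt;br /&gt;
A minimal sketch of why cross-table transactions matter. This uses&lt;br /&gt;
the Python stdlib sqlite3 module, NOT the Spanner API, purely to&lt;br /&gt;
illustrate the idea; the table and column names are made up:&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical example: one transaction touching two tables.
# sqlite3 stands in for Spanner here; the point is only that
# both writes commit together or neither does.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE audit_log (account_id INTEGER, delta INTEGER);
    INSERT INTO accounts VALUES (1, 100), (2, 50);
''')

with conn:  # one atomic transaction across both tables
    conn.execute('UPDATE accounts SET balance = balance - 30 WHERE id = 1')
    conn.execute('UPDATE accounts SET balance = balance + 30 WHERE id = 2')
    conn.execute('INSERT INTO audit_log VALUES (1, -30)')

balances = dict(conn.execute('SELECT id, balance FROM accounts'))
print(balances)  # {1: 70, 2: 80}
```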
&lt;br /&gt;
in distributed systems, we&amp;#039;re always making tradeoffs&lt;br /&gt;
between functionality, scalability, and complexity&lt;br /&gt;
 - normally we just think about functionality vs scalability&lt;br /&gt;
   (SQL vs NoSQL)&lt;br /&gt;
 - but add complexity and you can get functionality &amp;amp; scalability at the same time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Spanner is proprietary to Google; others have&lt;br /&gt;
made their own versions (CockroachDB)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Tradeoff also shows up in TensorFlow&lt;br /&gt;
 - for &amp;quot;machine learning&amp;quot;&lt;br /&gt;
 - what is it really for?&lt;br /&gt;
   - working with n-dimensional arrays (i.e. tensors)&lt;br /&gt;
 - and we can do neural networks if we can do fast&lt;br /&gt;
   tensor processing&lt;br /&gt;
&lt;br /&gt;
Is this just the same thing as MapReduce?&lt;br /&gt;
 - what&amp;#039;s different?&lt;br /&gt;
   - not embarrassingly parallel!&lt;br /&gt;
   - have to communicate between tasks as they run,&lt;br /&gt;
     not just at the end (i.e., during reduce)&lt;br /&gt;
&lt;br /&gt;
Modern machine learning is based on large, mutable models&lt;br /&gt;
 - MANY parameters (weights in the neural network)&lt;br /&gt;
&lt;br /&gt;
Basic idea of a neural network&lt;br /&gt;
 - input, hidden, and output nodes&lt;br /&gt;
 - input nodes are connected to layers of hidden nodes&lt;br /&gt;
 - hidden nodes are connected to output nodes&lt;br /&gt;
 - weights on connections between nodes determine&lt;br /&gt;
   how values are transformed as they propagate along connections between nodes&lt;br /&gt;
&lt;br /&gt;
So here, take an input tensor, transform it a bunch of times until you get an output tensor&lt;br /&gt;
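&lt;br /&gt;
A toy forward pass under these assumptions (made-up fixed weights,&lt;br /&gt;
tanh as the nonlinearity), in plain Python:&lt;br /&gt;
&lt;br /&gt;
```python
# Toy 2-3-1 network: values propagate input -> hidden -> output,
# transformed by the weights on each connection. Weights here are
# arbitrary constants; in real training they are learned parameters.
import math

def layer(inputs, weights):
    # each row of weights produces one node in the next layer:
    # weighted sum of inputs, passed through a nonlinearity
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

w_hidden = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.3]]  # 2 inputs -> 3 hidden
w_output = [[1.0, -1.0, 0.5]]                      # 3 hidden -> 1 output

x = [0.2, 0.8]          # input tensor (here just a vector)
h = layer(x, w_hidden)  # hidden activations
y = layer(h, w_output)  # output tensor: one value, strictly in (-1, 1)
print(y)
```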
&lt;br /&gt;
All that &amp;quot;deep&amp;quot; learning means is that the neural&lt;br /&gt;
network has many, many layers of hidden nodes&lt;br /&gt;
&lt;br /&gt;
The cool part about TensorFlow is you don&amp;#039;t care about the hardware&lt;br /&gt;
 - your data model can be efficiently mapped onto a wide&lt;br /&gt;
   variety of architectures&lt;br /&gt;
 - big change from past efforts in supercomputing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Soma</name></author>
	</entry>
</feed>