<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=Mobile_App_Development_2022W_Lecture_6</id>
	<title>Mobile App Development 2022W Lecture 6 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=Mobile_App_Development_2022W_Lecture_6"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Mobile_App_Development_2022W_Lecture_6&amp;action=history"/>
	<updated>2026-04-06T01:52:59Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Mobile_App_Development_2022W_Lecture_6&amp;diff=23748&amp;oldid=prev</id>
		<title>Soma: Created page with &quot;==Video==  Video from the lecture given on January 28, 2022 is now available: * [https://homeostasis.scs.carleton.ca/~soma/mad-2022w/lectures/comp1601-2022w-lec06-20220128.m4v...&quot;</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Mobile_App_Development_2022W_Lecture_6&amp;diff=23748&amp;oldid=prev"/>
		<updated>2022-01-28T20:07:49Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;==Video==  Video from the lecture given on January 28, 2022 is now available: * [https://homeostasis.scs.carleton.ca/~soma/mad-2022w/lectures/comp1601-2022w-lec06-20220128.m4v...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==Video==&lt;br /&gt;
&lt;br /&gt;
Video from the lecture given on January 28, 2022 is now available:&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/mad-2022w/lectures/comp1601-2022w-lec06-20220128.m4v video]&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/mad-2022w/lectures/comp1601-2022w-lec06-20220128.cc.vtt auto-generated captions]&lt;br /&gt;
Video is also available through Brightspace (Resources-&amp;gt;Zoom Meetings (Recordings, etc.)-&amp;gt;Cloud Recordings tab).  Note that here you&amp;#039;ll also see chat messages.&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Lecture 6&lt;br /&gt;
---------&lt;br /&gt;
Assignment 1 due Feb 3rd&lt;br /&gt;
  - accepted until Feb. 15th, solutions discussed in class on the 16th&lt;br /&gt;
Assignment 2 due Feb 18th&lt;br /&gt;
  - but accepted until Feb 22nd, no penalties&lt;br /&gt;
  - A2 solutions posted on Feb 23rd, discussed in class on March 2nd&lt;br /&gt;
&lt;br /&gt;
You can submit multiple times&lt;br /&gt;
 - only the last submission is kept&lt;br /&gt;
&lt;br /&gt;
A2 will be posted after T4 is posted&lt;br /&gt;
 - so by the end of next week&lt;br /&gt;
&lt;br /&gt;
When completing assignments and exams:&lt;br /&gt;
 - download the template&lt;br /&gt;
 - rename template, replacing &amp;quot;template&amp;quot; with your MyCarletonOne username&lt;br /&gt;
 - fill in the top of the template&lt;br /&gt;
   - make sure to use the correct Student ID number!&lt;br /&gt;
     otherwise your grades may not be properly recorded&lt;br /&gt;
 - put answers in the template using a text editor&lt;br /&gt;
   - make sure to keep line endings as LF (not CR/LF)&lt;br /&gt;
   - you can use a word processor, but then be sure to save&lt;br /&gt;
     it as a text file (but beware, Word can insert weird characters&lt;br /&gt;
     into text files)&lt;br /&gt;
   - when you cite outside work, just add the info to the end of your answer&lt;br /&gt;
     (references should be per question)&lt;br /&gt;
     It can just be a link, but say what you got from each source&lt;br /&gt;
 - run your answers through the validator&lt;br /&gt;
   - make sure it reports &amp;quot;PASSED&amp;quot; at the top and&lt;br /&gt;
     all your answers were properly split up&lt;br /&gt;
 - if it passes, upload to brightspace&lt;br /&gt;
&lt;br /&gt;
Why do this?&lt;br /&gt;
 - because this way we can use a script to split up answers&lt;br /&gt;
 - thus we grade all of question 1, then all of question 2, etc&lt;br /&gt;
   - all with no names&lt;br /&gt;
 - makes grading fairer and faster&lt;br /&gt;
&lt;br /&gt;
LF is the linefeed character; CR is the carriage return character&lt;br /&gt;
&lt;br /&gt;
Text files can use these characters in different combinations to indicate the end of a line:&lt;br /&gt;
 - LF is for UNIX-like systems (including modern macOS)&lt;br /&gt;
 - CR is for classic Macs (Mac OS 9 and earlier)&lt;br /&gt;
 - CR/LF is for DOS/Windows text files&lt;br /&gt;
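The difference between the endings is just the bytes at the end of each line; a quick Swift sketch (plain Foundation, example strings assumed):

```swift
import Foundation

// LF vs. CR/LF: the same one-line file content with two different endings.
let unixLine = "hello\n"      // LF only: UNIX-like systems, modern macOS
let dosLine  = "hello\r\n"    // CR then LF: DOS/Windows

print(Array(unixLine.utf8))   // [104, 101, 108, 108, 111, 10]
print(Array(dosLine.utf8))    // [104, 101, 108, 108, 111, 13, 10]

// Normalizing CR/LF to LF is a simple replacement.
let normalized = dosLine.replacingOccurrences(of: "\r\n", with: "\n")
assert(normalized == unixLine)
```

Many text editors can also convert line endings directly; this just shows why a validator can tell the two apart.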
&lt;br /&gt;
Note that the script finds questions by looking for lines that start with a letter/number combination followed by a period.  So don&amp;#039;t start any lines in your answers with such a combination, or they won&amp;#039;t pass validation.&lt;br /&gt;
 - just add a space before the letter/number/period combination if you really&lt;br /&gt;
   need it in your answer&lt;br /&gt;
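We haven't seen the validator's source, but the check it describes can be sketched with a regular expression (the pattern below is an assumption, not the actual grading script):

```swift
import Foundation

// Sketch of the validator's question detection (assumed pattern, not the
// actual script): a line starting with a run of letters/digits followed
// immediately by a period is treated as the start of a new question.
func looksLikeQuestionStart(_ line: String) -> Bool {
    line.range(of: "^[A-Za-z0-9]+\\.", options: .regularExpression) != nil
}

print(looksLikeQuestionStart("1. What is a view?"))                 // true
print(looksLikeQuestionStart("e.g. this would trip the validator")) // true
print(looksLikeQuestionStart(" e.g. a leading space avoids it"))    // false
```

Note the second case: an innocent "e.g." at the start of a line matches the same pattern as a question number, which is exactly why the notes say to add a leading space.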
&lt;br /&gt;
&lt;br /&gt;
Tutorial 3 - let&amp;#039;s talk about gestures&lt;br /&gt;
&lt;br /&gt;
In order to understand user events, we have to make sure we have a solid mental model of how things work in SwiftUI with respect to user input&lt;br /&gt;
&lt;br /&gt;
When a user touches the screen or presses a key, what happens?&lt;br /&gt;
Specifically, what code of yours runs?&lt;br /&gt;
 - to a first approximation, your code does nothing at all, it is&lt;br /&gt;
   all handled for you&lt;br /&gt;
&lt;br /&gt;
You can choose to handle specific events, but they are mostly high-level events&lt;br /&gt;
 - something happened and you then respond&lt;br /&gt;
&lt;br /&gt;
Notice what is happening with a TextField&lt;br /&gt;
 - we tell it what the initial text is and we give it a state variable&lt;br /&gt;
 - our code doesn&amp;#039;t see individual keystrokes *at all*&lt;br /&gt;
   - indeed, it doesn&amp;#039;t directly see any of the keyboard input&lt;br /&gt;
 - so we aren&amp;#039;t doing anything like &amp;quot;readline&amp;quot;, &amp;quot;scanf&amp;quot; etc&lt;br /&gt;
   - that all happens behind the scenes&lt;br /&gt;
&lt;br /&gt;
Instead, what happens is this:&lt;br /&gt;
 - we associate a state variable with the TextField&lt;br /&gt;
 - when that state variable changes, SwiftUI refreshes all affected views&lt;br /&gt;
    - by running our code defining how those views should look&lt;br /&gt;
&lt;br /&gt;
In textanalyzer-1, only one view depends upon t (other than the TextField itself), the Text view on line 16 that reports the results of the analysis&lt;br /&gt;
  - this Text view is redrawn whenever t changes, automatically&lt;br /&gt;
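A minimal sketch in the spirit of textanalyzer-1 (the variable name t is from the notes; the view structure and labels here are assumptions, not the actual course code):

```swift
import SwiftUI

// Sketch of the textanalyzer idea: the TextField writes into the state
// variable t, and the Text view below is rebuilt automatically whenever
// t changes. Our code never sees individual keystrokes.
struct AnalyzerView: View {
    @State private var t = ""

    var body: some View {
        VStack {
            TextField("Type something", text: $t)
                .textFieldStyle(.roundedBorder)
            // Redrawn whenever t changes; we never redraw it ourselves.
            Text("\(t.count) characters, \(t.split(separator: " ").count) words")
        }
        .padding()
    }
}
```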
&lt;br /&gt;
In classic GUI frameworks, things are very different&lt;br /&gt;
 - we register an event handler for user input&lt;br /&gt;
 - that handler processes the input, updates the app&amp;#039;s state,&lt;br /&gt;
   and then updates the user interface as needed (manually)&lt;br /&gt;
    - many developers just create a &amp;quot;refresh entire screen&amp;quot;&lt;br /&gt;
      after user input, but that can be slow and can lead to&lt;br /&gt;
      the interface flickering&lt;br /&gt;
    - SwiftUI effectively does this refresh too, but automatically and efficiently&lt;br /&gt;
&lt;br /&gt;
Observe, then, how we do animation&lt;br /&gt;
 - we can just update state variables, and the associated things displayed&lt;br /&gt;
   will automatically change&lt;br /&gt;
 - that&amp;#039;s how the circle moves&lt;br /&gt;
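The moving-circle idea can be sketched like this (an assumed example in the same spirit, not the course demo itself):

```swift
import SwiftUI

// Moving a circle is just changing a state variable; SwiftUI animates
// the views that depend on it.
struct MovingCircle: View {
    @State private var x: CGFloat = 0

    var body: some View {
        Circle()
            .frame(width: 50, height: 50)
            .offset(x: x)
            .onTapGesture {
                // Update the state; the view animates to match on its own.
                withAnimation(.easeInOut(duration: 1)) {
                    x = x == 0 ? 200 : 0
                }
            }
    }
}
```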
&lt;br /&gt;
So when we handle gestures, all we do is register functions/closures&lt;br /&gt;
 that will be called when user input events happen&lt;br /&gt;
   - and all those functions do is update state variables&lt;br /&gt;
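For example, a drag gesture is registered as a closure, and that closure does nothing but update state (names below are assumptions for illustration):

```swift
import SwiftUI

// Registering a gesture means supplying closures; each closure only
// updates state, and the dependent views follow automatically.
struct DraggableDot: View {
    @State private var position = CGPoint(x: 100, y: 100)

    var body: some View {
        Circle()
            .frame(width: 40, height: 40)
            .position(position)
            .gesture(
                DragGesture()
                    .onChanged { value in
                        // High-level event: we get a location, not raw touches.
                        position = value.location
                    }
            )
    }
}
```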
&lt;br /&gt;
Note that touch events are defined using high-level abstractions&lt;br /&gt;
 - because processing user events manually is painful&lt;br /&gt;
 - and also, that sort of state management doesn&amp;#039;t fit in well with SwiftUI&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Soma</name></author>
	</entry>
</feed>