Mobile App Development 2022W Lecture 6

Video

Video from the lecture given on January 28, 2022 is now available:

* https://homeostasis.scs.carleton.ca/~soma/mad-2022w/lectures/comp1601-2022w-lec06-20220128.m4v

Video is also available through Brightspace (Resources->Zoom Meetings (Recordings, etc.)->Cloud Recordings tab). Note that there you'll also see the chat messages.

Notes

Lecture 6
---------
Assignment 1 due Feb 3rd
  - accepted until Feb. 15th, solutions discussed in class on the 16th
Assignment 2 due Feb 18th
  - but accepted until Feb 22nd, no penalties
  - A2 solutions posted on Feb 23rd, discussed in class on March 2nd

You can submit multiple times
 - only the last submission is kept

A2 will be posted after T4 is posted
 - so by the end of next week

When completing assignments and exams:
 - download the template
 - rename template, replacing "template" with your MyCarletonOne username
 - fill in the top of the template
   - make sure to use the correct Student ID number!
     otherwise your grades may not be properly recorded
 - put answers in the template using a text editor
   - make sure to keep line endings as LF (not CR/LF)
   - you can use a word processor, but then be sure to save
     it as a text file (but beware, Word can insert weird characters
     into text files)
 - when you cite outside work, just add the info to the end of your answer
   (references should be per question)
 - a reference can just be a link, but say what you got from each source
 - run your answers through the validator
   - make sure it reports "PASSED" at the top and
     all your answers were properly split up
 - if it passes, upload to Brightspace

Why do this?
 - because this way we can use a script to split up answers
 - thus we grade all of question 1, then all of question 2, etc
   - all with no names
 - makes grading fairer and faster

LF is the linefeed character; CR is the carriage return character

Text files can use these in multiple ways to indicate end of line
 - LF is for UNIX-like systems (including modern macOS)
 - CR is for old Macs (Mac OS 9 and before)
 - CR/LF is for DOS/Windows text files
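
If you ever need to fix line endings yourself, a minimal Swift sketch of the
conversion (the file name "answers.txt" is just an assumed example) is:

import Foundation

// Convert CR/LF (Windows) and bare CR (classic Mac) endings to LF.
let url = URL(fileURLWithPath: "answers.txt")  // assumed file name
do {
    var text = try String(contentsOf: url, encoding: .utf8)
    text = text.replacingOccurrences(of: "\r\n", with: "\n") // CR/LF -> LF
    text = text.replacingOccurrences(of: "\r", with: "\n")   // bare CR -> LF
    try text.write(to: url, atomically: true, encoding: .utf8)
} catch {
    print("Could not process file: \(error)")
}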

Note that the script looks for questions by looking for lines that start
with a letter/number combination followed by a period.  So don't start any
lines in your answers with such a combination or it won't pass validation
 - just add a space before a letter/number/period combination if you really
   need it in your answer
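
The exact pattern the validator uses isn't shown here, but a plausible Swift
sketch of that kind of check (the regular expression is an assumption) is:

import Foundation

// A line beginning with letters/digits immediately followed by a period
// (e.g. "1.", "2a.") looks like a question marker to the splitting script.
func looksLikeQuestionMarker(_ line: String) -> Bool {
    line.range(of: #"^[A-Za-z0-9]+\."#, options: .regularExpression) != nil
}

print(looksLikeQuestionMarker("2a. this would confuse the splitter"))  // true
print(looksLikeQuestionMarker(" 2a. a leading space makes it safe"))   // false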


Tutorial 3 - let's talk about gestures

In order to understand user events, we have to make sure we have a solid
mental model of how things work in SwiftUI with respect to user input

When a user touches the screen or presses a key, what happens?
Specifically, what code of yours runs?
 - to a first approximation, your code does nothing at all; it is
   all handled for you

You can choose to handle specific events, but they are mostly high-level events
 - something happened and you then respond

Notice what is happening with a TextField
 - we tell it what the initial text is and we give it a state variable
 - our code doesn't see individual keystrokes *at all*
   - indeed, it doesn't directly see any of the keyboard input
 - so we aren't doing anything like "readline", "scanf" etc
   - that all happens behind the scenes

Instead, what happens is this
 - we associate a state variable with the TextField
 - when that state variable changes, SwiftUI refreshes all affected views
    - by running our code defining how those views should look

In textanalyzer-1, only one view depends upon t (other than the TextField
itself): the Text view on line 16 that reports the results of the analysis
  - this Text view is redrawn whenever t changes, automatically
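
A stripped-down sketch of this pattern (not the actual textanalyzer-1 code;
the names and the "analysis" here are assumed):

import SwiftUI

struct ContentView: View {
    // The state variable bound to the TextField; our code never
    // sees individual keystrokes.
    @State private var t = ""

    var body: some View {
        VStack {
            // SwiftUI handles all keyboard input and updates t for us.
            TextField("Enter text", text: $t)
            // This view depends on t, so it is redrawn automatically
            // whenever t changes.
            Text("Character count: \(t.count)")
        }
    }
}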

In classic GUI frameworks, things are very different
 - we register an event handler for user input
 - that handler processes the input, updates the app's state,
   and then updates the user interface as needed (manually)
    - many developers just refresh the entire screen
      after user input, but that can be slow and can lead to
      the interface flickering
    - SwiftUI does this, but it does so automatically and efficiently
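
For contrast, a UIKit-flavoured sketch of the classic approach (a minimal
illustration, not code from the lecture; layout code is omitted):

import UIKit

class CounterViewController: UIViewController {
    let label = UILabel()
    let button = UIButton(type: .system)
    var count = 0

    override func viewDidLoad() {
        super.viewDidLoad()
        // Register an event handler for user input.
        button.addTarget(self, action: #selector(didTap), for: .touchUpInside)
    }

    @objc func didTap() {
        // The handler updates the app's state...
        count += 1
        // ...and then must update the interface manually.
        label.text = "Count: \(count)"
    }
}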

Observe, then, how we do animation
 - we can just update state variables, and the associated things displayed
   will automatically change
 - that's how the circle moves
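
A minimal sketch of the idea (assumed code, not the lecture's example):
tapping changes a state variable, and the circle's position follows.

import SwiftUI

struct MovingCircle: View {
    @State private var x: CGFloat = 0

    var body: some View {
        Circle()
            .frame(width: 50, height: 50)
            .offset(x: x)
            .onTapGesture {
                // Just update the state; the view that depends on it
                // is redrawn (and here animated) automatically.
                withAnimation { x += 50 }
            }
    }
}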

So when we handle gestures, all we do is register functions/closures
 that will be called when user input events happen
   - and all those functions do is update state variables
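
A hedged sketch of registering such a closure (names assumed): the drag
handler does nothing but update a state variable.

import SwiftUI

struct DraggableCircle: View {
    @State private var offset: CGSize = .zero

    var body: some View {
        Circle()
            .frame(width: 50, height: 50)
            .offset(offset)
            .gesture(
                DragGesture()
                    // The closure only updates state; SwiftUI redraws
                    // the circle at its new offset.
                    .onChanged { value in offset = value.translation }
                    .onEnded { _ in offset = .zero }
            )
    }
}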

Note that touch events are defined using high-level abstractions
 - because processing user events manually is painful
 - and also, that sort of state management doesn't fit in well with SwiftUI