Mobile App Development 2021W Lecture 1

From Soma-notes


Video from the lecture given on January 11, 2021 is now available.


Mobile Application Development
Lecture 1

Course outline
 - grading
 - communication
 - collaboration
 - accommodation
 - software

 - why this course doesn't make sense
 - why it does make sense

 - working with large software systems
   - managing ignorance
   - mental models & experimentation
 - design rationale, not syntax
 - concurrency, asynchronous programming
   - event-driven programming
   - declarative programming
 - security constraints
 - Xcode           <--- only runs on macOS
 - Android Studio  <--- cross-platform

you don't *need* your own device, but it can be fun!

Programming languages for MAD

 - there are many languages
   - Swift
     - Legacy: Objective-C
   - Kotlin
     - Legacy: Java*

We're starting with Swift and iOS, then will cover Android

iOS is further along on the "modern" path
 - SwiftUI

Why do I say these languages are more "modern"?
 - not memory management
   (we'll talk about that)

It is all about types
 - especially type inference

Normally with types we have
 - static <- C, Java
 - dynamic <- Python, JavaScript
   - references have no type, only data does
   - a reference (variable) can refer to data of any type
With static types, the compiler can check function/method arguments
  - but at the cost of extra syntax, mental overhead

With dynamic types, the runtime or app code checks types
  - so you get runtime errors that would have been
    detected by a compiler
  - but, you don't have to spend as much time thinking/
    declaring types
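The static-checking tradeoff above can be sketched in Swift; `toCelsius` here is a made-up helper, not from the lecture:

```swift
// Static typing: the compiler checks argument types for us.
// toCelsius is a hypothetical example function.
func toCelsius(_ fahrenheit: Double) -> Double {
    return (fahrenheit - 32.0) * 5.0 / 9.0
}

let c = toCelsius(212.0)   // OK: Double argument
// toCelsius("212")        // compile-time error: String is not a Double
print(c)                   // 100.0
```

In Python or JavaScript the bad call would only fail (or silently misbehave) at runtime; here it never compiles.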

Swift, Kotlin do type inference
 - statically typed
 - but, compiler infers type based on context where
   it can
 - if type cannot be inferred and isn't specified,
   you get a compiler error
     - no runtime type errors

   double f = 32.0;  <-- C-like
   var f = 32.0;     <-- JavaScript-like
   f = "Hello";  <-- legal in JavaScript, illegal in C

   var f = 32.0      <-- Swift knows that f is a Double

   f = "Hello"   <-- compiler error
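A runnable version of the sketch above, checking the inferred types with `type(of:)`:

```swift
var f = 32.0               // no annotation: Swift infers Double by default
print(type(of: f))         // Double

let g: Float = 32.0        // an explicit annotation overrides the default
print(type(of: g))         // Float

// f = "Hello"             // compile-time error: cannot assign String to Double
```

Static types, JavaScript-like brevity: the declaration carries no type syntax, but the compiler still rejects the reassignment.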

also used for object-oriented programming

Big advantage: removes non-semantic text from code,
  makes code "clearer" when determining purpose

Big disadvantage: adds magic, because while the
  compiler knows the types, the developer may not!

Type inference comes from functional programming
  ML was a pioneer (I think)

Programming paradigms
 - procedural   <-- C functions, data
 - object oriented  <-- Java, C++, objects that combine
                        functions and data, inheritance
 - functional
   - no state, only bindings
   - based on *math*
   - no mutation: can't write i = i+1, you bind new values instead
   - ML, Haskell, F#
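The "no state, only bindings" idea can be sketched in Swift's functional style; these names are just illustrative:

```swift
// Functional style: no mutable accumulator, only bindings
// and transformations over immutable values.
let numbers = [1, 2, 3, 4]

// An imperative loop would mutate a counter (sum = sum + n);
// here each result is a new binding instead.
let doubled = numbers.map { $0 * 2 }   // [2, 4, 6, 8]
let sum = numbers.reduce(0, +)         // 10
print(doubled, sum)
```

Swift isn't a pure functional language, but `let`, `map`, and `reduce` carry the same flavor as ML or Haskell.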

 - declarative <-- Prolog, automated theorem proving
   - give rules, knowledge, constraints
   - system figures out how to combine/use them

Temperature converter
 - input a string, convert it to a float
 - but when I converted it, I got a "Float?"
   - could be a Float, could be nil
   - using a nil value is bad, it will crash the program
   - so the compiler doesn't allow you to use a "Float?" directly;
     you have to check its value to convert it to a Float
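The conversion check can be sketched like this, assuming the input arrives as a String (e.g. from a text field):

```swift
// Float(String) returns Float? — a Float, or nil if parsing fails.
let input = "98.6"
let maybeTemp = Float(input)

// The compiler won't let us use maybeTemp as a Float directly;
// optional binding ("if let") unwraps it safely.
if let temp = maybeTemp {
    let celsius = (temp - 32.0) * 5.0 / 9.0
    print("Celsius:", celsius)
} else {
    print("Not a number")
}
```

The runtime crash a dynamic language would risk becomes a compile-time obligation: you must handle the nil case before the value can be used.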