Mobile App Development 2022W Lecture 20

From Soma-notes

Video

Video from the lecture given on March 30, 2022 is now available.

Video is also available through Brightspace (Resources->Zoom Meetings (Recordings, etc.)->Cloud Recordings tab). Note that here you'll also see chat messages.

Notes

Lecture 20
----------

Language runtimes



Operating System (P)  |   Applications
--------------------------------------
Hardware (CPU/RAM/Devices)

P = runs with full privileges

Applications written in languages like C and C++ run directly on
the hardware
 - they are compiled to machine code specific to the CPU of the system

So, like the operating system, applications in C and C++ run
"directly" on the hardware
  - CPU runs in a special mode that limits access to hardware
    (user mode)
  - when running the OS, CPU runs in supervisor mode (full access)


But...most languages we program in today don't work like this


                    |  bytecode or source code
OS (P)|  Native App |  Language runtime
----------------------------------------
Hardware (CPU/RAM/Devices)

Java: compiled to bytecode, which is then run (JIT compiled & executed)
      in a Java Virtual Machine (JVM)
Python: compiled to bytecode, which is interpreted by the Python VM
JavaScript: just-in-time compiled to machine code by a JavaScript VM

Traditionally, we had either compiled or interpreted languages
 - compiled ones were converted to machine code once,
   machine code was then run
 - interpreted ones were "interpreted" line by line, running appropriate
   code as needed

Interpreted code is much, much slower, but there's no slow compile step
  - allows for fast development
  - also can require only modest resources (RAM-wise)
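
A rough way to see the interpreter overhead from inside Python (just a
sketch; exact numbers vary by machine and Python version): time a
pure-Python loop against the built-in sum(), which is implemented in C.

import timeit

def python_sum(values):
    total = 0
    for v in values:      # each iteration executes several bytecode instructions
        total += v
    return total

data = list(range(100_000))

print("pure Python loop:", timeit.timeit(lambda: python_sum(data), number=100))
print("built-in sum (C):", timeit.timeit(lambda: sum(data), number=100))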

bytecode: compiled from source, but not machine code
  - easier to process
  
code optimization: make code run faster/take up less space
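
Both ideas are easy to see in CPython with the standard dis module
(the exact opcodes shown depend on the Python version):

import dis

def add(a, b):
    return a + b

dis.dis(add)            # LOAD_FAST a / LOAD_FAST b / an add opcode / RETURN_VALUE

# a small compile-time optimization: CPython folds constant expressions,
# so this bytecode loads the precomputed constant 6 instead of computing 2 * 3
dis.dis(lambda: 2 * 3)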

But nowadays, we have just-in-time compiled languages
 - source or bytecode => machine code as needed

In fact, code may be compiled multiple times
 - fast compile to get things running quickly,
   but code isn't optimized
 - slower compile for frequently run code
   (takes longer to optimize code, but
    result runs faster)
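
Real JIT runtimes (e.g. V8, HotSpot) do this tiering internally with
machine code; purely as a toy illustration of the idea (all names made
up), here is a Python sketch where a cheap baseline version runs until
a function becomes "hot", then a faster-but-costlier-to-prepare version
takes over:

HOT_THRESHOLD = 1000    # after this many calls, treat the function as "hot"

def tiered(baseline, make_optimized):
    state = {"calls": 0, "fast": None}

    def wrapper(*args):
        state["calls"] += 1
        if state["fast"] is None and state["calls"] >= HOT_THRESHOLD:
            # the "optimizing compile": expensive to do once, faster to run afterwards
            state["fast"] = make_optimized()
        active = state["fast"] if state["fast"] is not None else baseline
        return active(*args)

    return wrapper

# baseline recomputes every call; the "optimized" version precomputes a table once
square = tiered(lambda n: n * n,
                lambda: [i * i for i in range(10_000)].__getitem__)

for _ in range(2000):
    square(50)          # the table-lookup version takes over after 1000 calls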

The simplest translation of source to machine code is mostly looking
things up in a table
 - A => B
 - but this is far from the most efficient way of doing things
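
As a toy sketch (made-up bytecode, rough x86-64-style output, nothing a
real compiler would ship): each op is replaced by a fixed template via a
table lookup, with no register allocation and no optimization, which is
why this naive approach produces slow code.

TEMPLATES = {
    "PUSH_CONST": "    mov rax, {arg}\n    push rax",
    "ADD":        "    pop rbx\n    pop rax\n    add rax, rbx\n    push rax",
    "RETURN":     "    pop rax\n    ret",
}

def translate(bytecode):
    # every op becomes its fixed template; nothing is analyzed or optimized
    return "\n".join(TEMPLATES[op].format(arg=arg) for op, arg in bytecode)

# bytecode for "return 2 + 3" on a toy stack machine
print(translate([("PUSH_CONST", 2), ("PUSH_CONST", 3), ("ADD", None), ("RETURN", None)]))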

The most sophisticated JIT runtimes are now for JavaScript
 - they borrow heavily from Java runtimes (JVMs)
 - which borrowed heavily from LISP runtimes

JavaScript runtimes got so good they ended up on the server
 - node.js, Deno

Note that PHP & Python also use some amount of just-in-time
compilation
 - but in general their runtimes are slower than JavaScript ones

The speed difference is mostly due to effort spent in making a better runtime
 - JIT runtimes are very complex
   - and depend heavily on language semantics

Python is notorious for having a slow runtime (CPython)
 - but there are alternatives like PyPy (which aren't completely compatible)

So how can Python be used for machine learning?
 - Python code calls libraries written in C/C++ (e.g. NumPy)
   - and those libraries do most of the work
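
A small sketch of why this helps (requires NumPy; timings vary by
machine): scaling a million list elements in pure Python runs a million
interpreted operations, while the NumPy version dispatches one call
into a compiled C loop.

import timeit
import numpy as np

xs = list(range(1_000_000))
arr = np.arange(1_000_000, dtype=np.float64)

def scale_pure_python(values):
    return [v * 2.5 for v in values]    # every multiply runs as interpreted bytecode

print("pure Python:", timeit.timeit(lambda: scale_pure_python(xs), number=10))
print("NumPy (C):  ", timeit.timeit(lambda: arr * 2.5, number=10))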

Julia is an alternative to Python with a very fast runtime
 - but can be slow to start up

For complex apps, development time in higher-level languages is much lower,
and the resulting code can be comparably fast, depending on the app

If you look up performance comparisons of programming languages, you'll
see that C and C++ are near the top, but other languages show up too
 - Fortran
 - Julia
 - Lua

And other languages can be surprisingly competitive, depending on the runtime

Basically, avoid using C and C++ unless you have to
 - but choose the right language for the task
 - consider performance, but in practice other factors generally matter more
   (security, platform integration, libraries)