WebFund 2024F Lecture 18
Video
Video from the lecture for November 19, 2024 is now available:
Notes
Lecture 18
----------

Lecture will start at 12:35, will go to 1:55 PM (one hour late, as announced on Teams)
 - midterm marks will be out by next class, hopefully later today
 - next class I will discuss interviews and announce the process (also on Teams)

Assignment 3 deadline is officially tomorrow, but accepted until Thursday 11:30 AM
 - would you all like a bit more time?
 - will extend to Saturday night

Please make sure A3 passes the validator!

A4 is due the last day of class, will be coming out this weekend.

What remains?
-------------

sessions
 - tracked with cookies
 - per user session (not per user)
 - normally expire
 - server maintains state associated with the cookie, so normally keeps it in a database
 - alternatively, the cookie itself can store the data encrypted, so the client cannot parse it

local storage
 - like cookies, but allows for megabytes+ rather than kilobytes
 - but similarly cannot be counted on by the server; the client can delete it at any time
 - newer standard, so less supported (but pretty common now)
 - not sent back to the server on every request, just available for JS code to access on the client
 - useful for offline access (e.g., Google Docs)

virtual DOMs
 - the DOM is the Document Object Model, the JS-accessible representation of an HTML page
 - generated for every HTML document a browser loads that has JS
 - this is the data structure that is accessed and modified when JS code wants to change what is on the page
 - a virtual DOM is a server-side DOM-like data structure that mirrors the DOM on the client
 - the idea of a virtual DOM is to allow the server to send partial updates to the client cleanly, without the programmer having to think about DOM manipulation
 - changes are "declarative" rather than "procedural"
 - really, it just means that
   - developers make the page they want on the server
   - the virtual DOM figures out what info must be sent and how the DOM must be changed (and what operations to do) in order to keep the server-side and client-side pages in sync

A modern trend now is to serve fully-formed pages (with the dynamic parts filled in) as static documents
 - no need for JS to run on the client to get a proper view
 - but client-side JS can do incremental updates to make things dynamic

Note that none of the above involves new mechanisms; it is all built on the technologies you have already seen in this class.

WASM - WebAssembly
 - "assembly language for the web", meaning arbitrary languages can be compiled to WebAssembly and then run in a browser
 - still need JavaScript glue code to access the DOM
 - but otherwise your code can be written in C, C++, or Rust
 - dynamic languages can also be ported; WASM now supports garbage collection
   - this is relatively new
   - JavaScript runtimes have AMAZING garbage collectors (super optimized)
   - without GC support, porting Python, Ruby, or other dynamic languages to WASM would require also including an entire garbage collector, which would inflate code size and be redundant

WASM + JS is getting so good, it may be the main execution platform going forward
 - native code may mostly go away for many applications
 - big holdouts: mobile apps, embedded apps

Garbage collection
 - in any language where you don't do manual memory management (e.g., malloc/free), you'll likely have a garbage collector
 - exception: Swift/Objective-C on iOS/macOS with ARC (automatic reference counting)
 - all it is doing is keeping track of what data structures can and cannot be accessed:

    var a = [1,2,3,4,5];
    var b = a;
    a = [6,7,8];
    b = a;

After this code, a and b are equal and both refer to [6,7,8]
 - what happened to [1,2,3,4,5]?
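Before answering, the snippet can be traced as a runnable sketch (Node or Deno; plain JS, nothing assumed beyond the snippet itself):

```javascript
// Trace the references step by step.
let a = [1, 2, 3, 4, 5];
let b = a;              // b and a now point at the SAME array
console.log(a === b);   // true: one array, two references

a = [6, 7, 8];          // a points at a new array...
console.log(a === b);   // false: b still holds [1,2,3,4,5]

b = a;                  // ...now the last reference to [1,2,3,4,5] is dropped
console.log(a === b);   // true: both refer to [6,7,8]
```

Once the final `b = a;` runs, no variable can reach the original array any more.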
It became garbage: an allocated data structure which can no longer be accessed.

A garbage collector finds and de-allocates garbage automatically
 - in a language with manual memory management, the above code would result in a memory leak

Java, Python, & JavaScript all generate a lot of garbage
 - Python's garbage collection is pretty bad
   - Python itself is not fast at all
   - but it can call native libraries which are very fast (this is how AI is done efficiently in Python)
 - Java & JavaScript's garbage collectors are very good
   - not due to the languages, but due to lots of engineering

"native" code means machine code that runs directly on the CPU
 - e.g., ARM, x86-64, RISC-V

When you compile code in C, C++, or Rust, it is compiled into native machine code
 - so make sure you compile it for the CPU you will run it on!

You can also compile these to WASM, and then it can run on any system that has a WASM runtime
 - but it will run slower than native code, because it has to be translated
 - (not strictly true: JIT compilation can make WASM and Java bytecodes very fast)
 - look up HP's Dynamo if you are curious:
   https://dl.acm.org/doi/pdf/10.1145/349299.349303
 - they sped up machine code by running it through a just-in-time compiler

JavaScript language features
 - closures (well, covered partially)
 - objects, prototype-based inheritance

TypeScript
 - Deno supports it natively; browsers do not, however (TypeScript is compiled to JS)
 - it is just JavaScript + types
 - not clear if it improves performance
 - but it can catch type-based bugs
   - compile-time errors vs runtime errors

With this, you have basic knowledge of the fundamental technologies of the web.

HOWEVER
 - there's a bit more to HTML
 - there's A LOT to CSS
 - there is A LOT of JavaScript code out there that people use!
   - both client and server side!
 - there are A LOT of server-side technologies!
   - Java, C#, Ruby, Node
   - many, many frameworks and libraries built on these

So when people talk about the complexity of web development, they really are talking about all the tech that's been built on top of web standards
 - the core web standards are pretty simple
 - much of the other stuff in web standards can be mostly ignored (some is just obsolete)
   - example: XMLHttpRequest vs fetch
 - remember that the web is an *evolving* platform
   - so backwards compatibility is essential and a big burden
   - but this burden is mostly carried by web browsers, not web servers and apps

As a web developer, you mainly care about the legacy of web tech when dealing with a variety of web clients
 - many browsers running on many platforms
 - but much less fragmented than in the past, and getting better
 - if you look at MDN for any JavaScript or web API, it will tell you compatibility, and it mostly says "widely available"

However, "write once, debug everywhere" is still a thing
 - just because it runs in Chrome doesn't guarantee it will run on Firefox
 - unfortunately, most browsers today are Chrome-like underneath

There are only 3 widely used web rendering engines
 - Gecko (Firefox)
 - WebKit (Safari)
 - Blink (Chrome, Edge, Opera, Brave)

But there is hope! Anybody heard of the Ladybird browser?
 - cool open source project building a new browser from scratch
 - was originally part of the Serenity operating system, but is now independent
 - if you want to contribute to cool projects, potentially check them out

React
 - a "library for web and native user interfaces"
 - developed by Facebook/Meta
 - it is a very capable tool, but is not complete
   - need to combine it with other tech to get a full web "dev stack"
 - key features
   - declarative specification of dynamic interfaces (an idea borrowed by Apple for SwiftUI)
   - so you specify relationships between components, and the library takes care of the mechanics of their interaction
 - without something like React, you have to implement code for every little bit of dynamic behavior
   - gets complicated when there's a lot of state to manage
   - state is the hard problem in most user interfaces, including the web

So a developer just has to manage their internal app state
 - React will then update the interface as needed when the state changes

Consider a counter
 - old way: check an internal counter for changes and then update the counter displayed to the user
 - declarative way: maintain the counter (the library makes sure the interface reflects the counter's value)

In an MVC model, with a declarative interface you just manage the model and declare the views
 - the controller part is taken care of automagically

React took off in part because of React Native
 - do React for native apps
 - i.e., you write a web app and you get a native app "for free"
   - (doesn't work completely in practice, but can help with some cross-platform apps)
 - real use case: web app + mobile apps (iOS, Android)
 - if you aren't Facebook/Meta, it may be better to just maintain all three apps; it is hard to get good results otherwise

Web development's tech isn't that complex at its core, but it gets very complex in practice because the underlying problem is hard
 - how do I make a good distributed app when the UI (on the client) is separated from the core functionality (on the server)?
 - a high latency, low reliability connection makes good results hard
 - complex solutions can mitigate these issues, but at the cost of complexity
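The counter contrast above can be sketched in miniature without React itself (hypothetical names, not React's API; React adds components plus a virtual-DOM diff on top of this idea): the view is a pure function of the state, and every update goes through one state-change path.

```javascript
// Declarative UI in miniature: the view is computed from state,
// never edited by hand. (Sketch only -- render() returns an HTML
// string; React would diff a virtual DOM and patch the real page.)
let state = { count: 0 };

// Declarative: describe WHAT the UI is for a given state.
function render(s) {
  return `<button>clicks: ${s.count}</button>`;
}

// The only way to change the UI: change the state, then re-render.
function setState(patch) {
  state = { ...state, ...patch };
  return render(state);
}

console.log(render(state));                        // <button>clicks: 0</button>
console.log(setState({ count: state.count + 1 })); // <button>clicks: 1</button>
```

The "old way" would instead poll or hook every place the counter changes and patch the displayed text by hand; here the display can never drift out of sync with the state, which is the property React provides at scale.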