DistOS 2023W 2023-03-06

Notes

NASD & Tapestry
---------------

What problem(s) is NASD designed to solve?

What's the standard architecture for a file server?
 - you have a server with attached storage
 - that server reads storage then sends what it reads over the network to a client

Limitations of this approach
 - bandwidth bottleneck on the server: reading lots of disks in parallel only to copy the data to memory and then send it over the network puts a huge strain on the server's memory system
 - it also inherently limits how many clients can be served, again because of bandwidth constraints

Idea: have clients talk directly to the disks; the server just has to authorize access

Why aren't the disks just smaller servers?  Because they don't understand the concept of files
 - traditionally this abstraction is blocks, but NASD does something different. Why?
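
A minimal sketch of the contrast (hypothetical interfaces, not the NASD paper's actual API): a block device exports fixed-size numbered blocks and knows nothing about what they mean, while a NASD-style drive exports variable-length named objects and manages its own layout, so the file manager only ever deals in object IDs.

# Hypothetical illustration: block interface vs. NASD-style object interface.

class BlockDevice:
    """Traditional disk: fixed-size blocks, no idea what a file is."""
    BLOCK_SIZE = 4096

    def __init__(self, nblocks):
        self.blocks = [bytes(self.BLOCK_SIZE)] * nblocks

    def read_block(self, lba):              # lba = logical block address
        return self.blocks[lba]

    def write_block(self, lba, data):
        assert len(data) == self.BLOCK_SIZE
        self.blocks[lba] = data

class ObjectDrive:
    """NASD-style drive: variable-length objects named by ID.
    The drive decides its own layout; clients never see block numbers."""

    def __init__(self):
        self.objects = {}                   # object_id -> bytearray

    def create(self, object_id):
        self.objects[object_id] = bytearray()

    def write(self, object_id, offset, data):
        obj = self.objects[object_id]
        if len(obj) < offset + len(data):
            obj.extend(bytes(offset + len(data) - len(obj)))
        obj[offset:offset + len(data)] = data

    def read(self, object_id, offset, length):
        return bytes(self.objects[object_id][offset:offset + length])

d = ObjectDrive()
d.create("obj7")
d.write("obj7", 0, b"hello")
print(d.read("obj7", 0, 5))                # b'hello'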


What problem is Tapestry designed to solve?
 - routing when you don't want to retrieve information from fixed hosts
 - need to propagate messages to find who has what information
 - also, hosts will come and go, so need to deal with that
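
A toy sketch of the digit-by-digit routing Tapestry builds on (the original papers match ID suffixes; this sketch matches prefixes, but the mechanics are symmetric). All node IDs and tables here are invented, and real Tapestry adds neighbor maps, object-location pointers, and failure repair on top.

# Toy Plaxton-style routing: every hop moves to a node whose ID matches
# the destination in at least one more leading digit.

def shared_prefix_len(a, b):
    """Number of leading digits two IDs share."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(current_id, dest_id, neighbors):
    """Pick the neighbor that best improves the prefix match, if any."""
    best, best_len = None, shared_prefix_len(current_id, dest_id)
    for node in neighbors:
        n = shared_prefix_len(node, dest_id)
        if n > best_len:
            best, best_len = node, n
    return best

# Tiny static network: each node knows only a few others.
tables = {
    "51E2": ["4A3B"],
    "4A3B": ["42F0", "7777"],
    "42F0": ["42A7", "42AD"],
    "42A7": ["42AD"],
}

hop, dest = "51E2", "42AD"
while hop != dest:
    nxt = next_hop(hop, dest, tables.get(hop, []))
    if nxt is None:
        break   # no closer node known; in a DHT, this node "owns" the key
    print(hop, "->", nxt)
    hop = nxt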

WHEN is this problem relevant?  When is it not?

How well does this kind of architecture deal with untrusted or even malicious nodes?

Big question: do you think we still use these kinds of architectures much nowadays?  Why or why not?



AFTER GROUP DISCUSSIONS

How does regular network routing work (at a high level)?

First, you have LANs
 - broadcast domains for figuring out the IP<->MAC address mapping (via ARP)
   - send out a message saying "who has X.X.X.X?" and someone replies "I do, you can reach me at this address" (the address format is specific to WiFi, Ethernet, etc.)
 - between broadcast domains, we have routing tables
   - some hosts are routers, meaning that they know how to reach
     hosts beyond the LAN
   - routers have routing tables that map destination IP addresses (prefixes) to next hops
      - for a home router there isn't much to it: it just forwards all packets not destined for the LAN to another gateway
      - but once you get into the core of the Internet, routers have multiple interfaces and multiple ways to forward packets, and routing tables are the instructions for how to do this forwarding
      - protocols like BGP are for updating routing tables
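
A small sketch of the core routing-table operation, longest-prefix match, using Python's standard ipaddress module; the table contents here are made up. The most specific matching prefix wins, and the default route (0.0.0.0/0) catches everything else.

import ipaddress

# Toy routing table: (prefix, decision). A home router's table is roughly
# the first and last lines; core routers hold hundreds of thousands of
# prefixes, kept up to date by protocols like BGP.
routing_table = [
    (ipaddress.ip_network("192.168.1.0/24"), "deliver on the LAN"),
    (ipaddress.ip_network("10.0.0.0/8"),     "forward via 192.168.1.254"),
    (ipaddress.ip_network("0.0.0.0/0"),      "forward via 203.0.113.1"),  # default
]

def route(dst):
    """Longest-prefix match: the most specific matching prefix wins."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, action) for net, action in routing_table if dst in net]
    net, action = max(matches, key=lambda m: m[0].prefixlen)
    return action

print(route("192.168.1.7"))   # deliver on the LAN
print(route("10.9.8.7"))      # forward via 192.168.1.254
print(route("8.8.8.8"))       # forward via 203.0.113.1 (default route)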


This is all based on IP addresses, and those are uniquely associated with a host (except for NAT)

When we talk about overlay networks, we want to route based on criteria other than IP address (i.e., beyond host-based identification)

Often we want to do content-based routing, i.e., exchange information without caring who has that information, and let the network figure it out
 - kind of like how, when we communicate with an IP address, we don't think about the path the data takes to get there
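
A minimal sketch of the idea using consistent hashing, the mechanism DHTs are built on: keys and nodes are hashed into one ID space, and a key is served by the first node at or after its hash, so the requester never names a host. Node names and the key here are made up.

import hashlib
from bisect import bisect_right

def h(s):
    """Hash a name into the shared ID space."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

nodes = sorted(["nodeA", "nodeB", "nodeC"], key=h)
ring = [h(n) for n in nodes]          # node positions, sorted

def responsible_node(key):
    """The first node clockwise from the key's hash owns the key."""
    i = bisect_right(ring, h(key)) % len(nodes)
    return nodes[i]

# We ask for "recipe.txt" without knowing (or caring) who stores it.
print(responsible_node("recipe.txt"))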

Why are overlay networks and DHTs important for distributed operating systems?
 - we often want to route information correctly even when hosts aren't working
 - with an overlay network, hosts can have responsibilities that can be shifted if they don't respond, or responsibility can be distributed so failures don't matter (if 3 hosts have the info and two are down, one can still respond)
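
Extending the consistent-hashing sketch above (same h, nodes, and ring; node liveness is simulated here): store each key on several consecutive nodes of the ring, and on lookup try replicas until one answers.

REPLICAS = 3

def replica_nodes(key):
    """The REPLICAS nodes clockwise from the key's hash."""
    i = bisect_right(ring, h(key)) % len(nodes)
    return [nodes[(i + k) % len(nodes)] for k in range(REPLICAS)]

def lookup(key, is_alive):
    for node in replica_nodes(key):
        if is_alive(node):            # in reality: send a request, time out
            return f"{key} served by {node}"
    raise RuntimeError("all replicas down")

# Two of the three replicas are down; the lookup still succeeds.
down = {"nodeA", "nodeC"}
print(lookup("recipe.txt", lambda n: n not in down))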

Back to NASD: I said the point of the architecture is dealing with bandwidth constraints, but can't we solve that another way?
 - it seems really complicated, right?


Classic file server:
  - client requests data from server
  - server returns data to client

But with NASD-type systems:
  - client requests info on data from metadata server (low bandwidth)
  - server tells client where to find the data
  - client communicates with data servers to get data (high bandwidth)
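
A runnable sketch of the two flows (every name here is invented). The point to notice: in the classic flow the server touches every byte, while in the NASD-style flow the metadata server answers a tiny placement query and the bulk transfer goes straight from drive to client.

class Disk:
    def __init__(self):
        self.objects = {"obj7": b"x" * 1_000_000}   # a 1 MB payload
    def read(self, obj_id):
        return self.objects[obj_id]

class ClassicServer:
    """Classic file server: all data funnels through this one process."""
    def __init__(self, disk, names):
        self.disk, self.names = disk, names          # names: path -> object id
    def read(self, path):
        return self.disk.read(self.names[path])      # server copies every byte

class MetadataServer:
    """NASD-style file manager: answers only small placement queries."""
    def __init__(self, placement):
        self.placement = placement                   # path -> (drive id, object id)
    def open(self, path):
        return self.placement[path]                  # a few bytes, not megabytes

disk = Disk()
classic = ClassicServer(disk, {"/a/file": "obj7"})
data1 = classic.read("/a/file")                      # 1 MB through the server

mds = MetadataServer({"/a/file": ("drive0", "obj7")})
drives = {"drive0": disk}
drive_id, obj_id = mds.open("/a/file")               # low-bandwidth hop
data2 = drives[drive_id].read(obj_id)                # high-bandwidth hop, direct
assert data1 == data2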

Why can't we just have lots of metadata servers too?  Why have fewer?
  - because metadata is where concurrency issues happen a lot
  - so "scaling up" metadata access is much harder
  - and besides, it often isn't needed
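
A toy illustration of the concurrency point: creating a name is a check-then-act on shared state and must be serialized by one authority, whereas bulk reads of distinct objects need no coordination at all. (The lock here stands in for whatever concurrency control a real metadata server uses.)

import threading

namespace = {}                  # path -> object id (the shared metadata)
lock = threading.Lock()

def create(path, object_id):
    """Exclusive create: exactly one client can win a given name."""
    with lock:                  # serialize all namespace updates
        if path in namespace:
            raise FileExistsError(path)
        namespace[path] = object_id

results = []
def try_create(n):
    try:
        create("/shared/name", f"obj{n}")
        results.append(f"client {n} won")
    except FileExistsError:
        results.append(f"client {n} lost")

threads = [threading.Thread(target=try_create, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)                  # exactly one winner, never two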

Where does security come into this?
 - note that security is normally enforced at the metadata level
   (i.e., when you open a file)
 - but with a NASD-type architecture, data servers have to do security in another way (e.g., capabilities) or they don't do security at all
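
A sketch in the spirit of NASD-style capabilities (details heavily simplified; the real protocol binds more fields into the token). The file manager and the drive share a secret, so the drive can check a client's rights with one MAC computation and no policy database of its own.

import hmac, hashlib

SHARED_SECRET = b"file-manager-and-drive-secret"   # provisioned out of band

def issue_capability(object_id, rights):
    """File manager: run the access-control check (e.g., at open time),
    then hand the client a token the drive can verify on its own."""
    msg = f"{object_id}:{rights}".encode()
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()

def drive_check(object_id, rights, token):
    """Drive: recompute the MAC; no need to ask the file manager."""
    expected = issue_capability(object_id, rights)
    return hmac.compare_digest(expected, token)

cap = issue_capability("obj7", "read")         # client received this at open()
assert drive_check("obj7", "read", cap)        # read is authorized
assert not drive_check("obj7", "write", cap)   # the token doesn't grant writes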

Most of the systems we'll discuss this term assume servers are trusted
 - and sometimes even assume clients are trusted
 - because they are solutions deployed within organizations, where other
   security mechanisms protect them from the "outside"
 - thus they implement very little security within them

Traditionally we don't enforce security within the hardware of a system
 - communication between storage devices and the CPU happens "in the clear"
 - same idea, applied in a distributed system
 - the network really is the computer