COMP 3000B 2020W Assignment 2 Solutions

1. [1] While password hashes do not store passwords, they need to be protected because they can be used to guess passwords. How are password hashes safer in /etc/shadow versus being in /etc/passwd? Specifically, who has read access to each file, and how can you verify this access?

A: Password hashes are safer in /etc/shadow because /etc/passwd is world readable (permissions 0644) while /etc/shadow is only readable by root and members of the shadow group (permissions 0640). You can verify this by running "ls -l /etc/passwd /etc/shadow". Only special utilities run in the shadow group; thus, unless attackers obtain access to a privileged account, they won't be able to read user password hashes.

2. [1] When we say a program binary is setuid root, what does this say about its metadata (permissions, etc.)? Please be specific as to what it does and does not mean.

A: A program that is setuid root has the setuid bit set (chmod u+s) and is owned by root (uid 0). It says nothing about group ownership, group permissions (setgid or other bits), or the permissions of others, except that the program must be executable.

3. [3] What is the difference between running a setuid root binary and a regular binary? Specifically:
a. What system call(s) change their behaviour?
b. What are the changes?
c. How do those changes affect subsequent program behaviour?

A: execve's behaviour changes: it sets the effective uid (euid) of the process to the uid of the owner of the executable file (here, uid 0, root). This change means that the process running the program now runs with root privileges (but still has the real uid of the user who called execve), so it passes the kernel's privilege checks on subsequent system calls.

4. [2] Can regular processes (running as an unprivileged user) change their uid? What about their gid? For each, specify the system call that would be used and under what conditions it would work.

A: Processes with the uid of an unprivileged user cannot change their uid with the setuid system call or their gid using the setgid system call.
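The restriction in question 4 can be checked directly from node, which exposes these system calls via process.setuid() and related functions; a minimal sketch (the exact error code shown is what an unprivileged caller sees):

```javascript
// Sketch: process.setuid() wraps the setuid(2) system call.
// It only succeeds when the process's euid is 0 (root);
// otherwise the kernel rejects it with EPERM.
console.log(`uid=${process.getuid()} euid=${process.geteuid()}`);

try {
  process.setuid(0); // try to become root
  console.log("setuid(0) succeeded -- we must already be root");
} catch (err) {
  console.log(`setuid(0) failed: ${err.code}`); // EPERM for regular users
}
```

Running this as a regular user prints the EPERM failure; run under sudo, the call succeeds.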
These system calls can only be made by processes with an euid of 0 (root). Unprivileged users can change the group of one of their processes to another group they are a member of (according to /etc/group), but they must use setuid root programs such as newgrp to do so. Regular users in the right group (sudo on Ubuntu systems) can create processes with an euid of 0 using sudo, which is, again, a setuid root binary.

5. [2] Install node.js on your machine. How could you use child_process.execFile() to do the following shell commands in node? Note you may need additional node statements before or after this call.

a. ls -l /home

A: child_process.execFileSync("ls", ["-l", "/home"], {stdio: [0, 1, 2]});

b. ls -l /home > /tmp/homefiles.txt

A: var b = fs.openSync("/tmp/homefiles.txt", 'w+');
child_process.execFileSync("ls", ["-l", "/home"], {stdio: [0, b, 2]});
fs.closeSync(b);

c. sort unsorted.txt -o sorted.txt

A: child_process.execFileSync("sort", ["unsorted.txt", "-o", "sorted.txt"], {stdio: [0, 1, 2]});

d. sort < unsorted.txt > sorted.txt

A: var a = fs.openSync("unsorted.txt", 'r');
var b = fs.openSync("sorted.txt", 'w+');
child_process.execFileSync("sort", [], {stdio: [a, b, 2]});
fs.closeSync(a);
fs.closeSync(b);

6. [1] How can you specify the environment variables given to the program run through child_process.execFile() without changing the environment variables of node? Demonstrate using the env command.

A: child_process.execFileSync("env", [], {stdio: [0, 1, 2], env: {PATH: "/usr/bin:/bin", TEST: "A Test"}});

7. [1] How could you make node the default login shell for a user "testuser"?

A: First, add /usr/bin/node to /etc/shells. Then, run "chsh -s /usr/bin/node testuser" as the user testuser or as root. (You'll have to enter testuser's password if you do it as that user.)

8. [2] When using public key cryptography for ssh authentication, where is a user's public key stored (in what file(s) and on what systems)? Where is a host's public key stored?
Be sure to specify its original location and where any copies may be stored.

A: The user's public key is stored locally in the user's .ssh directory in a .pub file (e.g., id_rsa.pub), one file per public key (the original), and on the remote system in the .ssh/authorized_keys file, with one key per line (the copies). A remote host's public key is originally stored in /etc/ssh/ssh_host_*_key.pub (with the * being the public key algorithm, e.g., /etc/ssh/ssh_host_rsa_key.pub) on the remote host. Locally, copies of these keys are stored in .ssh/known_hosts (one key per line).

9. [2] When ssh runs commands on remote systems, does ssh directly execve the executable, or does it first call a shell which then runs the specified program? How can you verify this behavior?

A: ssh runs a shell on the remote system, and that shell then runs the command. We can verify this because we can send full shell commands to the remote system. For example, if we run:

ssh @access.scs.carleton.ca "ls . > myfiles.txt"

the files in your home directory will be put into the file myfiles.txt on access, because the remote shell interpreted the redirection. (There are other ways to verify this, of course.)

10. [1] Why do you think that fs.lstatSync() takes a filename as input while fs.fstatSync() takes a file descriptor? Explain briefly.

A: The key advantage of using a file descriptor is that you know you'll always be referring to the same file and inode; it won't change because another process deletes and replaces the file. Thus, when we can, it is good to use a file descriptor. We can't do this for fs.lstatSync(), however, as you can't ever have a file descriptor refer to a symbolic link - it will always be resolved in favor of the target file. And we can't interact with inodes directly using their inode numbers from userspace. Thus, we have to use a filename for fs.lstatSync(). (One point for realizing we couldn't use a file descriptor for fs.lstatSync().)

11. [2] What does fs.fstatSync() return when run on /dev/urandom on your system?
How can you use this return value to determine whether it is a regular file or not?

A: You should get something like the following.

Stats {
  dev: 6,
  mode: 8630,
  nlink: 1,
  uid: 0,
  gid: 0,
  rdev: 265,
  blksize: 4096,
  ino: 11,
  size: 0,
  blocks: 0,
  atimeMs: 1580998242384,
  mtimeMs: 1580998242384,
  ctimeMs: 1580998242384,
  birthtimeMs: 1580998242384,
  atime: 2020-02-06T14:10:42.384Z,
  mtime: 2020-02-06T14:10:42.384Z,
  ctime: 2020-02-06T14:10:42.384Z,
  birthtime: 2020-02-06T14:10:42.384Z }

If you assign this to a variable s, then you can find out what kind of file it is by running s.isFile() (which will return false) and s.isCharacterDevice() (which will return true). (mode specifies what kind of file/inode this is, and for device files rdev specifies the major and minor numbers of the device. But rather than parse these numbers we can use the included testing functions.)

12. [2] Create a file A with the contents "Hello world!" on a line by itself. Then, answer the following:

a. What node statement will create a symbolic link from A to B?

A: fs.symlinkSync("A", "B"); (0.5 pt)

b. What node statement will create a hard link from A to C?

A: fs.linkSync("A", "C"); (0.5 pt)

c. What are the similarities and differences in the output of fs.lstatSync() on A, B, and C? Note you may want to use methods such as isFile() on the returned Stats object. Do all of your tests in node.

A: The output for A and C will be exactly the same, as they are both names for the same inode (that is what a hard link is), which is a regular file (as told by .isFile()). B will report a symlink (as told by .isSymbolicLink()) whose value is "A" (as reported by fs.readlinkSync()). (1 pt, should show your work)

13. [2] How can you find the inode number of a file in node? How can you change the inode number of a file in node?

A: You can find the inode number using fs.fstatSync() or fs.lstatSync(); they both report inode numbers (in the ino property).
You can't change the inode number of a file directly, as UNIX has no API for that; however, you can copy the file:

fs.copyFileSync("A", "B");
fs.unlinkSync("A");
fs.renameSync("B", "A");

Note that with this copy version, the new A will not match the metadata of the old A at all, but you can read most of the old metadata using fs.fstatSync() and then set those values using fs.chownSync() (owner and group), fs.chmodSync() (permissions), and fs.utimesSync() (timestamps). Of course, updating the owner and group will depend upon the privileges node is running with. (1 pt for fstat/lstat, 1 for recognizing you can't change inodes without copying)

14. [3] Run the node commands below and answer the following questions.

var f = fs.openSync("/tmp/testFile.txt", 'w+');
fs.writeSync(f, "Hello World!\n", 20000);
fs.closeSync(f);

a. If you view /tmp/testFile.txt in less, what do you see? Why?

A: The file has 20,000 null characters (represented by ^@ in less) followed by "Hello World!" and a newline character at the end of the file. You see this because the kernel fills in a file with zero bytes when we read past data that was never written: writing at offset 20000 leaves a "hole" before it.

b. What is the logical size of /tmp/testFile.txt? What is the physical size? Obtain both values just using node functions. Explain how each was determined.

A: The logical size is 20013 bytes, as reported in the size property of the object returned by fs.fstatSync(). The physical size is 4096 bytes, which is the blocks property reported by fs.fstatSync() multiplied by 512 (because by default it reports the number of 512-byte blocks occupied by the file, as per the man page for stat). The physical size is smaller than the logical size because the file is sparse: the hole of zeros is not actually stored on disk.

c. What system call(s) is the call to fs.writeSync() making? How do you know?

A: Normally we would assume that this call does an lseek followed by a write; however, if we attach to the node process using "strace -p" we can see that it actually makes a pwrite64 system call, which allows writing at a specific offset in a file:

pwrite64(22, "Hello World!\n", 13, 20000) = 13

15.
[1] Where are the superblocks of the root filesystem of your VM? How did you find them?

A: We can find the locations of the (primary and backup) superblocks using dumpe2fs as follows:

sudo dumpe2fs /dev/mapper/COMPbase--vg-root | grep superblock

The superblocks on the class VM are at blocks 0, 32768, 98304, 163840, 229376, 294912, 819200, 884736, and 1605632; this will vary on other Linux systems.

16. [1] When you use sshfs to mount the home directory of your access.scs.carleton.ca account on your VM, what are two characteristics of the files and directories within that indicate that they are from a remote system and are not local?

A:
* The link counts of all files are 1, even for directories.
* File ownership is confused: the uid matches the uid on the remote system, but it is displayed using the uid->username and gid->group mappings defined in /etc/passwd and /etc/group on the local machine. Thus mounting a directory from access will result in numeric uids and gids being listed, as there are no corresponding users or groups locally.
* Inode numbers are small and can be sequential, starting from 1. Normal filesystems will have larger inode numbers that are much less regularly distributed. This difference is because sshfs makes up its own inode numbers.

17. [3] Create a filesystem, erase blocks, and repair it in a way that causes at least one file or directory to be put into the filesystem's lost+found directory. Give commands for creating the filesystem, populating it with files, erasing key blocks, and repairing it.
A:
dd if=/dev/zero of=theDisk bs=4096 count=100000 (make the "device")
mkfs.ext4 theDisk (create the filesystem, formatting the "device")
sudo mount theDisk /mnt (mount the filesystem)
sudo rsync -a /etc /mnt (populate the filesystem)
ls -lai /mnt/etc (to find inode numbers)
sudo umount /mnt (unmount the filesystem)
dumpe2fs theDisk (to find inode table locations)
dd if=/dev/zero of=theDisk bs=128 conv=notrunc count=1 seek= (to erase one inode, see below)
fsck.ext4 -y -f theDisk (to force disk repair)
sudo mount theDisk /mnt (remount the filesystem)
sudo ls /mnt/lost+found (to see the lost files and directories)

The tricky part is figuring out the offset of the inode to erase. In ext4, the filesystem is divided into groups, and each group has an inode table. See: https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout#Finding_an_Inode

To find the group of an inode, integer divide the inode number (minus 1) by the number of inodes in a group. The inode's offset within that group's table is the inode number (minus 1) modulo the number of inodes in the group. If we set dd's block size to the inode size, then the seek offset for dd will be:

(inode table's starting block) * (filesystem block size / inode size) + (inode's offset within the group)

By default for smaller filesystems, ext4 has a block size of 1024 bytes and an inode size of 128 bytes.

Worked example: /mnt/etc/default's inode is #32795.
(32795 - 1) / 2048 = 16, so it is in group 16.
According to dumpe2fs, group 16's inode table occupies blocks 131105-131360.
(32795 - 1) % 2048 = 26, so our inode is at offset 26 in this table.
If we use a block size of 128 for dd (the size of an inode), then we seek to 131105 * (1024 / 128) + 26 = 1048866.

We thus run:

dd if=/dev/zero of=theDisk bs=128 conv=notrunc count=1 seek=1048866

to erase inode 32795. (Note: to get full credit your commands just have to produce files in lost+found; you don't have to calculate things precisely, you can do it by trial and error.)
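The seek arithmetic in the worked example can be written as a small node calculation (the inodes-per-group count, 2048, and the inode table's starting block, 131105, are the values dumpe2fs reported for this particular filesystem; other filesystems will differ):

```javascript
// ext4 defaults for small filesystems: 1024-byte blocks, 128-byte inodes.
const blockSize = 1024;
const inodeSize = 128;
const inodesPerGroup = 2048;    // from dumpe2fs for this filesystem
const tableStartBlock = 131105; // group 16's inode table (from dumpe2fs)

const inode = 32795; // inode of /mnt/etc/default in the example

const group = Math.floor((inode - 1) / inodesPerGroup); // which block group
const index = (inode - 1) % inodesPerGroup;             // offset within its table

// dd is run with bs=128 (one inode per dd block), so convert the
// table's starting block into 128-byte units and add the index:
const seek = tableStartBlock * (blockSize / inodeSize) + index;

console.log(group, index, seek); // 16 26 1048866
```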