Remote Files

From Soma-notes

To ensure that a piece of data is accessible only to certain privileged users, it can be encrypted using cryptographic algorithms which rely on a 'key' to grant access to the data in a usable form.

Block ciphers are an example of symmetric-key cryptography, in which the same key is used to both encrypt and decrypt the data. Block ciphers are advantageous in that they are quick to encrypt/decrypt a block of data, and they work well when the data is encrypted and decrypted on the same system (or at least by the same user), or when the secret symmetric key can be securely shared with another trusted user so that they can access the data. The symmetric-key model does not lend itself well to communication applications, however: how do you communicate the secret key securely to a recipient with whom you do not yet share any keys?
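The symmetric property can be sketched in a few lines. The "cipher" below just XORs each block with the key; it is NOT secure (real block ciphers like AES use many rounds of substitution and permutation), and the key value is made up. It only illustrates that the same key, and here even the same operation, both encrypts and decrypts.

```python
KEY = b"8-byte-k"  # hypothetical shared secret, one block wide

def xor_block(block: bytes, key: bytes) -> bytes:
    # combine a block with the key, byte by byte
    return bytes(b ^ k for b, k in zip(block, key))

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # pad to a whole number of blocks with zero bytes
    plaintext += b"\x00" * ((-len(plaintext)) % len(key))
    return b"".join(xor_block(plaintext[i:i + len(key)], key)
                    for i in range(0, len(plaintext), len(key)))

decrypt = encrypt  # XOR is its own inverse: same key, same operation

ct = encrypt(b"secret data", KEY)
pt = decrypt(ct, KEY).rstrip(b"\x00")  # recovers b"secret data"
```

Anyone holding KEY can run either direction, which is exactly why sharing that key with a remote party is the hard part.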

A solution to the key-sharing problem is the public-key cryptography model. This model gives each user a unique pair of keys: a private key and a complementary public key. Public keys are usually made available via a (personal) web page or by uploading them to a dedicated key server. This model enables protocols that can ensure data confidentiality, integrity, and authentication.

To ensure the confidentiality of data transmitted from user A to user B, user A obtains user B's public key and uses it to encrypt the data. When the data is sent to user B, only user B can decrypt it with their private key (or any user who somehow has access to user B's private key).
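This flow can be sketched with "textbook" RSA using tiny primes. The numbers are illustrative only and trivially breakable; real RSA uses keys of 2048 bits or more plus padding such as OAEP. The point is only the asymmetry: A encrypts with B's public key, and only B's private key undoes it.

```python
# B generates a key pair from two secret primes
p, q = 61, 53
n = p * q                      # modulus, part of both keys
e = 17                         # public exponent: B's public key is (e, n)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: B's private key is (d, n)

m = 42                         # A's message, encoded as a number < n
c = pow(m, e, n)               # A encrypts with B's PUBLIC key
recovered = pow(c, d, n)       # B decrypts with the PRIVATE key -> 42
```

Note that `d` never leaves B's machine; `(e, n)` is what gets published on a web page or key server.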


The most straightforward way to check the integrity of data is a bit-wise comparison against a trusted source (testing each individual bit for sameness); this is obviously inefficient for large data sets. A slightly better solution is to attach a short check value to the data when it is transmitted, such as a parity bit, a CRC (cyclic redundancy check), or the output of a hash function, all of which strive to detect small discrepancies between copies of the data.
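The simplest of these checks, a single parity bit, can be sketched as follows: the extra bit records whether the number of 1-bits in the data is even or odd, so any single flipped bit changes the parity and is detected (two flipped bits would cancel out and go unnoticed).

```python
def parity(data: bytes) -> int:
    # 0 if the total number of 1-bits is even, 1 if odd
    return sum(bin(b).count("1") for b in data) % 2

msg = b"hello"
sent_parity = parity(msg)                     # transmitted alongside the data

corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]  # one bit flipped in transit
# parity(corrupted) now differs from sent_parity, exposing the error
```

This is why parity catches single-bit discrepancies but nothing stronger; CRCs and hashes extend the same idea to wider check values.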

Unfortunately, regular hashes tend to be easy to imitate: it is not difficult to create a message m' whose hash H(m') is equal to the hash H(m) of message m. Secure hashes are special cryptographic hashes for which finding such a collision is theoretically infeasible with modern technology in the remaining lifetime of the universe (or, less accurately, "impossible to break"). So if a user is given a piece of data and a secure hash of that data, they can verify its integrity by computing the secure hash themselves and comparing it with the hash provided with the data.

Often this is how software companies enable users to verify the integrity of data downloaded from their servers. The hash of the file (currently usually computed using the SHA-1 or MD5 algorithm) is published near the link to download the file; after downloading the file, a user can compute its hash and compare it against the one given by the publisher of the software.

Availability (Authentication)

A digital signature is a type of secure hash that not only verifies that a piece of data has not changed (i.e., has not been tampered with), but also assures that the data was generated by the entity associated with the signature.

A certificate is a form of digital signature, and is really just a public key plus metadata. An example usage is a Microsoft Windows user downloading a patch for their OS from Microsoft. Microsoft signs the patch by encrypting its hash with their private key. This encrypted hash and Microsoft's public key are packaged as a certificate and bundled with the patch to be downloaded. The user's OS then computes the hash of the patch itself, uses the public key to recover the hash from the signature, and compares the two.
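The patch-verification flow can be sketched with the same toy RSA numbers as before (tiny primes, "textbook" RSA with no padding, illustrative only; real code signing uses full-size RSA or ECDSA). The publisher encrypts the hash with the private key; the verifier undoes that with the public key and compares against a locally computed hash.

```python
import hashlib

# publisher's key pair (toy-sized)
p, q = 61, 53
n, e = p * q, 17                       # public key (e, n), shipped with the patch
d = pow(e, -1, (p - 1) * (q - 1))      # private key, kept secret by the publisher

patch = b"os-patch contents"
# truncate the real hash into the toy modulus so the numbers fit
h = int.from_bytes(hashlib.sha256(patch).digest(), "big") % n
signature = pow(h, d, n)               # "encrypt" the hash with the PRIVATE key

# verification on the user's machine:
recovered = pow(signature, e, n)       # recover the hash with the PUBLIC key
local = int.from_bytes(hashlib.sha256(patch).digest(), "big") % n
valid = (recovered == local)           # signature checks out
```

Forging a signature means producing `signature` without `d`, which maps directly onto the attack options discussed next.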

In order to break this authentication scheme, a malicious user would need to either obtain (steal) the private key, crack the public-key algorithm (i.e., derive the private key from the public one), or defeat the secure hash function (produce a malicious file whose hash matches the legitimate one). Given the importance of the private key in this system, it ought to be secured itself. Since the key is just a file, this can be done by encrypting it with a block cipher.

An unresolved issue with the public-key cryptography infrastructure is key management. How does a user know that what they believe to be the correct key is in fact the valid key? A recipient can never be perfectly certain of a key's authenticity, meaning the degree of trust in its authenticity is only as good as the degree of trust in the key's source.

Trusted Computing

The goal of trusted computing is to prevent unauthorized (malicious) activity on a system. The real focus is on not trusting the user; the user is most frequently the mechanism that allows malicious code to enter a system. The solution lies in closing any holes in the platform that might be used as entry points.

Entities vulnerable to intrusion include applications, the OS itself (including device drivers), the boot-loader, potentially the hypervisor, and ultimately the hardware. Implementing restrictions at the application level is not universal enough; it is too difficult to implement them for every application independently. Instead, cryptographic checks must be made at each level of the system as it is loaded, creating a chain of trust (is the BIOS signed properly? -> is the boot-loader signed properly? -> ...).
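The chain-of-trust idea can be sketched as a loop: each stage carries the expected hash of the next stage and refuses to hand off control if the measured hash differs. The stage names and contents below are hypothetical stand-ins for real firmware images.

```python
import hashlib

def measure(image: bytes) -> str:
    # "measuring" a stage = computing a secure hash of its image
    return hashlib.sha256(image).hexdigest()

stages = [b"bios image", b"boot-loader image", b"kernel image"]
expected = [measure(s) for s in stages]   # recorded when the system was provisioned

def verify_chain(images, expected_hashes) -> bool:
    for image, exp in zip(images, expected_hashes):
        if measure(image) != exp:
            return False                  # measurement mismatch: halt the boot
    return True                           # every link in the chain checked out

trusted = verify_chain(stages, expected)                              # True
tampered = verify_chain([stages[0], b"evil loader", stages[2]], expected)  # False
```

In a real system the expected hashes are themselves signed, and the root of the chain must sit somewhere that software cannot rewrite, which is the role the TPM plays below.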

Remote Attestation

The major design purpose of trusted computing is remote attestation: detecting changes to a user's system from afar. Commodity OSs generally do not support a trusted computing environment. Microsoft attempts to simulate a trusted system when it performs its Windows Genuine Advantage (WGA) check. Windows uses the Trusted Platform Module (TPM), a tamper-resistant chip in the system's hardware with its own unique public/private key pair encoded into the chip when it is manufactured. The TPM has full privileged access to all memory in the system, and its contents are secret and cannot be modified, so it can be used with some level of confidence to verify digital signatures. Although this allows the system to develop a chain of trust, it is not perfect and can be compromised. For example, gaming consoles such as the Xbox 360 are primary adopters of TPM (and similar) mechanisms, both to enable copy-protection measures and partly to prevent cheating: users who cheat while playing online usually do so by modifying a game to enable previously unavailable functionality, and TPMs are capable of preventing such modifications.

Implementing secure hash checks at each layer of a system as it is loaded is inherently complicated. Once the system is operating at a highly abstracted level, the mechanism must determine which pages in memory are application code, which are data, which pages belong to which processes, and so forth. Such complexity can lead to vulnerabilities in the protection mechanism itself. For example, with the Xbox 360 again, a user can create a saved-game file that contains code which causes a buffer overflow in the Xbox. If the console allows users to load arbitrary saved-game files, the user could load their malicious file and take over root control of the system when it attempts to handle the error. The rest of the code in the file could then, say, load a Linux OS onto the console and present a terminal.

Blu-ray/HD DVD

Blu-ray and High Definition DVD discs conform to the Advanced Access Content System (AACS) standard of content distribution and digital rights management. The specification demands that arbitrary applications cannot access the protected data when it is in unencrypted form. It implements a different secure hash checking model in place of the TPM. This forces the OS to be divided into trusted and untrusted components. Even processes with full privileges are not necessarily able to access the memory holding encrypted data. The trusted portion of the OS is what is trusted by the movie studios who publish their movies in Blu-ray or HD DVD format.

The system is still vulnerable at the hardware level; a user could potentially write code that interacts with the video card buffer to simply access the protected data as it is buffered to the output device. Device manufacturers try to make this as difficult as they can by obfuscating the device driver code, which makes it much harder to reverse engineer (and reverse engineering must be done before developing code that can interact with the device).

Media players that handle AACS content are extremely demanding on system resources. For example, Windows Vista performs multiple secure hash checks for each frame of content played. All these hash checks can monopolize the processor, so Microsoft made a design decision to scale back networking while AACS content is being played (reducing network traffic locally by approximately 90%). Microsoft made another design decision with Vista that some claim makes the OS less secure overall. In Vista, MS implemented a 'kill switch': if the OS thinks the content being played is pirated (even if it is not), Vista will go into a severely reduced usability mode. This example illustrates that software developers need to consider all possible ways their code might be used, particularly ways in which it is not intended to be used, to avoid potential security vulnerabilities. However, this is often not the case in the software development industry currently. With Vista, an attacker could potentially disable a target system simply by having it play a piece of pirated media, a simple task compared to obtaining root access.

To summarize all of the tools mentioned above: the basic usage of computers is to copy information; the infrastructures discussed are used to control the copying of information in computer systems, or in another sense, control the flow of information in computer systems.

Distributed File Systems


The simplest form of a distributed file system involves just manually transferring files between hosts using ftp, scp, or something similar. This isn't a truly distributed system, however. Ideally, local changes should be reflected on all other participating systems without explicit action required by the user.

The obstacles to implementing a distributed file system are not security related: file systems can be encrypted locally and communication can be secured. The problems are functional in nature, such as defining the namespace of the system (is it uniform?) and organizing the namespace (are file names based on location? user? hash? some abstract entity?).

The World Wide Web is an example of a distributed file system. It implements a uniform namespace, however the organization of its namespace is quite varied.

Posted by: A Krohn [100260483]