15-410 HW2 solutions (Fall, 2004)



Question 1 - Public Key Practicum

If you did this problem correctly, in a couple of hours your hw2 directory should contain a $USER.message.decrypted file. If it doesn't, check whether there is a $USER.ERROR file. If there isn't one either, a common problem is that you used file names other than the ones specified in the assignment, the grading script wasn't able to guess what you meant, and the grader hasn't yet had time to intervene manually.

By the way, your encrypted messages to us were read and, generally, appreciated.


Question 2 - Virtual Memory

The motto of this question is "Nobody's perfect!". It's pretty clear how it applies to the data structure described, since it can't hold enough mappings for a 32-bit address space. This means that processes with fully-occupied address spaces, or processes using "too many" pages which hash to the same bucket, will incur translation-missing faults even if all of their pages are backed by frames already in memory! The page fault handler will then need to evict some translation from the bucket, fill in the currently-required one, and restart the instruction.
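To make that eviction path concrete, here is a minimal sketch in C (the entry layout, bucket size, and valid-bit encoding are invented for illustration; the assignment didn't pin them down) of what the fault handler might do when the page is resident but its translation isn't in the table:

    #include <stdint.h>

    #define PMES_PER_BUCKET 4          /* assumed bucket size */
    #define NBUCKETS        1024       /* assumed number of buckets */

    typedef struct {
        uint32_t vpn;                  /* virtual page number */
        uint32_t pfn;                  /* physical frame number */
        uint32_t flags;                /* bit 0 = valid (assumed encoding) */
    } pme_t;

    typedef struct {
        pme_t slot[PMES_PER_BUCKET];
    } bucket_t;

    static bucket_t table[NBUCKETS];

    static unsigned hash(uint32_t vpn) { return vpn % NBUCKETS; }

    /* Page-fault path: the page is resident in frame pfn, but its
     * translation was missing from the table.  Install it, evicting a
     * colliding entry if the bucket is full, then restart the instruction. */
    void fill_translation(uint32_t vpn, uint32_t pfn, uint32_t flags)
    {
        bucket_t *b = &table[hash(vpn)];
        int victim = 0;                /* default: evict slot 0 */

        for (int i = 0; i < PMES_PER_BUCKET; i++)
            if (!(b->slot[i].flags & 0x1)) {   /* prefer an empty slot */
                victim = i;
                break;
            }

        b->slot[victim].vpn   = vpn;
        b->slot[victim].pfn   = pfn;
        b->slot[victim].flags = flags | 0x1;
        /* ...then return from the fault handler and re-run the instruction. */
    }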

But wait...how many processes have fully-occupied address spaces? Here are some actual process sizes (virtual address space and amount resident in RAM) from my FreeBSD laptop.

program     virtual (KB)  resident (KB)
opera            118,000         54,988
XFree86           38,252          4,920
xterm              3,996            804
ratpoison          1,984            524
syslogd              944            252
usbd                 916             84
moused               912             80

So in many cases the amount of user address space being mapped looks a lot more like what runs on a 15-410 kernel than you might have expected. What about the kernel address space? Here there is good news and bad news...it's typical for kernel address spaces to be much larger than 16 megabytes. But most programs while running in kernel mode use only a tiny fraction of kernel virtual memory... as just one example, it's very rare for one thread to reference another thread's kernel stack. Also note that it's very easy for the page fault handler to insert missing kernel-page translations, since there is no need to look up the missing physical address in any sort of table--it can almost certainly be computed as falling into one of a tiny number of direct-mapped regions (in 15-410 kernels, that number is one). Overall, the bad news is that the proposed structure can't map a packed address space all at the same time, but this is a rare enough requirement that maybe it's only an imperfection.
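To see just how cheap that kernel-side fill-in is, here is a hedged sketch assuming a 15-410-style kernel whose single direct-mapped region starts at virtual (and physical) address zero; the constants are illustrative:

    #include <stdint.h>

    #define PAGE_SHIFT  12
    #define KERNEL_BASE 0x00000000u   /* assumed: kernel virtual == physical */
    #define KERNEL_SIZE (16u << 20)   /* assumed: one 16 MB direct-mapped region */

    /* Frame number backing a kernel virtual address, or -1 if the address
     * is outside the direct-mapped region.  No table lookup needed. */
    int kernel_vaddr_to_pfn(uint32_t vaddr)
    {
        if (vaddr - KERNEL_BASE >= KERNEL_SIZE)
            return -1;
        return (int)((vaddr - KERNEL_BASE) >> PAGE_SHIFT);
    }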

Is there anything good about this approach? If we focus on small-ish sparsely-populated user address spaces and linearly computed kernel address spaces, we see that frequently a small hash table (one or two pages) will be able to map an arbitrarily-shaped sparse address space (e.g., one with stuff at the bottom, stuff at the top, and a sprinkling of stuff in the middle, such as text/data, stack, and several shared libraries).

Furthermore, any time the kernel feels that too many translation-missing faults are happening, it can copy the entries from the existing hash table into a larger one, as sketched below. The proposed mapping approach is likely to work well in embedded-system environments (PDAs, phones, ATMs, vending machines...), which typically have limited amounts of physical memory and small process address spaces (kernel and user).
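Here is a sketch of that growth step, reusing the invented pme_t/bucket_t layout from the earlier sketch (alloc_zeroed_pages() is a hypothetical page-grained allocator):

    extern void *alloc_zeroed_pages(unsigned bytes);  /* hypothetical allocator */

    /* Copy every valid translation from the old table into a larger one.
     * Reuses the pme_t/bucket_t layout and valid bit sketched earlier. */
    bucket_t *grow_table(bucket_t *old, unsigned old_n, unsigned new_n)
    {
        bucket_t *new = alloc_zeroed_pages(new_n * sizeof(bucket_t));

        for (unsigned b = 0; b < old_n; b++)
            for (int i = 0; i < PMES_PER_BUCKET; i++) {
                pme_t *p = &old[b].slot[i];
                if (!(p->flags & 0x1))
                    continue;                        /* empty slot */
                bucket_t *nb = &new[p->vpn % new_n]; /* rehash */
                for (int j = 0; j < PMES_PER_BUCKET; j++)
                    if (!(nb->slot[j].flags & 0x1)) {
                        nb->slot[j] = *p;
                        break;
                    }
                /* If the new bucket is somehow full, the entry is dropped;
                 * it will simply fault in again later. */
            }

        /* Point the fault handler (and any hardware walker) at the new
         * table, then free the old one. */
        return new;
    }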

Even if there aren't false translation-missing faults, the kernel can switch to a larger hash table to decrease the time a successful lookup takes: the x86 two-level map means that translations missing from the TLB will require two memory reads, whereas a large enough hash table will frequently require only one (when the translation occupies the first PME in the bucket). Since the page fault handler has complete control over the hash table, it can arrange for important translations to always be present in their buckets and/or to occupy early PMEs within their buckets.
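For example, a fault handler that wants a particular translation found on the very first probe could shuffle its bucket like this (same invented layout as before):

    /* Move the translation for vpn into slot 0 of its bucket so a lookup
     * finds it on the first probe.  Same assumed layout as above. */
    void promote_translation(uint32_t vpn)
    {
        bucket_t *b = &table[hash(vpn)];

        for (int i = 1; i < PMES_PER_BUCKET; i++)
            if ((b->slot[i].flags & 0x1) && b->slot[i].vpn == vpn) {
                pme_t tmp  = b->slot[0];
                b->slot[0] = b->slot[i];
                b->slot[i] = tmp;
                return;
            }
    }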

In summary, for small or medium-sized "mapping working sets", the proposal allows the kernel to trade off mapping space against mapping time in ways which the x86 structure cannot.

Aside from the space/time tradeoffs of the two approaches, one advantage of the x86 map is that it is possible for a kernel to rely on a page directory and set of page tables, including the "spare" bits, as the sole encoding of a process's region list, VM flags, etc. However, as I believe I argued in class, portable operating systems won't want to do that, so this advantage may be theoretical rather than real.

By the way, the proposed mapping structure is a stripped-down version of what many PowerPC processors use. The translation unit on G4-series CPUs uses eight PMEs per bucket, two hash functions (the bucket indicated by the secondary hash is probed if none of the PMEs in the primary hash's bucket provides a translation), and a more-complicated way of specifying the number of buckets, but the result is still a partial function. Some PowerPCs support software-managed TLBs, i.e., there is no hardware mapping unit and all TLB misses result in faults. On other models (including, I believe, G5s), the kernel can choose between the hardware-interpreted hash table and software TLB loading.
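In that spirit, a two-probe lookup might look like the following sketch; the secondary hash and the entry format are stand-ins, not the real PowerPC encoding:

    /* Two-probe lookup in the PowerPC spirit: try the primary bucket, then
     * the secondary one.  hash2() and the layout are invented stand-ins,
     * not the real PowerPC hash functions or PTE format. */
    static unsigned hash2(uint32_t vpn) { return (~vpn) % NBUCKETS; }

    int lookup_translation(uint32_t vpn, uint32_t *pfn)
    {
        unsigned probe[2] = { hash(vpn), hash2(vpn) };

        for (int p = 0; p < 2; p++) {
            bucket_t *b = &table[probe[p]];
            for (int i = 0; i < PMES_PER_BUCKET; i++)
                if ((b->slot[i].flags & 0x1) && b->slot[i].vpn == vpn) {
                    *pfn = b->slot[i].pfn;
                    return 1;
                }
        }
        return 0;    /* miss: take a translation-missing fault */
    }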

Meanwhile, models before the G5 have a totally separate translation mechanism, called BAT, which can map a small number (e.g., 8) of huge extents of sequential physical memory into the virtual address space. Mapping an entire 16-meg 15-410 kernel would be accomplished with one BAT register. Since BAT mappings and hash-table mappings can be combined, it is common for the kernel to be mapped with one or two BATs while user space is mapped via the hash table mechanism. Whee!
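Conceptually, each BAT register describes one big extent; here is a hedged software model of the check (the field names and the count of eight entries are illustrative, not the real register format):

    #include <stdint.h>

    /* Software model of BAT-style translation: each entry maps one large,
     * contiguous extent.  Field names and the count of eight entries are
     * illustrative, not the real PowerPC register encoding. */
    typedef struct {
        uint32_t vbase;    /* virtual base of the extent */
        uint32_t pbase;    /* physical base of the extent */
        uint32_t len;      /* extent length in bytes */
        int      valid;
    } bat_t;

    static bat_t bats[8];

    int bat_translate(uint32_t vaddr, uint32_t *paddr)
    {
        for (int i = 0; i < 8; i++)
            if (bats[i].valid && vaddr - bats[i].vbase < bats[i].len) {
                *paddr = bats[i].pbase + (vaddr - bats[i].vbase);
                return 1;
            }
        return 0;    /* no BAT hit: fall back to the hash-table mechanism */
    }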

If you were stumped by this question, you might want to go back to the lecture slides on VM. The mapping challenge isn't figuring out how to build a data structure which can map a fully-occupied address space in the most straightforward way...the game is a different one.


Question 3 - Storage

Spinning disk platters twice as fast will halve rotational delay and roughly double the raw transfer rate, but will not reduce seek latency. This might be good for "streaming" applications, such as high-resolution pay-per-view movie servers. This approach probably wouldn't do much for seek-intensive loads such as transactional databases or desktop PCs.
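For a rough sense of scale (the 7,200 RPM baseline is an assumption, not something the question specified), doubling the spin rate halves both the per-revolution time and the average rotational delay:

    #include <stdio.h>

    /* Back-of-envelope: average rotational delay is half a revolution.
     * The 7,200 RPM baseline is an assumption, not part of the question. */
    int main(void)
    {
        double rpm[2] = { 7200.0, 14400.0 };

        for (int i = 0; i < 2; i++) {
            double ms_per_rev = 60.0 * 1000.0 / rpm[i];
            printf("%6.0f RPM: %.2f ms/rev, avg rotational delay %.2f ms\n",
                   rpm[i], ms_per_rev, ms_per_rev / 2.0);
        }
        return 0;    /* prints ~8.33/4.17 ms and ~4.17/2.08 ms */
    }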

In the other direction, spinning the disk twice as fast will cause the disk to be noisier and consume more power. The bearings will probably wear out twice as fast (or faster), too. Again this is fine for a movie server but not so fine for databases or desktops.

Adding a 5-second power reserve would allow a disk to flush some number of sectors from a RAM-based write buffer out onto the platters. This would make it legal, or at least defensible, for the disk to declare to the OS that operations had completed, which would in turn allow it to do more internal reordering-based optimizations. This wouldn't help a video server at all, since such servers don't typically write anything, and wouldn't help data-center servers much, since they often have battery backup units. Servers are also more likely to run sophisticated disk-scheduling mechanisms inside the OS or database (application), reducing the utility of having the disk do that. But this approach might be useful in a desktop environment, especially in places where electricity is unreliable.
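As a rough sanity check on what 5 seconds buys (the transfer rate and per-request positioning time below are assumed, era-appropriate guesses):

    #include <stdio.h>

    /* Back-of-envelope: what can a 5-second reserve flush?  The transfer
     * rate and per-request positioning time are assumed guesses, not
     * figures from the assignment. */
    int main(void)
    {
        double reserve_s = 5.0;
        double seq_mb_s  = 30.0;   /* assumed sustained sequential write rate */
        double random_ms = 10.0;   /* assumed seek + rotation per scattered write */

        printf("sequential: ~%.0f MB\n", reserve_s * seq_mb_s);
        printf("scattered:  ~%.0f writes\n", reserve_s * 1000.0 / random_ms);
        return 0;    /* ~150 MB if sequential, ~500 writes if scattered */
    }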

Which is better? Personally, I think the second one is more interesting, but then again I'm not in management at a successful hard-disk manufacturing company...


For further reading...

Here is last semester's homework page.

