15-410 HW2 solutions (Fall, 2003)
Question 1 - Public Key Practicum
If you did this problem correctly, within a couple of hours your hw2 directory should contain a $USER.message.decrypted file. If not, check whether there is a $USER.ERROR file. If there is neither, a popular problem is that you used file names other than the ones specified in the assignment, the grading script wasn't able to guess what you meant, and the grader hasn't yet had the time to manually intervene.
By the way, your encrypted messages to us were read and, generally, appreciated.
Question 2 - Being nice to users
Consider what happens in your kernel if physical memory is exhausted and a thread calls new_pages(). Hopefully, it should receive an error code and continue to run. But what if instead the thread had tried to grow its automatic stack region (by, for example, a PUSHL instruction)? There is no way to "return an error" for stack growth, nor can the thread continue user-mode execution before the stack has grown.
What should your kernel do in this case? One simple and popular solution would be to kill the "offending" thread. Aside from being easy, this might result in physical memory being freed, thus making the system less likely to kill other threads in the future.
Now consider a kernel architect who wishes to improve the service given to user programs, and decides to pursue the approach of temporarily suspending the execution of any thread which needs more automatic-stack memory until some memory becomes available.
Briefly but convincingly explain a horrible disaster that could result from a simplistic implementation of this approach.
In a word, deadlock. Each waiting thread by definition holds some number of stack pages while waiting for "other threads" to free some up for it. The automatic stack region is special in that its pages are not typically freed (can you think of two system calls which free pages in the automatic stack region?), making this an unusually strong case of "hold and wait". It is easy to show that the other deadlock conditions also apply, in a way that they don't necessarily hold for, e.g., new_pages() (why not?).
What do you think should be done to address the issue of automatic stack growth when the system is out of memory? Briefly outline two options and then argue for one. Three medium-sized paragraphs should suffice.
First, a popular non-option: "page to disk". This typically improves the situation by only a small constant factor. The traditional BSD heuristic for sizing the swap partition is twice the size of physical memory. This answer defers the problem but doesn't really address it.
Here are some options:
Other options are possible as well.
Question 3 - O_APPEND
If you examine the documentation for the Unix open() system call, you will observe that there is an optional flag called O_APPEND which modifies the behavior of all write() system calls applied to the file descriptor returned by the open() call. Append-mode opens are often used when multiple programs wish to add records to a system log file.
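To make the behavior concrete, here is a minimal sketch of an append-mode open (the file name "example.log" and the helper name are illustrative, not from the assignment). With O_APPEND, the kernel atomically seeks to end-of-file before each write(), so concurrent appenders do not clobber each other's records.

```c
/* Sketch: append-mode logging.  File name is illustrative only. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int log_two_records(void) {
    /* O_APPEND: every write() on fd atomically lands at end-of-file. */
    int fd = open("example.log", O_WRONLY | O_CREAT | O_TRUNC | O_APPEND, 0644);
    if (fd < 0)
        return -1;
    const char *r1 = "record one\n", *r2 = "record two\n";
    if (write(fd, r1, strlen(r1)) < 0 || write(fd, r2, strlen(r2)) < 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```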
Imagine you were porting a program which was designed to use O_APPEND to a version of Unix which does not support that flag. Briefly sketch the implementation of a C function append_write() which has the same signature as write() (i.e., takes a file descriptor, buffer address, and length, and returns a length), and which simulates, to some degree, the behavior of write() applied to a file descriptor obtained via an O_APPEND call to open(). There are at least two reasonable answers to this question.
One reasonable answer defines append_write() in terms of lseek() and write(). If you provided this solution, you should also have explained why it doesn't exactly work: a thread can be preempted after it has used lseek() to reach the end of the file but before it can call write(), in which case two programs will write to the same area of the file and information will be lost.
A second reasonable answer extends the first by involving flock(), lockf(), or fcntl(...,F_SETLK,...) to ensure that the scenario described above does not happen.
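A sketch of the second answer, here using flock() (one could equally use lockf() or fcntl(...,F_SETLKW,...)); note that flock() locks are advisory, so this only protects against other writers that also take the lock:

```c
/* Sketch: append_write() made safe among cooperating writers by
 * holding an exclusive advisory lock across the seek-then-write pair. */
#include <sys/file.h>
#include <sys/types.h>
#include <unistd.h>

ssize_t append_write(int fd, const void *buf, size_t len) {
    if (flock(fd, LOCK_EX) == -1)      /* block until we own the file */
        return -1;
    ssize_t n = -1;
    if (lseek(fd, 0, SEEK_END) != (off_t)-1)
        n = write(fd, buf, len);       /* no cooperating writer can intervene */
    flock(fd, LOCK_UN);                /* release even if write() failed */
    return n;
}
```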
A third (crazy) answer might involve a central "serializer daemon" to which you would sendmsg() both the file descriptor and the data. Not so crazy would be something like the "syslog" daemon, to which you send just data (it opens each log file only once, and by convention other processes do not write to the system log files it maintains).
Of the concurrency control primitives you implemented for Project 2, which one(s) do you think a Unix-like kernel is most likely to employ internally while implementing O_APPEND? Explain, using a few sentences (or, if you wish, a few paragraphs, but a long answer should not be necessary).
Here you can justify nearly anything... since O_APPEND writes naturally involve block allocations, it is easy to imagine mutex/condition operations all over the place (e.g., to lock the free-block data structure). Since log files are (one hopes) read as well as written, it is easy to imagine readers/writers locks being used to synchronize ownership of a file during the course of read() and write() operations.
For further reading...
Here is last semester's homework page.
[Last modified Friday April 30, 2004]