Carnegie Mellon

Computer Science Department
15-410 HW2 solutions (Spring, 2008)
Question 1 - Public Key Practicum
If you did this problem correctly, in a couple of
hours your hw2 directory
should contain a $USER.message.decrypted file.
If not, check to see if there is a
$USER.ERROR file.
If there isn't one either, a popular problem is that you used
file names other than the ones specified in the assignment,
so the grading script wasn't able to guess what you meant,
and the grader hasn't yet had time to manually
intervene.
By the way, your encrypted messages to us were
read and, generally, appreciated.
The message with the best technical content was
Robert Enderlein's (though we are officially too
paranoid to trust it).
The funniest message was Rebecca Levin's
(we hope you find your grandmother,
or Ellen DeGeneres does).
Of course we're not going to reveal the messages--they're
secret!
Question 2 - Virtual Memory
Part A
Imagine that a program successfully stores a value into some
memory page but later receives a page fault while trying to
retrieve that value. Consider the case where the page fault
exception is raised because the "present" bit is turned off
in the relevant page directory entry. Assume that
the address is still logically valid, i.e., the program is
entitled to fetch from that address; that the kernel will fix the
problem without intervention from the program; and that the
instruction will eventually succeed when retried.
This situation strongly suggests that the kernel
embodies a particular feature, design decision,
or practice. Explain--why was it sensible for
this fault to happen?
If we take a page fault because a page-directory present
bit is off, this suggests that the page table is not
present in memory. This can happen if the kernel evicts
page tables from RAM. Since a complete set of page tables
can occupy up to 4 megabytes per address space on a 32-bit
x86 machine, this may
be useful--especially if many of the pages belonging
to the address space have previously been evicted.
Consider a large program such as a web browser which is
idle for a long time. The program's pages will become
not-recently-used and will probably be evicted, clearing
present bits in the relevant page tables. Once all
present bits in a particular page table are cleared,
the remaining information in the page table (perhaps
the reserved-for-OS bits, or perhaps disk sector addresses)
is not of immediate use, so the page table can be evicted
and the corresponding present bit in the page directory
can be cleared.
Part B
Briefly sketch the sequence of operations the kernel will
take in response to this fault.
- The kernel will classify the fault, doing whatever work
is necessary to convince itself that the address is legal
(e.g., consulting a region list).
- The kernel will examine the page-directory entry and
find the present bit off.
- The kernel will allocate a frame--hopefully from a
buffer of unused frames,
but potentially by going to sleep
waiting until a frame is released by some other
process or until the page-evictor daemon successfully
writes a page to disk.
- The kernel will "somehow" determine which sectors of
which disk(s) contain the missing page table (we will
gloss over this part).
- The kernel will queue a disk read, which might,
as a side effect, send a command to the disk if the
queued read command ends up being the first entry in the queue.
- The kernel will suspend execution of this thread
until the I/O completes--this involves activating a
different thread.
- Later, a disk interrupt will arrive, and the
suspended thread will be marked eligible for
execution.
- The awakened thread will fill in the page-directory
entry (note that the page table is not likely to occupy
the same frame that it did before eviction), including
setting the present bit.
- At this point the thread might return to user mode,
which would restart the instruction which had incurred
the fault...
but there is an argument that the thread should do further
work before declaring victory back to user mode... can you
see what the argument is?
Question 3 - Non-blocking send()
Consider the example presented on slide 20
of the "IPC/RPC" lecture.
Note that the send() operation in
this example is "non-blocking",
meaning that it is not defined as blocking until
a matching receive() operation is posted
(for those of you doing Project 4, this is not the same
send() operation as the one you are implementing).
Part A
Draw a picture of a deadlock which may result in
this scenario.
You may wish to make explicit assumptions about the
number of objects in the system (e.g., assume a
certain number of processes).
In this system send() operations do not
necessarily block until a matching receive()
is issued.
Thus send() operations must buffer
messages--this is a popular practice,
embodied in systems as diverse as Mach IPC and
Unix pipes.
However, buffering messages requires buffer allocation,
which may itself block (slide 19 of the lecture explicitly
says that send() operations block due to
buffer allocation).
Consider a system with two processes and one buffer.
- Process 2 allocates the buffer, inserts a message
into the buffer, and queues the buffer onto Process 1's
queue. Now P2's
send() operation is complete.
- Process 2 begins its
receive() operation,
observes that no buffer is queued for it, and blocks.
- Process 1 attempts to allocate the (sole) buffer for its own
send(). But the buffer is already in use: it is sitting on
Process 1's own receive queue. Process 1 is blocked on the buffer
pool, and the only buffer in the pool is queued for Process 1
itself.
Is Process 2 in a deadlock? It seems as if the
answer should be "yes"...
but observe that nobody is waiting for Process 2...
Is a system with two processes and one buffer "realistic"?
Not really--but you can't fix this IPC design with a rule like
"make sure you have one buffer for every process".
That would be useful if every process would send()
only once before calling receive(),
but as soon as you relaxed that rule, along would come a set of
processes with a usage pattern like this.
send(P1, ...);
send(P2, ...);
send(P3, ...);
receive(P1, ...);
receive(P2, ...);
receive(P3, ...);
Part B
Which deadlock management approach would you suggest
be applied to this system? Why?
Good question! There are several interesting answers,
most of which are unsatisfying.
- Deadlock prevention
- Ban exclusive resources - but overwriting messages already in a buffer
seems like a dead end.
- Ban hold & wait - this would require each process to
receive() all pending messages before being
allowed to send() one... this sounds weird.
- Ban non-preemptible resources - It would be possible to
escape from deadlock by always hijacking a random buffer
instead of blocking. But it would be hard to write code
relying on such an IPC infrastructure, because sending
a message would provide no guarantee it would arrive.
- Ban wait cycles - In some circumstances it would be possible
for applications to do this... if we partition processes into
clients and servers, and require that clients send requests only
to servers, and that servers don't send requests to each other
in a cyclic way, then the fact that clients always
receive() promptly after just one send(),
plus the fact that servers always receive() promptly
in their main loop, would seem encouraging... too bad some
applications are inherently
peer-to-peer instead of client-server.
- Deadlock avoidance - This actually shows promise. Each
process could pre-declare the maximum number of buffers it
will have outstanding due to
send() operations before
it begins to run receive(). Then an intelligent
resource allocator in the kernel could refuse new processes
at declaration time if their needs are impossible to serve
given the number of kernel buffers, and could put processes
to sleep in send() before allocating
buffers.
- Deadlock detection - this could work... though it's
important to see that "just return -1 to some
send()
when you detect a deadlock" isn't a sufficient strategy:
the applications would need enough logic to yield buffers
already allocated--perhaps by destroying ports with pending
messages still attached to them...
Speaking of Unix pipes and sockets, how do they handle deadlock?
They don't--it's up to the application. Be careful out there.
For further reading...
Here is last semester's homework page.