A Decentralized Model for Information Flow Control
Andrew C. Myers / Barbara Liskov
(summary by Larry Greenfield)

I. Goals of Security

The authors claim that the goals of security are:

- preventing accidental or malicious destruction of information
- controlling the release and propagation of information

Trust is fundamentally transitive: once user B releases data to user A, B is implicitly trusting everyone to whom A might pass it on. The authors provide a model that attempts to prevent user A from redistributing user B's data. Some examples of what a fine-grained security model should try to accomplish are:

- a medical study, where the patients are willing to trust a data extractor to anonymize their data
- a bank, where the customers don't want to give the bank the ability to easily redistribute information about their individual accounts, but the bank needs the ability to distribute aggregate information

In each scenario, people need to trust a piece of code or a trusted authority to manipulate and reclassify their data, but need not trust the rest of the code or other authorities to perform correctly.

II. Decentralized Information Flow Control

In this model, the users and authorities are principals (possibly organized into a hierarchy). We assume that a process has the authority to act for a set of principals. Processes manipulate values by storing them in and reading them from memory slots, or by writing them to and reading them from communication channels (to other processes). Every value has a label, which is its security class. Slots and channels have constant labels and cannot be relabeled.

Each label L has a set of principals owners(L), called the owner set. For each owner O, the reader set readers(L, O) is the set of principals to whom that owner is willing to release the data. The intersection of all the reader sets forms the effective reader set---the set of principals who can actually read the data.

Whenever a value with label L1 is assigned to a slot or sent to a channel with label L2, L2 must be a restriction of L1: owners(L1) must be a subset of owners(L2), and for each owner O in owners(L1), readers(L1, O) must be a superset of readers(L2, O). This ensures that no information leaks---a label can only gain owners and lose readers, so data only becomes more restricted. Likewise, when a value results from an operation performed on two or more other values, the labels are combined (joined): the owner sets are unioned and, for each owner, the reader sets are intersected.

Declassification is also possible: if a process acts for an owner O, it can add readers to readers(L, O) or remove O from the owner set. (One important point is that one owner need not know about, or consent to, another owner's decision to declassify information.)

The authors go on to present a method of statically checking labels to ensure correctness. This involves analyzing all the values examined to reach a basic block (since any value assigned there must be assumed to depend on them) and checking the labels of each assignment. It is also possible to infer labels this way, which makes writing the actual code significantly easier---only declassification needs to be written explicitly. Because this checking is done at compile time, a failed check cannot itself leak information. The authors also make provision for run-time checking of information whose label cannot be determined at compile time, such as data read off the disk and labeled only then. A failed run-time check can leak information, but such checks may be necessary in some cases. Additionally, the scheme outlined in this paper can still leak information through timing channels.
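
To make the label mechanics of section II concrete, here is a minimal Python sketch of the model as described above: labels with owner sets and per-owner reader sets, the restriction check applied on assignment, the join used when values are combined, and declassification by a single owner. The class and function names (Label, can_flow_to, join, declassify) are invented for this illustration; this is not the authors' implementation.

    # Illustrative sketch of the decentralized label model; not the authors' code.

    class Label:
        """A label: each owner maps to the set of readers that owner permits."""

        def __init__(self, policies):
            # policies: dict mapping an owner (principal name) to the set of
            # principals that owner is willing to let read the value.
            self.policies = {o: frozenset(r) for o, r in policies.items()}

        def owners(self):
            return set(self.policies)

        def readers(self, owner):
            # An owner with no policy imposes no restriction; None = "everyone".
            return self.policies.get(owner)

        def effective_readers(self):
            # Who can actually read: the intersection of all owners' reader sets.
            sets = list(self.policies.values())
            if not sets:
                return None  # unrestricted
            out = set(sets[0])
            for s in sets[1:]:
                out &= s
            return out


    def can_flow_to(l1, l2):
        """True if l2 is a restriction of l1, i.e. a value labeled l1 may be
        written to a slot or channel labeled l2 without declassification."""
        for o in l1.owners():
            if o not in l2.owners():
                return False                     # owners may only be added
            if not l1.readers(o) >= l2.readers(o):
                return False                     # readers may only be removed
        return True


    def join(l1, l2):
        """Label of a value computed from two labeled inputs: union the owner
        sets and intersect each owner's reader set."""
        policies = {}
        for o in l1.owners() | l2.owners():
            r1, r2 = l1.readers(o), l2.readers(o)
            if r1 is None:
                policies[o] = r2
            elif r2 is None:
                policies[o] = r1
            else:
                policies[o] = r1 & r2
        return Label(policies)


    def declassify(label, owner, extra_readers, acts_for):
        """A process that acts for `owner` may add readers to that owner's
        policy; no other owner's policy is touched or consulted."""
        assert owner in acts_for, "process lacks authority to act for this owner"
        policies = dict(label.policies)
        if owner in policies:
            policies[owner] = policies[owner] | frozenset(extra_readers)
        return Label(policies)


    # Example: alice owns her account balance and lets only herself and the
    # bank read it; an unrestricted (public) channel has the empty label.
    balance = Label({"alice": {"alice", "bank"}})
    public = Label({})
    assert can_flow_to(balance, balance)
    assert not can_flow_to(balance, public)      # would redistribute her data

    # A process with alice's authority may widen her reader set explicitly.
    audited = declassify(balance, "alice", {"auditor"}, acts_for={"alice"})
    assert audited.effective_readers() == {"alice", "bank", "auditor"}

Note that the declassified label is not a restriction of the original one; declassification is exactly the controlled escape hatch from the restriction rule, and it requires the authority of the owner whose policy is being relaxed.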
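
The static check can be sketched in the same style (reusing Label, join, and can_flow_to from the sketch above; again an illustration, not the authors' checker). It shows why the values examined on the way to a basic block matter: an assignment inside a branch implicitly depends on the branch condition, so the condition's label must be joined into the assigned value's label before the restriction check.

    def check_assignment(value_label, slot_label, pc_label):
        """pc_label: the join of the labels of all values examined to reach
        the current basic block (e.g. the condition of each enclosing branch).
        The assignment is legal only if the value's label, joined with the
        pc label, can flow to the slot's label."""
        return can_flow_to(join(value_label, pc_label), slot_label)


    # An implicit flow the check rejects:
    #     if secret > 0:        # pc label now includes secret's label
    #         public_slot = 1   # the constant 1 is public, but the assignment
    #                           # still reveals something about secret
    secret_label = Label({"alice": {"alice"}})
    public_label = Label({})
    assert not check_assignment(public_label, public_label, pc_label=secret_label)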