3.3 - Models of OS Protection

Copyright(c), 1990, 1995 Fred Cohen - All Rights Reserved

The first basic theory of OS protection was proposed in 1976 by Harrison, Ruzzo, and Ullman [Harrison76], and several proofs were given to demonstrate that many desirable policies were impossible to implement. The theory is based on partitioning a system into a set of subjects and a set of objects. Subjects are active elements like people and the programs acting on their behalf, while objects are passive elements like data files, disks, memory, and the representation of programs on disks. A set of generic rights is associated with the abilities of subjects to operate on objects, and an access matrix is used to keep track of which rights each subject has to each object. An example matrix is given in Figure 3.1.

          File1     Box9     Array8     Wire9     Disk3     Case7    (Objects)
         --------------------------------------------------------
Alice    | W,E      E,X      R,W,X      D         R,W,X     E,X,D
Bob      | R,X,D    R        W          W,E       X         W
Carol    | B        B        B          B,W       B         B,R
Danny    | S        S,W      S          S,R       S         S
(Subjects)

          rights = Write, Read, eXecute, Encrypt, Decrypt, Backup, reStore

Figure 3.1 - An Example Access Matrix

In this example, only Carol can perform backups, while only Danny can restore from backups; neither of them alone can perform all operator tasks. Bob can encrypt Wire9, in which case Alice can decrypt it, but not read it. Any rights the designer wishes to consider can be modeled in this way. Typically they include 'read', 'write', 'execute', 'create', 'destroy', 'copy', 'modify', and 'extend'. Examples of rights that may be desirable, but that aren't usually considered, are 'decrypt', 'backup', 'restore', 'correct errors', 'maintain', 'authenticate', 'sign', 'mail', and 'share'.
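
As a concrete illustration, the matrix of Figure 3.1 can be written down directly as a table in software. The following is a minimal sketch in Python (the names and representation are chosen here purely for illustration) that stores the matrix and answers the question "does subject s hold right r over object o":

    # A minimal sketch of the access matrix in Figure 3.1.
    # Rights: W=Write, R=Read, X=eXecute, E=Encrypt, D=Decrypt,
    #         B=Backup, S=reStore
    ACCESS = {
        "Alice": {"File1": "WE",  "Box9": "EX", "Array8": "RWX",
                  "Wire9": "D",   "Disk3": "RWX", "Case7": "EXD"},
        "Bob":   {"File1": "RXD", "Box9": "R",  "Array8": "W",
                  "Wire9": "WE",  "Disk3": "X",   "Case7": "W"},
        "Carol": {"File1": "B",   "Box9": "B",  "Array8": "B",
                  "Wire9": "BW",  "Disk3": "B",   "Case7": "BR"},
        "Danny": {"File1": "S",   "Box9": "SW", "Array8": "S",
                  "Wire9": "SR",  "Disk3": "S",   "Case7": "S"},
    }

    def has_right(subject, right, obj):
        """Return True if 'subject' holds 'right' over 'obj' in the matrix."""
        return right in ACCESS.get(subject, {}).get(obj, "")

    assert has_right("Carol", "B", "Disk3")      # only Carol can back up
    assert not has_right("Alice", "R", "Case7")  # Alice has no Read on Case7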

It is fairly simple to form such a matrix and make a set of rights which seem appropriate to many applications, but unfortunately, protection is not that simple. Harrison, Ruzzo, and Ullman proved that in the general case, given an initial access matrix, it is impossible to determine whether or not a system will eventually grant a right that is not in the initial access matrix. Even if an OS correctly implements the protection system, there may be a sequence of events, such that a subject "S" gains a right "R" to an object "O", even though R was not initially in (S,O).

As a more concrete example, it has been proven that in a general-purpose computer system, the rights to read and write are transitive [Cohen86-2]. That is:

if A can read from (write to) B
and B can read from (write to) C,
        then A can read from (write to) C

From Figure 3.1, we observe that even though Alice is not given Read access to Case7, Carol can read Case7 and write Wire9, Danny can read Wire9 and write Box9, Bob can read Box9 and write Array8, and Alice can read Array8. There is thus a path through which Alice can eventually attain read access to the contents of Case7, even though the access matrix doesn't explicitly specify this right. As we use more complex types of access rights, their implications become harder to analyze, and we become less and less certain of the effects of our decisions.
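
This observation can be made mechanical. The following sketch (again illustrative Python, reusing the ACCESS table from the sketch above) treats "some subject can read o1 and write o2" as a flow from o1 to o2 and computes the transitive closure of these flows:

    # Sketch: compute where information can eventually flow, assuming the
    # ACCESS table sketched earlier.  Information moves from object o1 to
    # object o2 whenever some single subject can read o1 and write o2.
    def flows(access):
        objs = {o for rights in access.values() for o in rights}
        edges = {(o1, o2)
                 for rights in access.values()
                 for o1 in objs for o2 in objs
                 if "R" in rights.get(o1, "") and "W" in rights.get(o2, "")}
        closed = set(edges)
        changed = True
        while changed:                      # naive transitive closure
            changed = False
            for (a, b) in list(closed):
                for (c, d) in list(closed):
                    if b == c and (a, d) not in closed:
                        closed.add((a, d))
                        changed = True
        return closed

    # Case7 -> Wire9 (via Carol) -> Box9 (via Danny) -> Array8 (via Bob),
    # and Alice can read Array8, so Case7 can eventually reach Alice.
    assert ("Case7", "Array8") in flows(ACCESS)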

The major problem with this protection theory is that it is too general for most people to use, and too complex for most people to truly understand. Another bothersome point is that it portrays only positive rights of subjects over explicit objects, and doesn't allow negative constraints or constraints over classes of objects. As an example, we would like to be able to model systems in which denial of services is made impossible by preventing subjects from writing more than a certain number of objects, without explicitly specifying all subsets of objects writable by that subject.

The Bell-LaPadula (B-L) security model [Bell73] was developed at the government-funded Mitre Corporation to model military security for computer use. It models systems in which the sole goal is to prevent the leakage of secrets. The basic concept is that an unclassified user should not be able to read classified information, while a classified user should not be able to write classified information into an unclassified area.

The B-L model identifies a set of "security levels", an ordering relation between them, and two rules which, if properly enforced, have been mathematically proven to prevent information at any given security level from flowing to a "lower" security level. The security levels are typically "unclassified", "classified", "secret", "top secret", "even the name is secret", etc. The rules are called "the simple security rule" and "the *-property". The simple security rule states that a subject at a given security level Y cannot read information from a level X if Y is a lower security level than X. The *-property states that a subject at a given security level X cannot write information to a level Y if Y is a lower security level than X. The first rule is often called "no read up", while the second is often called "no write down". Figure 3.3 shows a pictorial of the B-L "security" model.

        Security Model
        --------------
    high|////////////|
    ... |//no read///| simple security rule
    n+1 |////////////|
    n   |            |
    n-1 |\\\\\\\\\\\\|
    ... |\\no write\\| *-property
    low |\\\\\\\\\\\\|
        --------------
    \\\ = no write    /// = no read

Figure 3.3 - The Bell-LaPadula Security Model
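
As a minimal sketch, and assuming security levels can be represented as simply ordered integers (a larger number meaning a more sensitive level), the two rules amount to nothing more than the following comparisons:

    # Sketch of the B-L rules with integer security levels
    # (a larger number means a higher classification).
    def bl_read_ok(subject_level, object_level):
        # simple security rule: "no read up"
        return subject_level >= object_level

    def bl_write_ok(subject_level, object_level):
        # *-property: "no write down"
        return subject_level <= object_level

    UNCLASSIFIED, SECRET, TOP_SECRET = 0, 1, 2
    assert not bl_read_ok(UNCLASSIFIED, SECRET)   # cannot read up
    assert not bl_write_ok(TOP_SECRET, SECRET)    # cannot write down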

It turns out that the enforcement of this policy in a computer system is quite straightforward so long as we don't try to do it precisely. A precise system of this sort is infeasible because we would have to trace every use of every bit of information [Denning82]. Instead, we normally make conservative estimates that allow us to implement the policy on a real system. When we don't maintain precision, there is a tendency for information to become more and more highly classified with time, until everything is classified at the highest security level. To avoid this problem, most systems of this sort permit the policy to be violated by a particular trusted subject.

The Biba integrity model [Biba77] was published at Mitre one year after the B-L model. Biba noticed that the B-L policy did not provide protection against a user at level X writing information at level Y when X was a lower security level than Y. Thus a low security user could overwrite highly classified documents unless some sort of integrity policy were in place.

Biba chose the mathematical dual of the B-L policy, wherein there is a set of integrity levels, a relation between them, and two rules which, if properly implemented, have been mathematically proven to prevent information at any given integrity level from flowing to a higher integrity level. Typical integrity levels are "untrusted", "slightly trusted", "trusted", "very trusted", "so trusted that we don't need a higher level of trust", etc. The first rule is that a subject at a given integrity level "X" cannot write information to another integrity level "Y" if X is lower integrity than Y. This rule assures that low integrity subjects cannot corrupt high integrity subjects (it is called "no write up"). The second rule is that a subject at a given integrity level "Y" cannot read information from another integrity level "X" if X is lower integrity than Y. This rule assures that high integrity subjects cannot become corrupted by reading low integrity information (it is called "no read down"). Figure 3.4 shows a pictorial of the Biba integrity policy.

          Integrity Model
          --------------
    high  |\\\\\\\\\\\\|
    ...   |\\no write\\|
    n+1   |\\\\\\\\\\\\|
    n     |            |
    n-1   |////////////|
    ...   |//no read///|
    low   |////////////|
          --------------
    \\\ = no write    /// = no read

Figure 3.4 - The Biba Integrity Policy
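
A corresponding minimal sketch of the Biba rules, again assuming integer levels (a larger number here meaning higher integrity), is simply the dual of the B-L sketch above:

    # Sketch of the Biba rules with integer integrity levels
    # (a larger number means higher integrity).
    def biba_write_ok(subject_level, object_level):
        # "no write up": low integrity must not corrupt high integrity
        return subject_level >= object_level

    def biba_read_ok(subject_level, object_level):
        # "no read down": high integrity must not read low integrity
        return subject_level <= object_level

    UNTRUSTED, TRUSTED, VERY_TRUSTED = 0, 1, 2
    assert not biba_write_ok(UNTRUSTED, TRUSTED)    # cannot write up
    assert not biba_read_ok(VERY_TRUSTED, TRUSTED)  # cannot read down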

Because the integrity policy is the dual of the security policy, any restrictions that apply to one apply to the other. Thus the precision problem remains, and information tends towards the lowest possible integrity level.

If we desire both secrecy and integrity, we will have a system where information tends towards the highest security level and the lowest integrity level, or in other words, we will have a great deal of highly classified, low integrity information (depending on your viewpoint, this may or may not be supported by historical data). Another side effect of combining secrecy with integrity is that the system tends towards a set of isolated subsystems. The simplest example of combining the B-L and Biba policies is shown in Figure 3.5.

           -----    -----   -----
    n+1    |///|    |\\\|   |XXX|
    n      |   | +  |   | = |   |
    n-1    |\\\|    |///|   |XXX|
           -----    -----   -----
    \\\ = no write    /// = no read    XXX = no access

Figure 3.5 - Combined Security and Integrity

As you can see, the result is that no communication is possible between users at different levels. Such a system is quite safe in that it isolates users from each other, but it doesn't afford any sharing whatsoever. One problem with eliminating sharing is that it defeats the purpose of having users together on the same system. Another, more important, problem is that it means a tremendous duplication of effort. For example, a user in one area could not use an editor written by a user in another area. Each area would have to separately embark on research and development projects, ignorant of the results of colleagues.
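
A minimal sketch of the combination, assuming a single integer level serves as both the security level and the integrity level of each subject and object (as in Figure 3.5), makes the isolation explicit:

    # Sketch: combining both policies, with the same integer level used
    # for both secrecy and integrity (as in Figure 3.5).
    def combined_read_ok(s, o):
        # B-L "no read up" (s >= o) and Biba "no read down" (s <= o)
        return s >= o and s <= o

    def combined_write_ok(s, o):
        # B-L "no write down" (s <= o) and Biba "no write up" (s >= o)
        return s <= o and s >= o

    # Both conditions reduce to s == o: a subject can only read and write
    # at exactly its own level, so the levels are completely isolated.
    assert combined_read_ok(1, 1) and combined_write_ok(1, 1)
    assert not combined_read_ok(2, 1) and not combined_write_ok(1, 2)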

Another policy comes directly from the military requirement that information be accessible only on a need-to-know basis. "Compartments" are designed to allow access only for those with a need to know the information, as determined administratively. The compartment policy can be defined mathematically as a set of compartments "C", with a subset "As" of C associated with each subject "s", and a subset "Bo" of C associated with each object "o". Any access request to object o by subject s is denied unless, for all c in Bo, c is in As.
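
A minimal sketch of this check, using sets for As and Bo (the compartment names below are purely hypothetical), is a single subset test:

    # Sketch of the need-to-know check: subject s may access object o only
    # if every compartment of o is among the compartments granted to s,
    # i.e. Bo is a subset of As.
    def compartment_ok(As, Bo):
        return Bo <= As                  # set containment: Bo subset of As

    assert compartment_ok({"NATO", "CRYPTO"}, {"CRYPTO"})
    assert not compartment_ok({"NATO"}, {"NATO", "CRYPTO"})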

The lattice policy was developed as a result of the observation that the set of security levels under B-L, the integrity levels under Biba, and the compartments under need-to-know could all be generalized to a single, less restricted mathematical structure without sacrificing any of their desirable properties [Denning75]. A lattice is a structure with an infimum "INF", a supremum "SUP", a set of other places, and a less-than relation "<". The INF can directly or indirectly send information to any other place in the structure. The SUP can directly or indirectly observe information in any other place in the structure. Other places can observe anything that is directly or indirectly less than them and send information to any place that is directly or indirectly more than them. Figure 3.6 shows an example of a security lattice and its corresponding subject/object matrix, wherein information flows from the bottom towards the top.

A Security Lattice                Corresponding S/O Matrix

                                                 Objects
SUP     [a]                               a  b  c  d  e  f  g  h
        / \                   S        a  rw r  r  r  r  r  r  r
     [b]   [c]                u        b  w  rw -  r  -  -  r  r
     |     / \                b        c  w  -  rw -  r  r  r  r
     [d] [e] [f]              j        d  w  w  -  rw -  -  r  r
      \  /  /                 e        e  w  -  w  -  rw -  r  r
       [g] /                  c        f  w  -  w  -  -  rw -  r
        \ /                   t        g  w  w  w  w  w  -  rw r
INF     [h]                   s        h  w  w  w  w  w  w  w  rw

Figure 3.6 - A Security Lattice and its Subject/Object Matrix
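
As a minimal sketch of how the S/O matrix of Figure 3.6 follows from the lattice, we can encode the "directly below" relation from the drawing and compute reachability; a place can observe anything at or below it and send to anything at or above it:

    # Sketch: the flow relation of Figure 3.6.  BELOW[x] lists the places
    # directly below x (information flows from bottom to top).
    BELOW = {"a": ["b", "c"], "b": ["d"], "c": ["e", "f"],
             "d": ["g"], "e": ["g"], "f": ["h"], "g": ["h"], "h": []}

    def dominates(x, y):
        """True if y is at or (transitively) below x in the lattice."""
        if x == y:
            return True
        return any(dominates(z, y) for z in BELOW[x])

    def can_read(s, o):   return dominates(s, o)   # observe what is below
    def can_write(s, o):  return dominates(o, s)   # send to what is above

    assert can_read("c", "g") and not can_read("f", "g")   # matches matrix
    assert can_write("g", "e") and not can_write("g", "f")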

Since any policy that could be formed by a combination of the former three policies could also be formed with a lattice, this generalization sufficed for a long time. The lattice policy was further generalized to a POset policy [Cohen86], which is just like a lattice except that no SUP or INF is required. This generalization is applicable to systems where no subject need be able to read or write all information, and it is particularly useful in discussing information networks where there is no single authority. Hierarchical policies, where local policies are enforced at different levels of control, have also been explored [Cohen87-2]. An example POset is shown in Figure 3.7, with information flowing from left to right.

        a--c-----h--k  m
         \         /  / \
        b--d--f--i--l--n  p--q
         \   / \         /
           e--g  j-----o

        r--t--v--x--y
            \   / \
        s--u--w-----z

Figure 3.7 - An Example POset