In this paper, we examine the effect of combined security and integrity on information flow in a computer system or network. We trivially extend integrity levels to integrity lattices, and show that the combination of security and integrity lattices results in a partitioning of systems into closed subsets under transitivity. We generalize the integrity and security lattices to a simpler flow model, demonstrate some common pitfalls in administration of such systems, and show that the most general structure required for representing information flow in a general purpose transitive information system is a partial ordering. We show means for calculating information effects under this model, demonstrate sample calculations, and explore the effects of time on proper maintenance of controls. We then explain the design of an automated administrative assistant for protection administration in systems and networks under this model and introduce a provable rule based system for maintaining security, integrity, compartments, and other flow restrictions in administration of a flow control matrix.
The "security" model of protection in a computer system was the first sound mathematical model of information flow that allowed proofs of mathematical properties to be used for establishing the security of a computer system [Bell]. The basic structure of this model is a linear relation on a set of "security levels" that is used to prove that information can only flow in one direction through levels, and thus to prove that information entering a "higher" security level cannot "leak" to a "lower" security level.
A generalization of the security model to a lattice structure was first introduced by Denning [Denning75], who noted that the linear relation could be generalized to a lattice structure in which "higher" and "lower" in the security model are mapped into supremum (SUP) and infimum (INF) respectively in the lattice. This affords the same degree of assurance and mathematical soundness as the security model, and allows more general information flow structures to be used. The lattice facilitates more accurate modeling of many real world situations, most notably the situation where many different "compartments" may exist at the same security level without information flowing between them.
A sound basis for limiting this generalization to a lattice structure is that, in any single processor, the hardware has access to all information, and thus there is a SUP whether we like it or not. Although this restriction is natural for a single processor, where there is necessarily a SUP, a computer network imposes no such physical restriction. We should be able to exploit this physical generality with a corresponding mathematical generalization.
At about the same time as the lattice model was produced, it was shown that the dual of the security model could be used to model the "integrity" of information in an information system [Biba]. The basic structure of this model is a linear relation on a set of "integrity levels" that is used to prove that information can only flow in one direction through those levels, and thus to prove that information in a "lower" integrity level cannot "corrupt" information in a "higher" integrity level.
In implementation, policies are most often modeled by the "subject/object" model, in which each of a set of "subjects" has or does not have each of a set of "rights" to each of a set of "objects" [Harrison]. The "configuration" of the rights at any given moment is maintained in an "access matrix", and thus the rights of subjects to objects may be modified by modifying this matrix. By properly restricting the configurations to only those which fulfill the desired policy, and properly implementing the model in hardware, we presumably implement a provably secure computer system that meets the specified policy.
Figure 1 shows examples of the security and integrity models
of information flow. In the security model, a subject at level "n"
cannot read information at a level "i" s.t. i>n, and cannot write
information to a level "i" s.t. i<n.
In figure 2, we show an example of a lattice based system and
a corresponding access matrix. The generic rights in the access
matrix for this example are read "r" and write "w", while subjects
and objects correspond to places in the security lattice. We note in passing
that the integrity model has not previously been extended to an integrity
lattice (although this extension is immediately evident from the
security lattice because of the duality of the integrity and security
models). We may denote the relation "A can read B" by "A r B" and the
relation "A can write B" by "A w B".
The formal rule for the security lattice policy is that a
subject "S" may read an object "O" only if S is a security SUP of O,
and S may write O only if S is a security INF of O. The formal rule
for the integrity lattice is just the dual; S may read O only if S is
an integrity INF of O, and S may write O only if S is an integrity SUP
of O.
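These four conditions can be checked mechanically. The following sketch (in Python, with a dominates() predicate assumed to be supplied for each lattice; the function names are ours, for illustration only) states them directly:

    # dominates(x, y) answers "is x a SUP of y in this lattice?"; every
    # element dominates itself.

    def may_read_sec(dominates, S, O):
        return dominates(S, O)    # S is a security SUP of O

    def may_write_sec(dominates, S, O):
        return dominates(O, S)    # S is a security INF of O

    def may_read_int(dominates, S, O):
        return dominates(O, S)    # S is an integrity INF of O

    def may_write_int(dominates, S, O):
        return dominates(S, O)    # S is an integrity SUP of O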
We note that because of the definitions given for the security
model and the lattice model, there is no mechanism provided to prevent
writing of higher level objects by lower level subjects. The lack of
integrity restriction in the security model, and the corresponding
lack of security restriction in the integrity model, are often
countered by the use of a "discretionary" access control policy which
allows subjects control over rights not explicitly restricted by the
security or integrity policy [Denning82]. Although this may be
of practical value in many cases, the only "real" restrictions on the
flow of information are embodied in mandatory policies.
A next logical step might be to incorporate the integrity
model restriction of "no write up" in the security model to allow
information to be read from below, but not written to above. The
problem with this policy is that an effective "write up" can be
performed if there is ever a "read down", since the "read down" might
allow a Trojan horse [Fenton] to be placed at the higher level.
The Trojan horse might read a particular low level object that
describes objects to be read down, and thus effectively written up.
In effect, we can generalize the "read" and "write" rights "r" and "w"
to a single "flow" right "f", where (a f b) iff [(a w b) or (b r a)].
Preventing illicit dissemination and modification of information
clearly calls for a policy that combines security and integrity. The
combination of security and integrity policies of the sorts given above
results in the partitioning of a system into closed subsets under transitivity
[Cohen84]. This partitioning is necessary in order to prevent global
information flows.
Some Simple Demonstrations

We will now use access matrices to graphically demonstrate
properties of interest to our studies. Although the solutions we
show are for specific cases, they reveal general properties that
are not necessarily self evident.
We begin with the matrix for the security and integrity models
whose access conditions were stated earlier, and their combination in the
case where security and integrity levels are identically divided. This
is shown graphically in figure 3.
An alternative presentation of the same information may be used
interchangeably where applicable [Cohen84]; the case from figure 3 is
represented this way in figure 4. The property made clear by this
example is that the combination of the security and integrity models
leads to a system that is closed under transitivity, and at best
limits the spread of integrity corruption and/or security leaks to a
closed subset of the system.
A similar analysis can be used to demonstrate that, if a
security lattice is combined with an integrity lattice such that
security and integrity relations are identically aligned, isolationism
results. We show this for an example in figure 5 (the previous lattice
example with subjects a, b, and d removed).
Cases where security and integrity levels are not aligned
also tend towards isolationism, as is shown in figure 6.
The "combination" of subjects, is a case where distinct
subjects are combined from the point of view of the security or
integrity policy as if they were a single subject. Thus any right
given to one subject in a given model is automatically granted to the
other. If we allow alignments to vary by combining sublattices of
otherwise identical security and/or integrity structures, we achieve
systems in which dissemination and corruption are limited to subsets
of the system that are closed under transitivity. We show examples
using the lattice from figure 5 above in figure 7 below, where e and f are
combined in the integrity lattice, and where g and h are combined in
the security lattice.
Notice that in the former case, since e and f are incomparable
in the security domain and have identical SUPs, no effect is achieved
by combining their integrity. In the latter case, g is given flow
access to h. The resultant structure may be shown as a directed graph
as in figure 8.
We stated earlier that information can be communicated to the
transitive closure of information flow starting at its initial source.
Given an access matrix of the type shown above, we can compute an
effective access matrix which tells us the potential information
effects of subjects on other subjects under transitivity. A simple
example is given in figure 9. This result is not likely to be
predicted by a typical security administrator, and automated tools for
evaluating access matrices to generate equivalent effective matrices
may be quite useful. Efficient algorithms for this evaluation are not
hard to find.
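One candidate, given here as a sketch rather than as part of any cited system, is Warshall's transitive closure algorithm over a boolean flow matrix M in which M[i][j] means (i f j); its O(n^3) cost is insignificant at administrative scales:

    def effective_matrix(M):
        # Close the flow matrix under transitivity (Warshall's algorithm).
        n = len(M)
        E = [row[:] for row in M]
        for k in range(n):
            for i in range(n):
                if E[i][k]:
                    for j in range(n):
                        if E[k][j]:
                            E[i][j] = True
        return E

Applied to the matrix of figure 9, this computation fills every entry with "f", which is precisely the effective equivalent shown there.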
To see the above conclusion more clearly, we follow a simple
series of steps as follows:

    (a f a) and (a f e) and (a f f) and (a f h)    ;given
    (h f b) and (h f c) and (f f d) and (b f g)    ;given
    (a f h) and (h f b) => (a f b)                 ;conclusion
    (a f h) and (h f c) => (a f c)                 ;conclusion
    (a f f) and (f f d) => (a f d)                 ;conclusion
    (a f b) and (b f g) => (a f g)                 ;conclusion
    thus (a f *)                                   ;a flows to all

    (a f a) and (b f a) and (d f a) and (e f a)    ;given
    (g f a) and (h f a) and (c f b) and (f f d)    ;given
    (c f b) and (b f a) => (c f a)                 ;conclusion
    (f f d) and (d f a) => (f f a)                 ;conclusion
    thus (* f a)                                   ;all flows to a

    (* f a) and (a f *) => (* f *)                 ;global communication
We conclude from these demonstrations that the access matrix
is a useful tool for evaluating the effect of simultaneously using a
security and integrity policy, that the combination of these policies
tends to partition systems into closed subsets under transitivity, and
that the transitive nature of information flow has far ranging effects
on the security and integrity provided by a protection system.
We have just seen that the most general form of access control
allows so much freedom to an administrator that seemingly sensible
policy decisions may have unexpected, and potentially catastrophic,
effects on the actual protection provided. The mathematical structure
of the security and integrity lattices guarantees that information
flow is limited, and thus that inauspicious administration cannot
cause global access as in the last example. Unfortunately, this
combination tends to produce situations where isolationism results, and
this may be too severe a restriction for a desired level of communication.
Furthermore, within a given level and compartment, we may desire
additional flow limitation.
There are three basic remedies to the above situation [Cohen84].
One remedy is to limit the functionality of the system so that
information may not be used in a sufficiently general manner as to
have transitive effects. This solution is infeasible for almost any
general purpose machine, and little is known about the degree of
limitation necessary to prevent transitive information effects. A
second remedy is to limit the transitivity of information flow by
keeping track of all subjects that have had effects on each object
and restricting certain sets of subjects from affecting certain sets of
subjects. This solution is difficult to implement, tends to move a
system towards isolationism if imprecise implementations are used, and
in order to be precise requires solving an NP-complete problem. The
final remedy, and the one we will now consider, is to find a
mathematical structure that is more general than lattices, and yet
which maintains sufficient limitations on information flow to prevent
the all consuming transitivity that arises in the most general case.
More General Mathematical Structures

We begin by specifying the information flow relation "f". We assume
transitivity of the flow relation, and thus that pairs (and sets) of
subjects with mutual flow are equivalent. We collapse each
equivalence class into a single subject, and get an antisymmetric,
transitive binary relation:

    (S,{f}):
        (a f b) and (b f c) => (a f c)    ;transitive
        (a f b) and (b f a) => (a = b)    ;antisymmetric
We note that in a structure where equivalence classes
collapse, information in two non-identical equivalence classes A and B
cannot be related so that ((A f B) and (B f A)), since this would make
A and B identical by antisymmetry. Furthermore, there can be no
structure in which information flowing from A to B can reenter A, since
this would mean that (A f B) and (B f A) (by transitivity), and thus
that A and B (and all other elements of this cycle) are equivalent.
Thus, we have a relation "<" such that A < B iff (A != B) and (A f B).
We note that if there is a subject "b" such that not(b f b), then
wherever there is a subject "a" with a < b and a subject "c" with
b < c, we may eliminate subject b and use a < c instead. Thus we can
systematically eliminate any such subject from the structure without
changing the effective information flow behavior. We conclude that
the structure of interest is a reflexive, transitive, antisymmetric
binary relation, commonly called a partial ordering, and that this
seems the most general structure we can use to guarantee restricted
information flow:

    (S,{f}): for all a,b,c in S,
        [(a f a)                              ;reflexive
         and (a f b) and (b f c) => (a f c)   ;transitive
         and (a f b) and (b f a) => (a = b)]  ;antisymmetric
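The collapsing step described above can be sketched as follows (in Python, reusing effective_matrix from the earlier sketch, and assuming the flow matrix has a true diagonal since (a f a) for all a):

    def collapse_equivalences(M):
        # Merge mutual-flow equivalence classes; the quotient relation is
        # an antisymmetric, transitive (and reflexive) relation: a POset.
        E = effective_matrix(M)
        n = len(E)
        rep = list(range(n))               # class representative per subject
        for i in range(n):
            for j in range(i):
                if E[i][j] and E[j][i]:    # mutual flow: i and j equivalent
                    rep[i] = rep[j]
                    break
        classes = sorted(set(rep))
        idx = {c: k for k, c in enumerate(classes)}
        P = [[False] * len(classes) for _ in classes]
        for i in range(n):
            for j in range(n):
                if E[i][j]:
                    P[idx[rep[i]]][idx[rep[j]]] = True
        return rep, P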
Figure 10 exemplifies this structure graphically, with flow
directed from left to right. Notice that the difference between this
and previous structures is the lack of a SUP or INF for each pair
of subjects. For example: a and b have no INF, so no subject can
affect both; j and k have no SUP, so they cannot both affect any other
subject; g and c have no SUP and no INF, so no single subject can
either affect both or be affected by both; and i and j have both a SUP
and an INF, so that subjects a, b, e, d, and f can affect both i and j, and
subjects p and q can be affected by both i and j.
We note here some of the results that can easily be obtained
from a POset, using figure 10 as an example. The effective POset is
formed by applying transitivity to information flow, and is most
easily displayed in matrix form. This answers the question of
reachability immediately, without undue complexity for the observer.
We call the effective POset a "Flow Control POset" (FCP).
The FCP corresponding to a portion of figure 10 is given in
figure 11 below. Subjects can always be labeled so as to produce an
upper triangular FCP matrix, since if no reordering of a matrix to
upper triangular form exists, there must be two entries "x" and "y"
that are equivalent because (x f y) and (y f x). Every upper
triangular boolean matrix maps into a POset, but not all upper
triangular matrices map into an FCP. Finally, we note that completely
independent subsets of a system can exist within a partial ordering,
as in figure 10, and thus that many distinct, yet equivalent, FCPs
can exist.
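Finding such a labeling is simply a topological ordering of the (collapsed) POset; a sketch, using Kahn's algorithm and ignoring the reflexive diagonal:

    def upper_triangular_order(P):
        # Return a subject ordering under which the FCP matrix is upper
        # triangular; failure indicates residual mutual flow (not a POset).
        n = len(P)
        indeg = [sum(1 for i in range(n) if i != j and P[i][j])
                 for j in range(n)]
        ready = [j for j in range(n) if indeg[j] == 0]
        order = []
        while ready:
            i = ready.pop()
            order.append(i)
            for j in range(n):
                if i != j and P[i][j]:
                    indeg[j] -= 1
                    if indeg[j] == 0:
                        ready.append(j)
        if len(order) < n:
            raise ValueError("two subjects are equivalent: not a POset")
        return order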
The effects of subject collusion can be easily determined by
ORing the rows of any set of colluding subjects to find their effective
joint flow. As examples, the effects of c, d, and g colluding, and
of a and b colluding, are given in figure 12. We quickly see that a
and b can collude to affect the entire example, while c, d, and g
have only a limited collusive effect.
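The calculation is a one-liner given the effective matrix E from the earlier sketch; the colluding set is given as a list of subject indices:

    def collusion_row(E, colluders):
        # The joint flow of a colluding set is the OR of its members' rows.
        n = len(E)
        return [any(E[s][j] for s in colluders) for j in range(n)]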
We note that the POset is really a "classification scheme"
in this case, just as the Bell-LaPadula and Biba models are
classification schemes. We may in practice have equivalent subjects
in an actual system, but we must be aware of the fact that they are
equivalent from the standpoint of our flow restriction model.
Additional Considerations

We now consider the uses of transitivity, subject equivalence,
and collusion calculations, and their effect on configurations of a protection
system of the sort just considered. By way of example, we consider
protection in secure computer networks [Cohen85] based on
distributed domains. As our first example, we consider the network
given in figure 13.
This network comprises four "Trusted Computing Bases" (TCBs),
where a TCB is a computer system that has been proven to maintain the
information flow policy specified by its access matrix. In this form
of network, cryptography and authentication are used to allow communication
through intermediate computers [Cohen85]. Each letter (A,B,...,K)
is used to represent a single user in the network. Security/Integrity
levels are denoted by the letters (S,C,U,N) at the left of the figure.
First we consider equivalencies between users. Since A,G, and
J are at the same security and integrity level, they can communicate freely
with each other. Thus (A f G) and (G f J) and (J f A), and by our
previous analysis, A=G=J. Similar analysis produces B=H=K, C=E=I, and D=F.
Note that since we are combining security and integrity identically, there
is no communication between levels.
If we now consider the effect of collusions under the so-called
"Massive Takeover" attack, wherein an entire computer is taken over, we
see immediately that if TCBx is taken over, it compromises users
{A,B,C,D}. Their collusive effect compromises the entire network.
TCBy under this attack compromises {C,D,E,F,I}; TCBz compromises
{A,B,C,E,G,H,I,J,K}; and TCBw compromises {A,B,G,H,J,K}. It is
because of this severe ramification that compartments are important to
further restrict information flow within a network.
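The compromised set under this attack is just the collusion of everything the machine hosts; as a sketch in terms of collusion_row above, with tcb_users an assumed map from each TCB to the subject indices it hosts:

    def takeover_effect(E, tcb_users, tcb):
        # Massive takeover of a TCB compromises the collusion of its users.
        return collusion_row(E, tcb_users[tcb])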
An example network with security, integrity, and compartments is
shown in figure 14. In this case, we consider the situation where
individuals are granted access to multiple levels and compartments in
different computer systems in the network (presumably under different user
IDs, as is commonly the practice). In this example; security/integrity levels
are (1,2,3); compartments are (X,Y), individuals are (A,B,C), and N is a
special "network" compartment that allows TCBs to communicate through TCBs
without an equivalent compartment [Cohen85].
In this case, (level 3,TCB1,A) can only affect (3,3,B), and
thus attacks based on a single user ID are quite limited in their effect.
Similarly, a massive takeover of TCB1 or TCB2 only grants half of the
network to the attacker. TCB3 takeover grants the entire network since
TCB3 has all compartments and all levels in the network. In practice,
this situation is to be avoided. Most importantly, we must consider
collusions of individuals. The following table summarizes the effects
of individuals and collusions of individuals on the network:

    Individual(s)   (Compartment,Level) pairs affected
    A               (X,1),(X,3),(Y,1),(Y,2),(Y,3)
    B               (X,2),(X,3),(Y,1)
    C               (X,1),(Y,2),(Y,3)
    A+B             everything
    A+C             same as A
    B+C             everything
    A+B+C           everything
Note for example that A has nothing to gain by colluding with C,
while B and C can collude to attain full access. The concept of collusion is
very powerful in attacking and defending computer systems and networks, and
the limitation of the effects of collusion is critical to securing actual
systems against actual attacks. This also has ramifications in other
security domains such as preventing theft of goods in a warehouse. In the
distributed administration of secure computer networks, the ramifications
of protection system configuration may be quite unexpected to the
average administrator.
The Effects of Time on Flow Control

We call an indivisible modification of a protection system a
"move", and define a move as "valid" iff it produces a "valid"
configuration given the rules on configurations. Our analysis of
moves begins with restrictions on the rules for determining valid
configurations.
We now examine three different time analyses of a system designed
to enforce a security policy, an integrity policy, and a compartment
policy. The "quasistatic case" is the case where only the
configuration that results from a proposed move is of interest, and
effects of previous configurations are unimportant. The "universal
time case" is the case where the effects of all past configurations are
of interest to the validity of the proposed move. In this case, we are
concerned with the lingering effects of corrupted information and/or
the eventual dissemination of information. As a compromise, the
"window of time case" is the case where the effects over a given span
of time are of interest to the validity of proposed moves.
As a baseline for the analysis of the quasistatic case, we
perform the following procedure:

    assume the proposed move
    verify the validity of the resulting configuration
    accept or reject the move
Since the verification process consists of testing the
resulting configuration against the set of rules in force, no move
that violates a rule will be accepted, and any move that does not
violate a rule will be accepted. The problem with this evaluation
technique is that there is a case where a sequence of independently
valid moves inadvertently allows information to flow where it should
not. As an example, if user A may be at level 1 or 2, user B may only
be at level 2, and user C may only be at level 1, users B and C may
communicate as follows:

    (B f A)    ;information may move from B to A
    ...        ;and does as time passes
    (B ~f A)   ;B may no longer flow to A
    (A f C)    ;information may now flow from A to C
    ...        ;B's information transits to C
We can see that if (B f A) and (A f C) were simultaneously
true, the rule based system would determine that because of
transitivity (B f C), and would disallow this because B and C are at
different levels. Since A may be at either level, (B f A) alone is
legal as is (A f C) alone. The problem we have here comes from the
effect of time on information flow. As an attempted solution, we can
simply not reflect the removal of flows in the evaluation
configuration. This scheme, in effect, remembers all previous
information flows, and only permits flow if there is no historical
flow that, when combined with the current flow, would result in an
illicit flow. We use the following procedure for this solution:

    assume the move
    verify validity using historical configuration
    accept or reject move
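A minimal sketch of this history-keeping evaluation, in Python, reusing effective_matrix from before and assuming a predicate legal(i, j) that states whether the mandatory policy permits flow from i to j (with legal(i, i) always true):

    def propose_flow(H, a, b, legal):
        # H is the history matrix: every flow ever granted, with removals
        # deliberately not reflected. Accept (a f b) only if the closure
        # of the history plus the new flow violates no rule.
        trial = [row[:] for row in H]
        trial[a][b] = True
        E = effective_matrix(trial)
        n = len(E)
        for i in range(n):
            for j in range(n):
                if E[i][j] and not legal(i, j):
                    return False           # reject the move
        H[a][b] = True                     # accept, and remember it forever
        return True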
The problem with this solution is that it is imprecise, in
that there are moves that are legitimate even in light of historical
information, but that are considered invalid if we simply ignore all
cancellations of flows. An example is the following sequence of moves:

    (A f C)    ;information may flow from A to C
    ...        ;and does as time passes
    (A ~f C)   ;A may no longer flow to C
    (B f A)    ;information may now flow from B to A
    ...        ;and does as time passes
In this example, even though (A f C) and (B f A) are illegal
together, for all time ~(C f B) and ~(B f C), so no flow rule is violated.
The actual sequence of moves must be considered if we are to precisely
prevent historical information from causing illicit flows. If we are
to precisely track the time transitivity of information flow, we have
two choices: we can attempt to analyze all previous moves in sequence
to determine all flows that may have resulted in information
transiting into the requested flow recipient; or we can maintain at
all times a precise configuration that reflects the time transitive
information flow up to the present, and augment it with every move.
Calculating time transitivity from the sequence of moves
can be done by forward evaluation of a "time flow configuration"
(TFC), which is formed by calculating all transitive flows into each
object for all moves as follows:

    TFC at system initialization (move 0) =
        for each object "A"
            TFC(B,A) iff (B f A) in FCP at initialization
    for N>0, TFC at move N "(A f B)" =
        for all X s.t. TFC(X,A) at move N-1
            TFC(X,B) at move N
        AND for all X s.t. TFC(X,B) at move N-1
            TFC(X,B) at move N
        AND for all X s.t. FCP(B,X),
            for all Y s.t. TFC(Y,B) at move N,
                TFC(Y,X) at move N
        AND TFC(A,B) at move N
Our initial TFC is just the FCP of the initial configuration
since this comprises all of the potential flows into each object.
From this point forward, every move "(A f B)" we make introduces the
possibility that a previous flow to A now flows to B. Thus, for every
subject "X" s.t. TFC(X,A) at the previous move, we must conclude TFC(X,B)
after this move. Similarly, all entries "X" s.t. TFC(X,B) at the
previous move are still in the historical flow to B after additional flow
to B is allowed from A, so TFC(X,B) is also true after (A f B). When
we add a flow (A f B), we must also add TFC(*,B) to TFC(*,X) for every
X s.t. FCP(B,X), to include the transitivity effects of A's history on
the histories of all objects currently affected by B. Finally, we must
conclude that since (A f B), TFC(A,B) is true after the move. This last
condition is implied by TFC(X,A) => TFC(X,B), since we assume (A f A).
The entire calculation requires N^2 space in the number of subjects,
takes at most quadratic time in the number of subjects per move, and is
linear in the number of moves considered. It is thus quite easy to
perform if the FCP and TFC are maintained throughout the history of
the system.
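A sketch of one such move under the update rules above; TFC and FCP are boolean matrices, FCP[b][x] means that b currently flows to x, and maintenance of the FCP itself is left aside:

    def tfc_move(TFC, FCP, a, b):
        # Apply move (A f B) to the time flow configuration.
        n = len(TFC)
        into_b = [TFC[x][a] or TFC[x][b] for x in range(n)]
        into_b[a] = True                   # TFC(A,B) after the move
        for x in range(n):
            TFC[x][b] = into_b[x]
        for x in range(n):                 # propagate B's new history to
            if FCP[b][x]:                  # every X s.t. FCP(B,X)
                for y in range(n):
                    if TFC[y][b]:
                        TFC[y][x] = True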
The problem with using the TFC for limiting moves is that it
may become unduly restrictive as time goes on. Information aging, for
example, is commonly used to justify automatic declassification of
information, and a corresponding policy might be used to justify
removal of TFC flow restrictions. A "window of time" version of a
TFC can be generated by assuming that the initial configuration of the
system is the FCP configuration at the beginning of the window of time,
and computing the TFC using all subsequent moves. We must of course
remember all historical moves over the window of time, and must also
either keep historical configurations or a complete sequence of
historical moves from which we can recompute the FCP for the beginning
of the window.
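A sketch of the window-of-time computation, reusing effective_matrix and tfc_move from the earlier sketches; fcp_start is the FCP as it stood at the start of the window, and moves is the recorded sequence of flow additions within it (removals would update the FCP but never shrink the TFC):

    def windowed_tfc(fcp_start, moves):
        TFC = [row[:] for row in fcp_start]    # the initial TFC is the FCP
        FCP = [row[:] for row in fcp_start]
        for (a, b) in moves:
            FCP[a][b] = True
            FCP = effective_matrix(FCP)        # keep the FCP transitively closed
            tfc_move(TFC, FCP, a, b)
        return TFC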
Automatic Administrative Assistance

By using the above mathematical basis, we can automatically
evaluate the FCP, equivalencies of subjects, and the
effect of collusions under a given configuration of a protection
system with a flow relation. We augment this basic capability with a set
of rules that determine whether a given configuration is allowable
given installation dependent parameters to form a configuration
evaluator tailored for a given system. We form a dynamic analysis system
by performing static evaluations on configurations resulting from
proposed moves, and reporting on the effects. Finally, we may
augment this capability with a set of inductive rules for proposing
moves that are likely to be acceptable to the protection system while
fulfilling desired information flow requests.
In an FCP system where classical protection models are
required, we form an assistant based on the security and integrity
models. We use the mathematical restrictions on communications under these
models as the rules for evaluation of configurations. A configuration is
acceptable only if these rules are not violated. Rules for evaluation
of collusions, determining FCPs, and limiting equivalencies
of subjects, can also be used to form more restrictive systems while
still maintaining security and integrity constraints. We can
assure that added rules do not violate previous rules by requiring that
all rules find the configuration under consideration acceptable before
it is accepted.
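This conjunction discipline is trivial to state; in the sketch below each rule is a predicate over a proposed effective matrix, and the example security rule (with an assumed level assignment) permits flows only upward in security level:

    def acceptable(E, rules):
        # A configuration is acceptable only if every rule accepts it, so
        # adding rules can only further restrict the policy.
        return all(rule(E) for rule in rules)

    def security_rule(level):
        def rule(E):
            n = len(E)
            return all(level[i] <= level[j]
                       for i in range(n) for j in range(n) if E[i][j])
        return rule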
Compartments and other information flow restrictions may be
added to the rule based evaluation system by simply adding rules.
Since a rule based system (RBS) can be quite simple in design and
implementation, it should be relatively easy to prove its correctness
using the automated theorem proving techniques already used for proving
the correctness of secure operating systems. Once a basic RBS has been
proven correct,
we need only prove that rules are correct for a given policy in order
to prove a given implementation correct. Security, integrity, and other
properties of systems are proven by proving that evaluations performed
by rules in the RBS are mathematically consistent with the specified
policy. Since these rules are themselves just mathematical conditions,
this mapping should be quite simple. Figure 15 shows the architecture
of such an RBS.
Since high level automatic decision making is submitted to the
RBS before acceptance by the system, we need not trust our induction
technique, nor prove its correctness in order to be certain that it makes
no illicit moves. Indeed, we can design the high level structures to
generate a multitude of suggestions, have these suggestions submitted to
the RBS, and use the results of evaluation to determine the utility of
inductive paths and to filter out invalid administrative suggestions.
We must be careful here, for there are several traps that the designer
of such a system may fall into. These traps spring from the mathematics
used to determine the policy of interest. For example, an inconsistent
set of rules may refuse to allow any modifications of the access matrix.
Certain rule sets may tend towards specific states of the protection system,
while others may prevent certain valid states from being reached from other
valid states.
In order for a set of rules to be of practical utility, we
must restrict them in at least some basic ways. If the set of rules
are inconsistent, then we will never be able to make a modification
because there will be no configuration for which all rules will be
able to agree. If the rules are incomplete, then we may have cases
where rules cannot produce a result, and this is clearly unacceptable
for the same reason. We restrict ourselves to a finite set of rules
since an infinite set of rules cannot be evaluated in a finite time.
Similarly each rule must be decidable so that decisions are made in a
finite time. Finally, we require that the rules reflect the desired
policy of the protection system for if they do not, they are of little
use. We note that many desirable policies are in practice
unattainable, and that we must restrict ourselves to attainable goals
if we wish to attain them.
A simple implementation of an assistant that maintains
security, integrity, and compartments, while allowing arbitrary
information flow controls within those restrictions may be formed by
implementing the following moves, and using the previously explored
techniques to validate resulting configurations:
To add an individual, we require that the minimum and maximum security
and integrity levels, the set of compartments, and the maximal effect of
the individual be within system limits:

    Add-individual A (min-sec, max-sec, min-int, max-int, effect, comp, comp, ...):
        Min Sec A >= Min Sec System
        Max Sec A <= Max Sec System
        Min Int A >= Min Int System
        Max Int A <= Max Int System
        Max Effect A <= Max Effect System
        Comp A SUBSET Comp System
To add a given user ID for individual A, we need to know the
individual, the compartment, the security level, and the integrity
level for the given user ID, and must verify that these do not cause
the configuration to go beyond the allowable constraints on the
individual:

    Add-ID Ax (sec, int, comp):
        Min Sec A <= Sec Ax <= Max Sec A
        Min Int A <= Int Ax <= Max Int A
        Comp Ax ELEMENT Comp A
        Effect A <= Max Effect A
To add an information flow from user ID Ax to user ID By, we
must verify that the flow does not violate security, integrity,
compartment, or effect constraints:

    Add-flow (Ax f By):
        Sec Ax <= Sec By
        Int By <= Int Ax
        Comp Ax = Comp By
        Effect A <= Max Effect A
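A sketch of the Add-flow check in Python; the record fields mirror the parameters above and are illustrative only:

    from dataclasses import dataclass

    @dataclass
    class UserID:
        sec: int          # security level
        integ: int        # integrity level
        comp: str         # compartment

    @dataclass
    class Individual:
        effect: int       # current effect of the individual
        max_effect: int   # Max Effect A

    def add_flow_ok(Ax: UserID, By: UserID, A: Individual) -> bool:
        return (Ax.sec <= By.sec                 # Sec Ax <= Sec By
                and By.integ <= Ax.integ         # Int By <= Int Ax
                and Ax.comp == By.comp           # Comp Ax = Comp By
                and A.effect <= A.max_effect)    # Effect A <= Max Effect A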
In order to remove flows, user IDs, or individuals, we must
verify that these removals do not cause other rules to be violated;
but in terms of the effect on security, integrity, and compartments,
removal has no immediate effect. Note that an individual should
really never be removed as it is sufficient to remove all IDs for that
individual in order to eliminate all information flows. We must be
careful to remove all information flows to and from an ID from the
flow matrix before claiming its deletion, and to remove all IDs for
a given individual before claiming the individual is deleted.
Although considerable mathematical work is still required to
investigate underlying policy issues for static and dynamic configurations
of protection systems, a simple automated administrative assistant of
the sort shown above is a significant step towards eliminating errors in
the administration and configuration of secure computer systems and networks.
A system of this sort is currently being prototyped for administration and
configuration of secure computer networks with security, integrity, and
compartments, and further developments along these lines are expected to
follow further mathematical developments.
Summary and Further Work

We have shown by a series of arguments that the structure of
preference for describing and analyzing the security and integrity
policy of systems and networks is the POset. We have demonstrated the
difficulty with more general structures in terms of their obscuration
of the ramifications of administrative decisions, and have shown the
inadequacy of less general systems for describing many desired situations.
We note here the difference between our discussion and a very
similar discussion in [Denning82]. Denning states that there is
no loss of generality in the use of a lattice structure since we can
transform any information flow structure into an equivalent lattice
structure. The first problem with this transformation is that it is not
trivial to see, given a particular access matrix, what the resulting
lattice will look like. This tends to obscure the real meaning of the
information flows being permitted from their specification in an
access matrix. The second problem is that we are required to have a
SUP and INF in our model which has no physical meaning in a computer
network. The reason for having a SUP and INF seems to be purely for
the elimination of cases in the analysis where no SUP or INF exist.
This aspect of the analysis is important in the administration of a
secure system, as we have demonstrated. Finally, we note that
our derivation of the POset comes from the desire to merge and
generalize the security and integrity models, and that the fact that
they lead to a POset is in itself important to their understanding.
Further work on the use of POsets to describe protection
mechanisms will probably include theoretical limitations on the use of
POsets for distributed security administration, and extensions of the
automated administrative assistant presented herein. The use of POsets in
conjunction with secure network design criteria [Cohen85] will
likely result in a design with distributed administration and automated
administrative assistance. Mathematical analysis of dynamic configurations
of protection systems is likely to produce a more comprehensive theory
of information flow analysis, which will have ramifications for many other
fields.
Bell, "Secure Computer Systems: Mathematical
Foundations and Model", D. E. Bell and L. J. LaPadula, The Mitre
Corporation, cited in many papers, 1973, Bell
Biba, "Integrity Considerations for Secure Computer
Systems", K. J. Biba, USAF Electronic Systems Division, Biba, cited in
Denning82, 1977
Denning82, "Cryptography and Data Security",
D. E. Denning, Addison Wesley, 1982, Denning82
Denning75, "Secure Information Flow in Computer
Systems", D. E. Denning, PhD Thesis, Purdue Univ, W. Lafayette, Ind.,
May, 1975, Denning75
Harrison, "1976", M. A. Harrison, W.L. Ruzzo, and
J.D. Ullman, ACM, Harrison, Protection in Operating Systems,
Proceedings
Fenton, "J. S. Fenton", Fenton, Information
Protection Systems, U. of Cambridge, 1973, Cited in Denning
Cohen84, "Computer Viruses - Theory and
Experiments", F. Cohen, DOD/NBS, 7th Security Conference, Sept, 1984,
Cohen84
Cohen85, "A Secure Computer Network Design",
F. Cohen, IFIP, Computers and Security, March, 1985, Cohen85
A Security Lattice Corresponding Access Matrix
Objects
[a] a b c d e f g h
/ \ S a rw r r r r r r r
[b] [c] u b w rw - r - - r r
| / \ b c w - rw - r r r r
[d] [e] [f] j d w w - rw - - r r
\ / / e e w - w - rw - r r
[g] / c f w - w - - rw - r
\ / t g w w w w w - rw r
[h] s h w w w w w w w rw
Security Model Integrity Model Combined Model
h +1 n -1 l h +1 n -1 l h +1 n -1 l
-------------- -------------- --------------
high f - - - - f f f f f f - - - -
n+1 f f - - - - f f f f - f - - -
n f f f - - AND - - f f f = - - f - -
n-1 f f f f - - - - f f - - - f -
low f f f f f - - - - f - - - - f
----- ----- -----
n+1 |///| |\\\| |XXX|
n | | + | | = | |
n-1 |\\\| |///| |XXX|
----- ----- -----
\\\ = no write /// = no read XXX = no access
Lattice Security Lattice Integrity Lattice Resulting Matrix
c e f g h c e f g h c e f g h
[c] c f - - - - c f f f f f c f - - - -
/ \ e f f - - - e - f - f f e - f - - -
[e] [f] f f - f - - AND f - - f - f = f - - f - -
/ / g f f - f - g - - - f f g - - - f -
[g] / h f f f f f h - - - - f h - - - - f
\ /
[h]
Integrity Security Combined
[a] [a] [c]
/ \ AND \ / = [a] [b] [c]
[b] [c] [b]
a b c a b c a b c
a f f f a f - - a f - -
b - f - AND b f f f = b - f -
c - - f c - - f c - - f
Security Lattice Integrity Lattice Combined Lattice
c e f g h c e f g h c e f g h
c f - - - - c f f f f f c f - - - -
e f f - - - e - f f f f e - f - - -
f f - f - - AND f - f f f f = f - - f - -
g f f - f - g - - - f f g - - - f -
h f f f f f h - - - - f h - - - - f
c e f g h c e f g h c e f g h
c f - - - - c f f f f f c f - - - -
e f f - - - e - f - f f e - f - - -
f f - f - - AND f - - f - f = f - - f - -
g f f f f f g - - - f f g - - - f f
h f f f f f h - - - - f h - - - - f
Integrity Security
[c] [c]
/ \ / \ [h] --> [g]
[e] [f] AND [e] [f] =
/ / | / [e] [f] [c]
[g] / [gh]
\ /
[h]
An Access Matrix Effective Equivalent
a b c d e f g h a b c d e f g h
a f - - - f f - f a f f f f f f f f
b f f - - - - f - b f f f f f f f f
c - f f - - - f - c f f f f f f f f
d f - f f f - - - d f f f f f f f f
e f - f - f - - f e f f f f f f f f
f - - - f - f - f f f f f f f f f f
g f f - - - f f f g f f f f f f f f
h f f f - - f - f h f f f f f f f f
a--c-----h--k m
\ / / \
b--d--f--i--l--n p--q
\ / \ /
e--g j-----o
r--t--v--x--y
\ / \
s--u--w-----z
a b c d e f g
a f - f f - f -
b - f - f f f f
c - - f - - - -
d - - - f - f -
e - - - - f f f
f - - - - - f -
g - - - - - - f
c, d, and g collude a and b collude
a b c d e f g a b c d e f g
c - - f - - - - a f - f f - f -
d - - - f - f - b - f - f f f f
g - - - - - - f ---------------------
--------------------- = f f f f f f f
= - - f f - f f
TCBx TCBy TCBz TCBw
----- ----- -----
S | A | | G |<->| J |
----- ----- -----
C | B | | H | | K |
----- ----- ----- -----
U | C | | E |<->| I |
----- ----- -----
N | D |<->| F |
----- -----
TCB1 TCB2 TCB3
--------- --------- -------------
3 | N | A | | N | A | | N | B | C |
--------- --------- -------------
2 | N | B | | N | C | | N | B | A |
--------- --------- -------------
1 ---| N | A | ---| N | B | ---| N | C | A |
| --------- | --------- | -------------
---------------------------------
X Y X Y
--------------- -------------
|Administrator|<------->| Induction |
--------------- --->| Method |
| -------------
| --------------
------------- --->| Rule Based |<---Rules
| Data Base |<------>| System |<---
------------- -------------- |
--------------- |
|Access Matrix|<--
---------------