A Complexity Based Corruption Detection Mechanism
Fred Cohen
Copyright (c) Fred Cohen 1985

keywords: cryptography, integrity, program protection, built-in self test, fault tolerance

Abstract

This paper addresses the problem of detecting integrity corruption in a system without "built-in" protection mechanisms. Since no built-in protection is provided, it is incumbent on the programs and data within the system to protect themselves. Any protection mechanism not based on making violations very complex may be trivially attacked, so a complexity-based mechanism is desired.


Introduction

In a system with multiple users, shared information, and general purpose functionality, integrity corruption by viruses [1] and other integrity-corrupting mechanisms is possible. Since this sort of functionality is generally considered useful, it is desirable to find a means by which the integrity of information may be maintained when these properties are not restricted.

This paper examines a method of "self defense" in which each program attempts to protect itself (and perhaps other information) by using self knowledge to detect illicit modification. It is likely that if timely detection is possible, redundancy (e.g. backup tapes) may be used to correct corruption.


The General Method

The basic idea is to demonstrate a technique which causes the complexity of finding a systematic way to create undetected corruption to be very high, and the probability of causing such a corruption by chance to be very low.

Our general method is to use a large set of self test techniques, which can be placed in a large number of ways throughout a system, and which rely on a difficult-to-forge cryptographic checksum for detecting illicit modification, while still allowing legitimate modification. The argument for this general method is as follows: if the set of tests is large and their placement varied, an attacker cannot feasibly locate and disable every test; if the checksum is difficult to forge, corruption cannot be made to pass the tests that remain; and if legitimate modification is permitted, the mechanism does not interfere with normal use. Systematic undetected corruption therefore requires either finding all relevant tests or forging the checksum, and both of these can be made very complex.

The remaining problem is to find a mathematically justifiable technique that fits all of these criteria.


Fundamental Limitations

Before suggesting a specific method, we wish to consider the fundamental limitations inherent in the suggested general method. In the cases of finding classes of tests and adequate cryptosystems, the problems can be circumvented, and general and acceptable specific techniques are proposed later in this paper. In the case of test placement, there seem to be some rather severe problems. The problem of self test in a system that allows legitimate modification appears to be difficult as well.

The Class of Tests

Commonalities in tests might be exploited to detect the presence of a test in a given location. The idea is to come up with a set of tests large enough, and storable in enough different forms, to make detection sufficiently hard. A technique that makes test detection undecidable would be very nice, but we might be willing to settle for less. Note that nearly any commonality may be used for detection, since the probability of a given sequence appearing in another, randomly chosen program decreases very rapidly with the length of the common sequence; if instruction bytes were uniformly random, a particular sequence of k bytes would appear at a given location with probability 256^-k. This is clear from information analysis of software in both source and compiled form, but need not be the case if systems are designed for space minimization.

The Placement of Tests

Tests can be placed anywhere in the system where they will be interpreted and executed as program, act independently of the state information used in normal operation, and run often enough to detect corruption before it spreads.

If we place these tests in areas that are interpreted not as program but as data, they will likely never be executed, and will instead corrupt the information in which they are embedded. It is therefore important that they be placed in interpreted information, and that they act independently of the state information used by that information in its normal activities.

Unless we partition the information being used as data from that being used as program, we cannot guarantee that a program will not examine its own contents and/or modify itself in the course of its legitimate behavior. If we try to partition data from program, we cannot guarantee success unless we restrict the system's functionality, for with general purpose functionality there is no distinction between information used as program and information used as data except in its interpretation. This is most clearly seen in the case of an interpreter (such as Basic) which allows information modified as data by an editor to be used as program when interpreted by the Basic interpreter.

This would seem to imply that placement depends upon knowledge of the intended use of information, and that general purpose programs cannot be perfectly protected. Since any general purpose program can be made to act like a Turing machine, any data entered by the user can be interpreted as a program. Since we cannot rely on that program to preserve the integrity of its own data, and since we cannot know a priori how that data may be used, we probably cannot do any better than to protect programs and data cooperating with the scheme.

We may require that data which is to be modified with integrity must be modified by one of a given set of programs. We may be able to design a compiler that forces checks on the integrity of data files as well as the set of programs able to legitimately access them. The only remaining problem is the placement of these checks within these programs.

If we place tests at the beginning of programs, or at any standard place, they may be easily circumvented by appropriate modification of the code which tests for integrity. An alternative is placement at an arbitrary place, or perhaps more appropriately at one or many of a large set of places within the program. Since determining which section of code is the test may be made arbitrarily difficult, this offers some hope, but we must also consider that placing this code so that it is not executed in every use of the program reduces the probability of detecting a corruption before it spreads transitively to the probability of executing the detection algorithm; a test on a branch taken in one use out of a hundred allows, on average, a hundred corrupted uses before detection.

The placement of the code in each branch of a program may be quite cumbersome, and it guarantees an attacker that some test is placed in every branch. This may or may not be of aid to the attacker, and may or may not be so burdensome as to make the system impractical. Another alternative is to evolve the program so as to include the test, or to evolve the test so as to include the program. The evolution of programs in this way has received little attention in the literature, but it appears to be both feasible and difficult to disentangle [1].

The Cryptographic Checksum

The best we can do in a system which protects itself with complexity is to make the probability of forgery, and of breaking the code in a given amount of time, arbitrarily low. We do this by using a "one way" function which allows us to transform a program into its cryptographic checksum in order to test the program for modifications, but which doesn't allow us to generate a program that produces a given valid checksum. We must be careful that the function is not only one way, but that there is a sufficiently large number of keys available, and that the key used for generating the checksum cannot be used to invert the function.

We suggest a "public key" cryptosystem in which the private key is destroyed unrecoverably at the creation of the checksum. This prevents the possibility of finding that key and using it to generate a new and valid checksum for an invalid program. It also allows us to leave the key publicly accessible (although hidden along with the rest of the self test code) without fear of its eventual discovery and exploitation.

Modifiability

Let us now suppose that a legitimate program legitimately modifies information in a data file associated with one other legitimate program. In order for this change to be considered legitimate by other programs, each must be convinced of the legitimacy of the program making the modification and of its own legitimacy. If other programs are to access the data, each must modify itself to reflect changes in data files. Since each has now been modified, each must verify that the others' modifications were legitimate, and must again modify itself to reflect the new modification of each other. Since they test each other, this procedure must be repeated until either a stability point is reached or indefinitely.

If a stability point is reached, this means that a modification in one of the programs does not require a change to its cryptographic checksum, and thus the checksums for the legitimate and illegitimate versions are identical. If this is extremely unlikely, a stability point will be reached only after a very long time, if at all; and if it is likely, then it is also likely that an attacker's change would be thought legitimate. What this seems to indicate is that a strict limitation on the testing of programs and data by each other must be enforced: we must not form a loop of interprogram tests.

In other words, if all programs are modifiable, at least one program must have sole responsibility for testing itself, and all other related programs must only perform tests on each other in a semi-lattice form with the self-testing program at the "sup". This can be relaxed if we limit the legitimately modifiable portions of the system so that their modification is supervised by legitimately unmodifiable programs. Unmodifiable programs can test each other with mutual testing loops.

In cases where programs do not share modifiable data with other programs, data may also be tested. For cases where sharing of data is important, we can use a single data access program which is tested by all sharing parties, and which has complete control over the modification of all shared data. This program can then use internal tests on all stored data, and thus shared data can be tested without the looping problem. The resulting mathematical structure is a partial ordering with shared data residing only in semi-lattice substructures. For high assurance, increased mutual testing may be used.
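To make the structure concrete, consider a minimal sketch (all names, images, and the use of a modern hash function are illustrative assumptions; the actual scheme hides evolved tests within the programs and uses the cryptographic checksum described below). A self-testing program sits at the "sup", a single data access program mediates all modification of shared data, and verification proceeds down the partial order so that no testing loop forms:

    # Toy sketch of the acyclic testing structure (Python).
    import hashlib

    images = {                        # stored form of each component
        "self_tester": b"self-testing program at the 'sup'",
        "data_access": b"sole program permitted to modify shared data",
        "shared_data": b"data modifiable only through data_access",
    }
    tests = {                         # who tests whom -- no loops
        "self_tester": ["data_access"],
        "data_access": ["shared_data"],
        "shared_data": [],
    }
    expected = {k: hashlib.sha256(v).digest() for k, v in images.items()}

    def verify(name):
        """Check one component, then everything it is responsible for."""
        ok = hashlib.sha256(images[name]).digest() == expected[name]
        return ok and all(verify(t) for t in tests[name])

    print(verify("self_tester"))      # True until any image is corrupted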


A Specific Method

A specific method specifies a class of tests, a means by which they may be placed throughout the system, a checksumming method, and a modification method, all satisfying the above criteria.

The Class of Tests

An arbitrarily large number of programs can be written to generate and compare a given set of data with a stored value by starting with a fairly simple evolutionary program [1] and creating a large number of evolutions. It is in general undecidable whether two evolutions are equivalent. This seems a promising jumping-off point for automatically developing a set of tests from a single test. If additional safety is desired, a large number of versions of the self test algorithm may be used in conjunction with evolution to guarantee that even if a given case were thoroughly broken, other cases would exist.
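As a minimal illustration of the approach (the test body and the transformation are deliberately trivial; real evolutions would apply far richer rewritings [1]), the sketch below pads one fixed test with randomly generated irrelevant statements, so each seed yields an equivalent test with a different byte sequence:

    # Toy 'evolution' of a single test into many distinct forms (Python).
    import random

    def evolve_test(seed):
        """Return source text for one variant of the same test."""
        rng = random.Random(seed)
        body = ["acc = 0",
                "for b in data: acc = (acc * 31 + b) % 65521",
                "result = (acc == stored)"]
        lines = []
        for stmt in body:
            # pad with irrelevant statements so no fixed sequence recurs
            for _ in range(rng.randrange(1, 3)):
                lines.append("junk_%d = %d" % (rng.randrange(10**6),
                                               rng.randrange(10**6)))
            lines.append(stmt)
        return "\n".join(lines)

    print(evolve_test(1))
    print("----")
    print(evolve_test(2))             # equivalent test, different text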

An intriguing variation on this theme for use with the RSA [3] cryptosystem (which is suggested below) is the generation of a special purpose exponentiation algorithm for each of a large number of RSA keys. Since each exponent yields a slightly different algorithm [2], each test program will be different. This can of course be augmented by the use of evolutionary techniques to make each version of the test very difficult to detect. In addition, this prevents attacks in which the checksum for a given set of information is computed by the attacker, is searched for in the machine state, and is modified to fit the desired checksum for corrupt information. Since an attacker cannot easily determine what information belongs to the test program, and the key itself isn't even stored (only an algorithm for computing the effect of its use is actually kept), there is no known way to tell which key is being used.
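The sketch below shows the idea under the simplifying assumption that a plain left-to-right square-and-multiply chain [2] is generated for each key: the exponent is compiled into a straight-line sequence of operations and may then be discarded, so only the algorithm embodying the key survives in the program:

    # Compile an exponent into an operation sequence (Python); the
    # exponent itself need not be stored afterward. Toy values only.
    def compile_power(exponent):
        steps = []
        for bit in bin(exponent)[2:]:
            steps.append("square")
            if bit == "1":
                steps.append("multiply")
        def apply(base, modulus):
            acc = 1
            for op in steps:
                acc = (acc * acc) % modulus if op == "square" \
                      else (acc * base) % modulus
            return acc
        return apply

    power17 = compile_power(17)       # 17 stands in for a real RSA key
    assert power17(5, 3233) == pow(5, 17, 3233)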

The Placement of Tests

We suggest a lattice structure of testability in which all programs test themselves, and some programs test each other. When information must be modified or shared, we suggest an independent program through which all modification must be performed, and which is an 'inf' to all programs with access to the shared data, and a 'sup' to all data shared by them. This allows each program to independently verify the propriety of the modification program.

One placement of tests is performed by a special purpose compiler which has sufficient knowledge about the programs to allow a relatively small number of tests to be placed in any of a relatively large number of combinations of places within the program. Programs will likely have to be restricted in some ways (e.g. no self modification), and all data files used by programs and all sharing behavior will have to be specified at compile time.

A second test placement strategy is the generation of a test algorithm, and the incorporation of the program to be tested, along with a number of irrelevant sequences of instructions, within it. The value of the resulting checksum is computed based on all but the final checksum value, and this value is placed in a location determined at test generation time. Since each test algorithm is different (as described below), each program will have a differently placed checksum. Additional code strands may make it difficult to disentangle independent subsequences of the resulting code into test procedure and program.
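A minimal sketch of this strategy follows (SHA-256 and the fixed 32 byte slot are stand-ins for the RSA checksum described below): the generator picks a slot position for each build, computes the checksum over everything outside the slot, and writes it in; verification repeats the computation knowing the slot:

    # Toy generation-time placement of a checksum inside an image (Python).
    import hashlib, random

    SLOT = 32                         # size of the embedded digest

    def emplace(image, seed):
        """Pick a per-build slot, checksum around it, fill it in."""
        off = random.Random(seed).randrange(len(image) - SLOT)
        body = image[:off] + image[off + SLOT:]
        image[off:off + SLOT] = hashlib.sha256(body).digest()
        return off

    def verify(image, off):
        body = image[:off] + image[off + SLOT:]
        return bytes(image[off:off + SLOT]) == hashlib.sha256(body).digest()

    img = bytearray(b"test code" + bytes(64) + b"program and padding")
    off = emplace(img, seed=7)        # another build would use another seed
    print(verify(img, off))           # True
    img[-1] ^= 1                      # corrupt one bit of the program
    print(verify(img, off))           # False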

Although specific algorithms are not yet available for this purpose, their development appears straightforward from previous work in evolutionary programs [1].

The Cryptographic Checksum

The following cryptographic protocol for creating difficult-to-forge checksums appears to be sufficient for the desired conditions: generate a public key (RSA) key pair; divide the information to be protected into blocks; encrypt each block with the public key and combine the encrypted blocks (e.g. by modular addition) into a single checksum; and destroy the private key unrecoverably. Verification simply repeats the computation with the publicly available key and compares the result against the stored checksum.

Note that since the inverse function is not available, it is infeasible to attempt to generate blocks of plaintext which correctly checksum to any given value. This prevents the attack where a forger forms any desired number of blocks of arbitrary information, encrypts each with the known public key, determines what the last block must checksum to in order to make the final checksum come out right, and then generates a block which checksums to the appropriate value to compensate for the forged blocks' incorrect values.
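To make the protocol concrete, here is a toy sketch with insecurely small parameters (p=61 and q=53 are for illustration only, and would in reality be destroyed along with the private key): each block is encrypted under the public key and accumulated, so forging a compensating block requires computing an e-th root modulo n, which is precisely what the destroyed private key would have allowed:

    # Toy block checksum under the protocol above (Python).
    N = 61 * 53                       # public modulus
    E = 17                            # public exponent

    def checksum(data, block=2):
        total = 0
        for i in range(0, len(data), block):
            m = int.from_bytes(data[i:i + block], "big") % N
            total = (total + pow(m, E, N)) % N    # encrypt and accumulate
        return total

    stored = checksum(b"a program image to protect")
    print(checksum(b"a program image to protect") == stored)   # True
    print(checksum(b"a corrupted image to peddle") == stored)  # almost surely False

With a modulus this small, accidental collisions are easy; real keys would be hundreds of digits long.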


A Simple Variation for Software Protection

The above technique is quite complex, may suffer from poor performance, and may leave a lot to be desired in the general case. In the domain of software protection, a major difficulty is preventing modification of a program for resale under a different name. This simplified variation resolves much of the complexity of test placement within a program by distributing the integrity protection throughout the program so that each routine protects itself from both analysis and modification.

The basic idea is to encode each subroutine so that only it knows how to decode itself into a standard memory area. Since each routine can be made sequential, and all execution strands can be tracked for small enough program segments, the placement of tests within a routine may be made reasonable, and tests may be interleaved with the program. When a subroutine is called, it decodes itself into a standard memory area, thus overwriting the previously decoded subroutine in that area. Data shared by subroutines may be decoded once at initialization and stored in a common area for manipulation.
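A toy sketch of this scheme follows (a simple XOR mask stands in for real encryption, and Python source stands in for machine code): each routine exists only in encoded form, and calling it decodes it into the single shared plaintext area, destroying its predecessor:

    # Each routine decodes itself into one common plaintext area (Python).
    def encode(src, key):
        return bytes(b ^ key for b in src.encode())

    stored = {                        # routines kept only in encoded form
        "double": (encode("def f(x):\n    return 2 * x", 0x5A), 0x5A),
        "square": (encode("def f(x):\n    return x * x", 0x3C), 0x3C),
    }
    common_area = {}                  # the single shared decode buffer

    def call(name, arg):
        blob, key = stored[name]
        src = bytes(b ^ key for b in blob).decode()
        common_area.clear()           # overwrite the previous routine
        exec(src, common_area)        # decode into the common area
        return common_area["f"](arg)

    print(call("double", 21), call("square", 7))    # prints: 42 49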

Since only a small portion of the program is in plaintext at any given moment, many "snapshots" must be taken in order to expose a significant amount of the program. Since each routine is designed to run in the same memory locations, absolute addressing is possible, and relocation of the program thus causes operation to fail. A trace of execution would be needed to determine relative calling sequences, and the problem of determining when decryption ends and execution begins may be quite difficult.

Each routine can be designed to test other routines in their stored form before calling them for execution (in a semi-lattice structure), so that the replacement of a routine is detected by other routines. Since stored routines are unchanging, mutual testing loops may be incorporated where desired. Each routine can also be evolved so as to test itself.

Although this technique does not appear to be as strong as the more complex method, it may prove sufficient for many applications, and further improvement may allow it to be of widespread utility.


Conclusion

The first method appears to be ample for the intended purpose, but it suffers from slow performance in practical use, a very limited domain of applicability, and very difficult self test placement problems. The complexity of detecting and locating a given test appears to be very high. The probability of finding a systematic forgery technique in a given amount of time is at least as low as the probability of breaking the RSA [3] cryptosystem in that amount of time. The probability of creating undetected information corruption can be made arbitrarily small by using sufficiently long keys. It thus appears that this technique is sufficient for some purposes, and that a compiler that produces 'self defending' code may be practical.

The use of the latter method in preventing illicit modification and resale of copyrighted software may be practical, although it does not prevent reuse in the original form. This allows the copyright notice to be forcibly maintained as long as the program operates, and may aid in the detection and prevention of copyright violations.

Both methods offer hope for preventing illicit modification of information, and thus of improving the integrity of software and data stored in computer systems. It is hoped that further work will lead to the practical maintenance of integrity in future systems.

We note that a sufficient amount of corruption can always prevent the detection of the corruption by self test techniques. With these techniques, it is expected that such corruption would prevent operation of programs, and thus the corruption would be trivially detected by the user as denial of services. These techniques only prevent corruption from going undetected.


Further Work

Improvements to the techniques above may afford a more reasonable means of protecting information from modification, and may allow a run time implementation of self test for data files.

The use of semantic information in conjunction with syntactic information in the storage and retrieval of information may make this possible. This is (in essence) the effect of having a limited set of programs able to modify data. The modification programs comprise the semantics associated with the data.

Evolutionary algorithms for interleaving programs are only in their infancy, and much work in this area is expected. Close ties are seen here to biological systems, and a mathematical theory of evolution would be an intriguing work in both domains.

Error detection is sufficient for detection of integrity corruption, but does not allow the correction of errors. Coding theory indicates that error correction should be possible if enough redundancy is used, and little enough corruption is performed to allow this redundancy to act properly.
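As the simplest possible instance, threefold replication with majority vote both detects and corrects a single corrupted copy; proper error correcting codes achieve the same effect with far less redundancy:

    # Toy error correction by replication and majority vote (Python).
    copies = [b"important data", b"important data", b"imposter data!!"]

    def vote(copies):
        """Return the value held by the majority of the copies."""
        return max(set(copies), key=copies.count)

    print(vote(copies))               # b'important data'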

The second technique for integrity maintenance touched on an interesting area called generative program protection. This area is based on the idea that programs can be designed so as to generate code which actually performs the desired function. This is very similar to the genetic code with which DNA produces living beings. The complexity of determining a valid genetic modification to a complex organism is thought to be extremely high; this is the reason that genetic engineering is as yet unable to design a human being to specifications.

Hardware assisted program protection is also possible. If we back away from our assumption that everything is subject to illicit modification, and assume rather that only a very limited amount of the system is protected from corruption, we may be able to apply these techniques in such a manner as to remove all of the remaining problems.


References

[1] F. Cohen, "Computer Viruses - Theory and Experiments", IFIP-Sec Conference Proceedings, July 1984; also in 7th Conf. on Comp. Sec., Aug. 1984.

[2] D. Knuth, "Seminumerical Algorithms", Section 4.3, Addison-Wesley Publishing Company, 1969.

[3] R. Rivest, A. Shamir, L. Adleman, "A Method for Obtaining Digital Signatures and Public Key Cryptosystems", CACM, V21 #2, Feb. 1978, pp. 120-126.

[4] D. Chaum (ed.), "Advances in Cryptology: Proceedings of Crypto 83", Plenum Press, 1984.