3.1 - A Short History of Operating Systems

Copyright (c) 1990, 1995 Fred Cohen - All Rights Reserved

Resource allocation has surely been around since animals first had to choose between hunting and protecting territory and young. People began using systems to allocate resources in very ancient times; crop rotation and saving grain for leaner times are early examples of such techniques. Ever since there have been lines for services, the number of servers dealing with customers has been a subject of resource allocation.

In early mechanical devices, and eventually in the early days of computers, machine time was allocated through sign-up sheets of various sorts. The principle was that two users could not use the same machine for different tasks at the same time; the sign-up sheet was thus an early form of protection that allowed multiple users to share a resource.

Certain operations were given priority over others and could usurp time as needed. In the days of batch processing, each user submitted jobs to an operator, who placed them into first-in first-out queues of differing priorities. Internal funds (commonly called funny money) were allocated to users so they could decide whether faster service on a given occasion was worth more to them than the total amount of time available for their use. As disk storage began to replace tape and card storage, disk space was allocated to different users, and charges were used to determine allocations. Again, the OS was called upon to prevent erroneous programs from infringing upon other users' information, and automated protection became more important.

As operating systems developed, timesharing was created to allow many users to use the same computer over the same period of time. In a timesharing system, the operating system schedules each user's tasks so that each runs for a "time slice". A time slice is usually about 1/60 of a second, but can vary from system to system and under various conditions. As context switching became automated, the OS was again called upon to keep processes from reading or writing other processes' areas of memory and disk. The line printer, card reader, magnetic tapes, and many other devices could not be timeshared every 60th of a second, or all of the output would be garbled, so protection of a different sort was used to allocate these resources.
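
As a concrete illustration, the following C fragment sketches round-robin time slicing of the sort just described. It is a minimal sketch, not taken from any particular system: the process names, the fixed process table, and the 17 millisecond quantum (roughly 1/60 of a second) are all assumptions made for the example.

    #include <stdio.h>

    #define QUANTUM_MS 17      /* roughly 1/60 of a second; an assumption */
    #define NPROC 3

    struct proc {
        const char *name;
        int remaining_ms;      /* CPU time this job still needs */
    };

    int main(void) {
        /* A fixed process table; the names and times are invented. */
        struct proc table[NPROC] = {
            { "editor", 40 }, { "compiler", 55 }, { "spooler", 20 }
        };
        int active = NPROC;

        /* Cycle through the table, giving each unfinished process one
           time slice, until every process has completed. */
        while (active > 0) {
            for (int i = 0; i < NPROC; i++) {
                if (table[i].remaining_ms <= 0)
                    continue;
                int slice = table[i].remaining_ms < QUANTUM_MS
                          ? table[i].remaining_ms : QUANTUM_MS;
                table[i].remaining_ms -= slice;
                printf("ran %-8s for %2d ms (%2d ms left)\n",
                       table[i].name, slice, table[i].remaining_ms);
                if (table[i].remaining_ms == 0)
                    active--;
            }
        }
        return 0;
    }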

As the hardware became more varied and complex, OSs took over tasks that had unnecessarily inconvenienced the ordinary programmer. For example, early programmers had to deal with the physical location of each bit stored on each disk, and the protection system had to determine whether a particular disk location was accessible to a given programmer before allowing access to take place. Advances in hardware design allowed these checks to take place very quickly, and soon the hardware allowed the operating system to run in a protected part of memory so that other programs could not accidentally or deliberately interfere with its proper operation. The interface between the operating system and user programs became the dominant method for accessing physical devices, and eventually user programs no longer accessed physical devices directly. This freed the operating system from having to check every disk access, and allowed programmers to ignore the low-level details of how each device worked in favor of designing better methods for performing computations.

As OSs developed, they began to abstract away the details of how devices operated in favor of presenting a uniform interface to the hardware, accessible through OS calls. A user program calls the OS and requests a service, which the OS provides. Users could then write programs that would run on many different configurations of the same machine, or even on different machines, without any changes to the programs themselves. Along with abstracting away details of operation, OSs provided more abstract forms of protection. Groups of users could be identified by name and granted access to sets of information called files, directories could be protected from prying eyes, and complex databases could be developed with protection at the record level.
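
As an illustration of such a uniform interface, the C fragment below copies data using only abstract OS requests. The POSIX-style open, read, write, and close calls are used purely as a familiar example of the idea (they postdate much of the history recounted here), and the names in main are placeholders.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Copy from one named object to another.  The program never asks
       what kind of device it is talking to; the OS maps these abstract
       requests onto the underlying hardware. */
    int copy(const char *src, const char *dst) {
        char buf[4096];
        ssize_t n;
        int in  = open(src, O_RDONLY);
        int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return -1; }
        while ((n = read(in, buf, sizeof buf)) > 0)
            write(out, buf, (size_t)n);
        close(in);
        close(out);
        return 0;
    }

    int main(void) {
        /* "input.dat" and "/dev/tty" are placeholders; a disk file, a
           terminal, or a tape drive would all work the same way. */
        return copy("input.dat", "/dev/tty");
    }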

Certain policies have come about as a result of theoretical limitations on our ability to control rights and of the historical needs of the military. As computer systems came into widespread use, manufacturers often included protection mechanisms to keep OSs from being accidentally violated. As military requirements became more pronounced and the theoretical limitations became better understood by university and industrial researchers, the military began to commission industrial production of more and more secure machines.

In early systems, nearly all security was based on physical access restriction and the separation of users from each other in time. The time separation was imposed more by the fact that early OSs consisted of sign-up sheets than by security considerations. The concept carried over, and physical and time separation techniques are still in prominent use by the US military [Baker83].

As OSs improved, many users were able to share facilities. The computer security problem in OSs began to depend on the delicate balance between protection and sharing [Shankar77]. The major security considerations in these systems were the user environment, external protection mechanisms, internal protection mechanisms, information to be protected, hardware, OSs, and reliability mechanisms.

The security requirements of users are perhaps the most difficult thing to really determine. In essence, the US military specification states that no user shall obtain or create any information that is not authorized for that user, and that no user shall be denied access to information that is authorized for that user. This of course leaves open the question of what is and is not authorized, but that too is explicitly specified in the military domain [Baker83].

This sort of system tends always to increase the security level of information, and if it were allowed to go on indefinitely, it could eventually result in all information being classified at the highest level of security. This would in turn force users who needed information to be granted higher and higher security clearances, until the clearance level became an ineffective means of protection. This is partially cured by the use of special users authorized to perform 'write down'. These special cases are security bottlenecks that violate the model and therefore make the implementation theoretically insecure; needless to say, such users must be chosen with great care.
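
The policy sketched in the last two paragraphs is essentially the Bell-LaPadula lattice policy: a subject may not read information above its clearance ("no read up") and may not write information below its level ("no write down"), except for the few trusted subjects authorized to write down. The C fragment below is a minimal sketch of such a check; the levels, subjects, and objects are invented for illustration.

    #include <stdio.h>

    enum level { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET };

    struct subject { const char *name; enum level clearance; int trusted; };
    struct object  { const char *name; enum level classification; };

    /* Simple security property: no read up. */
    int may_read(const struct subject *s, const struct object *o) {
        return s->clearance >= o->classification;
    }

    /* *-property: no write down, except for trusted subjects. */
    int may_write(const struct subject *s, const struct object *o) {
        if (s->clearance <= o->classification)
            return 1;          /* writing at or above own level */
        return s->trusted;     /* write down only if explicitly trusted */
    }

    int main(void) {
        struct subject alice = { "alice", SECRET, 0 };
        struct object  memo  = { "memo",  TOP_SECRET };
        struct object  log   = { "log",   UNCLASSIFIED };
        printf("alice may read memo: %s\n",
               may_read(&alice, &memo) ? "yes" : "no");
        printf("alice may write log: %s\n",
               may_write(&alice, &log) ? "yes" : "no");
        return 0;
    }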

The 'Trusted Computer System Evaluation Criteria' (TCSEC) was developed by the DoD Computer Security Center. It defines the 'trusted computing base' (TCB) as the protection mechanisms (hardware, software, and firmware) responsible for enforcing the security policy. There are four hierarchical divisions (A through D, A being the highest ranking), each with numbered classes (except D, the least trusted). Each class specifies requirements for the security policy enforced, for accountability, for assurance that the system operates in accordance with its design, and for documentation. The trusted system ratings are defined as:

Division D - Minimal protection
    Systems that have been found untrustworthy
Division C - Discretionary protection
    Need to know within a single security level
    Cannot reliably separate security levels
    Audit mechanisms required
    C2 provides finer-grained control than C1
Division B - Mandatory protection
    Separate security levels, label information, and protect the labels
    Based on a security model
    Demonstrate that a reference monitor has been implemented
    3 classes:
        B1 - Labeled protection
        B2 - Structured protection
        B3 - Security domains
Division A - Verified protection
    Similar to B, but with additional assurance
    1 class:
        A1 - Verified design
    (verified implementation is anticipated only by a projected "beyond A1" level)

In 1984, the TCSEC was approved by the NSA, and SCOMP, the first "trusted system", was completed by Honeywell and approved within a month. Despite the government's claim that the TCSEC is not to be used as a standard, it has become a de facto standard in industry.

When networking became important to computer use, designers developed abstractions to allow information throughout the network to be treated as if it were local, and to make distant sites appear as if they were across the room. Unfortunately, protection has not kept up with these developments in OSs, primarily because of a lack of interest in the research community, fostered by the chilling effect of government domination of the field and by a lack of adequate research funding. More recently, computer viruses and other related problems have increased awareness of the global nature of the protection problem, and information protection in OS design has again started to become an important issue.