Information Warfare Considerations

Dr. Frederick B. Cohen

Part of this analysis is the result of work performed by SAIC's Strategic Assessment Center under a contract for OSD/Net Assessment's study on Information Warfare.

Copyright (c) Fred Cohen, 1993 - All Rights Reserved

Abstract, Introduction, and Overview:

This short paper is a brief examination of a topic that is not yet well defined or understood. Even the term Information Warfare is not widely understood or consistently applied, and to a certain extent, this paper may help to define it.

When I was given this task, I was told (despite repeated attempts to pin it down a bit better) to use my best judgment in all matters related to it. How often do people pay you to say what you think without restrictions other than length? Of course, they didn't rule out retribution!

The only rule I was given was that no classified information was to be used, which was easy, since I have never seen any classified information. I was also given only 10 days of effort to write this paper, and though I am quite well versed in many of the topics discussed and have thought about many of these issues over the last 20 years, I had never before been given an explicit assignment to look at information warfare per se. So you can count on this: anybody with similar experience in information protection (and there are a number of such people working for our major competitors in the world) who wants to know everything contained in this paper can find it out with about 10 days of effort. In other words, all of our major competitors know everything in this paper, and probably a whole lot more.

I was also told to consider the time frame from now until 25 years or so from now, which is perhaps a bit presumptuous. I want to state categorically that I do not know with a great deal of certainty what will happen more than 10-15 years in advance. This is not because prediction is theoretically impossible (our physics tells us that, given the state of the universe today, we can predict the future and the past with accuracy limited only by the uncertainty principle), but because I lack the proper information and computing cycles to make a complete derivation.

Strange as it may seem, I think that this last point is the fundamental problem in information warfare. As limited as it may be, the theory we currently have is quite capable of telling us nearly anything we may wish to know. Our problem is that we don't have sufficient data and analytical capabilities to perform the required analysis.

I know with a great deal of certainty that a sufficiently advanced information-based attack can completely disable the world's armed forces using only a very small amount of energy, time, and other resources. I am also just as certain that we could build a system that could withstand any specific attack with relatively little impact on operational performance or cost. What I don't know is how attackers and defenders will spend their resources, and that drives up the cost of attaining reasonable assurance.

This also points out two different aspects of information warfare (offense and defense) that I won't be explicitly differentiating in this paper. I point this out because most of what I have read on this topic looks at the offensive benefits and gives only lip service to the defensive aspects. I have even read that the US plan is to base information warfare defenses on emerging commercial information protection technologies. As someone who has worked on commercial information protection systems for quite some time, I must say this is a scary thought. As you will read later, the state of information protection in the US today is such that a playful teenager could easily bring down the vast majority of our telecommunications by accident. To reduce the size of our military and hinge its success on so thin a thread is sheer folly.

We know, reasonably well, how to analyze information systems for attacks and defenses, and how to optimize costs while providing good coverage. What we lack is enough skilled analysts to carry out the analysis on the thousands of systems we depend on, the automated systems to make this analytical process efficient, and the support required to carry out this analysis on these systems. The really crazy thing about this whole situation is that doing proper analysis almost always saves far more money than it costs, because we generally find far more cost effective and comprehensive solutions than the designers find without this analysis.

I want to briefly outline the structure of this paper and its stylistic considerations, and then move into my assessment. We begin with a definition, because that's how I work. My definition of information warfare is not the same as others I have seen, but I like it because it covers what I think are the real issues. Next we look at what I call the universal issues. What makes them universal is that they are common to all information systems, and therefore form the basis for a deep understanding of the topic. Next, I discuss what I consider to be the central issues behind selecting an information warfare doctrine and how those issues impact the ultimate outcome of conflicts. I then conclude the main body of this text by describing what I think it takes to win. Those without the stomach for detail had better stop there, because next come the appendices, which are larger than the body of the document by a fair bit, and perhaps more boring to read. I include them because I don't like to make unsubstantiated statements, and I think they help support the conclusions in the main body of the paper.

It seems to be the nature of warfare that it ultimately comes down to them or us at a group level and you or me at a personal level. My style of presentation sometimes reflects this by personalizing pronouns, but that doesn't mean I'm taking sides. The purpose of this paper is to put information warfare, as it exists today and is likely to exist over the next 25 years, into a framework that is useful for considering the issues. It is sometimes easier to think about this in personal terms, but over the next 25 years, everything that applies to us may also apply to them, and vice versa.

With that as the background, here then, is my more detailed assessment.

What is Information Warfare?

I'm going to start with the dictionary. Information is symbolic representation in the most general sense. Warfare is military conflict between opposing forces. I will then define Information Warfare as `the role of symbolic representations in conflict between opposing forces'.

Having defined the term in this way, I feel a need to comment that symbolic representation is and has always been at the heart of conflict, for conflict begins with different viewpoints (interpretations of information), is promulgated with ignorance and propaganda (inaccurate information), is enacted in a fog we wish we could clear (a lack of information), and is ultimately settled by annihilation or truce (a new understanding). Furthermore, some components of information warfare have been practiced for as long as wars have been fought, and in a fashion very similar to those we still see today. Over 5000 years ago, there were spies, well defined command and control structures, supply and logistics systems, documented strategic planning, and mechanical cryptographic systems.

Numerically inferior forces with informational advantage have historically dominated in military conflict because of what is now called the force multiplier provided by that advantage. For example, better battlefield intelligence and communications lead to a fighting pace and efficiency that often overwhelms an enemy, better strategic knowledge leads to better-directed weapons design, morale depends on the availability and content of information from home, and psychological operations center on impacting the enemy's human information processing. These information factors and many others have had significant impacts on the outcome of wars from Biblical times through to today, and they will likely continue to impact warfare for the indefinite future.

In the same way as the information sciences have greatly impacted warfare, warfare has had a tremendous impact on the information sciences. Much of the development of information science resulted directly from military applications. The first wheel ciphers were a rudimentary form of automatic calculating machine and were developed for practical use in the Civil War. Many telephonic developments early in this century were directly related to World War I. Shannon's information theory was directly related to his work in World War II on cryptanalysis. The first computer applications were in developing firing tables. The modern micro-computer was, in large part, due to the needs of the US space program in the space race (NASA claims it was responsible, the satellite and missile people claim they were responsible, and I don't know which, if either, is true, but we'll keep the example for now anyway), which was and still is a form of warfare (I seem to recall LBJ talking about the high ground of space). The ARPA-net was the forerunner of modern computer networks.

In conclusion, it is clear that information warfare has existed as long as warfare has existed, and that the two are inseparable. It is only the current exploitation of information technology in weapons and support systems that forces us to consider the issue more directly today than we have in the past.

What are the Universal Issues?

To understand the issues, we have to understand the nature of information systems, their weaknesses, their strengths, and their other properties. These issues are essentially universal in that they always have and likely always will be factors in our understanding of and our ability to deal with information warfare.

I want it to be clear that when I talk about an information system, I am talking about any system that processes any form of information. That goes from people to light switches. All of these systems have things in common implied by the fact that they process information.

It seems to be the nature of information systems that they may deny services that are supposed to be provided; allow for or facilitate corruption of information that is eventually used; make accountability for actions unreliable; and/or permit the leakage of usable information. These seem to be the major issues in information protection, and thus form the goals of attackers and the concerns of defenders. Let's look at this in more detail.

The goal of information protection then seems to be maintaining availability, integrity, accountability, and confidentiality; while the goal of attacking an enemy's information infrastructure is to cause denial, corruption, repudiation, and leakage.
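The pairing of protection goals with attack goals can be written down as a simple lookup. This is only an illustrative sketch of the taxonomy above; the names are my own, not part of any standard:

```python
# Illustrative pairing of each protection goal with the attack
# outcome that defeats it, as described in the text above.
# All names here are my own, purely for illustration.
PROTECTION_VS_ATTACK = {
    "availability": "denial",
    "integrity": "corruption",
    "accountability": "repudiation",
    "confidentiality": "leakage",
}

def attack_goal(protection_goal):
    """Return the attack outcome that defeats a given protection goal."""
    return PROTECTION_VS_ATTACK[protection_goal]
```

The symmetry is the point: every defensive objective has a dual offensive objective, so an analysis of one side is implicitly an analysis of the other.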

Issues Today

In light of the universal issues and my ideas of how to win, we might reasonably examine today's situation. A thorough examination would, of course, be quite lengthy, and beyond the scope of a short document such as this. Instead, I think I can give examples from each of a wide variety of systems that will help point out, in some intuitive way, how these issues concern military operations.

Weapons in Action:

We use automated information systems to control weapons when they are in action. In modern warfare, we have made our weapons so fast and so precise that no human can control them adequately. To many, this is the extent of the issue, but there is certainly an ongoing debate about when and where to take people out of the loop. The movie WarGames gave a good example of why we shouldn't take people out of the loop entirely. I should note that, for the most part, the technical information provided in that movie was accurate, with the truly notable exception being the computer's final conclusion. Right or wrong, no current computer (as far as I am aware) is yet able to derive that sort of conclusion from that sort of basis.

Our automated weapons systems have made their share of mistakes. The Falklands war demonstrated how an automated missile detection system could falsely identify an incoming missile as a friend and thus not bother to defend against it. Perhaps the designers have now decided that even friendly missiles should not be allowed to blow us up. The computerized `Risks' forum has reported numerous instances wherein automated systems with various problems have behaved less than optimally: the aircraft control system that flips the plane upside down when crossing the equator; the missile that decides the Sun is jet exhaust; the French autopilot that has a propensity for being involved in crashes over mountainous terrain; and the list goes on.

Of course people aren't perfect either, and the real problems begin in earnest when we mix automation and people together. People come to rely on the automation so much that they do things they would never have done without automation. People also have a tendency to provide input that automation can't effectively deal with. Automation is designed by people, and the imperfections of the designers reflect in the automation.

One thing is certain. Information drives the effectiveness of modern weapons systems. From intelligence, to targeting, to in-flight controls, to impact behavior, to assessment of damage, information systems drive the process, and the exploitation of information for attack and defense is now a critical factor in military conflict.

Command and Control of Forces:

The use of automated information systems has dramatically changed the way we behave on the battlefield. We now expect that we will fight 24 hours a day, 7 days a week, in all conditions, and that the pace of the action will be such that it is impossible to even keep track of it without computers. We also expect that the ability to perform these command and control functions will differentiate the winner from the loser.

In the tactical arena, properly operating information systems form the core of our ability to fight, and without them, we may be reduced to the operational level to which Iraq was reduced in the Gulf War. Unfortunately, by using these capabilities in front of the world, we can be practically certain that the entire world is working on ways to defeat them, and that many of our friends and enemies have found or will soon find effective defenses.

It is a virtual certainty that against an enemy like China, Japan, Germany, England, or any technically oriented enemy, our command and control system would be rendered far less effective than it was against Iraq. I believe that even a nominal attempt to disrupt our command and control network would have dramatically impacted our fighting ability, slowing the pace, reducing force effectiveness, increasing the time to win, increasing casualties, increasing friendly fire incidents, and causing substantial political fallout. Perhaps a major factor in winning this conflict was the lack of a good national information science infrastructure in Iraq.

This brings out another important issue. You can buy weapons, but you can't buy innovation; you have to do it. In the modern tactical arena, real-time innovation is a real advantage. In order to gain this advantage, your commanders must understand the systems they have available and make good field judgments about how to innovate with them. They may depend on their technical staff for implementation, but they cannot depend on their technical staff for creative ideas about how to solve tactical problems with unique applications of available resources.

Supply and Logistics:

The modern supply system is automated to the point where subtle corruptions in computers could destroy our ability to fight. Knowing how, when, and where materials are used indicates how many forces, what they are doing, and when they intend to do it. Making the supply system unavailable quickly cripples a military. Plausible deniability leads to black markets, and all of the criminal activity associated with them.

The inherent independence of weapons systems helps limit the effect of attacks against them, and in tactical operations, human interactions still play a major role; but in supply and logistics, global coordination is necessary for efficiency, and the net effect is that the vulnerability is much broader. Many people have to have access to the details about supply; they are not all cleared; they are not necessarily even citizens of the same country; suppliers are global and may not be bound by the national interests of the combatants; automated information transfer is common; and these systems are not typically designed for protection.

The networking situation today complicates these issues even further. We are in the process of dramatically increasing our networking of systems, particularly in the supply and logistics areas, and the new global military network designers are only beginning the process of considering the parameters of interest for such interconnections. We commonly depend on public networks for links, and in many cases, link encryption is not being used to secure these channels. With the global networking situation today, it is possible that your supply and logistics communications could pass through nodes in enemy territory, and there are no controls in place to cover this issue. The public networks we use have shown substantial weakness both in the presence of relatively non-malicious attackers, and in accidental incidents that have brought down substantial portions of the networks for substantial periods of time. We have no good reason to rely on these systems in time of conflict, and little of the expertise needed to regain the leadership role we once had in these areas.

Maintenance of Equipment and Facilities:

In World War II, any Joe could perform some level of maintenance on field weapons, jeeps, tanks, and even some planes. In modern warfare, very few systems are field maintainable. The growing dependence of systems on computers has reached such extremes that it is probably impossible to start a jeep without a working computer. The advantages of the newer systems from a standpoint of reliability, availability, accuracy, and performance are clear; but the maintenance problem has become far more difficult.

Modern aircraft maintenance depends on computers in the engine, computers in the wings, computers in the fuselage, computers in the cockpit, computers on the ground, computers on plug-in boards, computers linked to supply, and computerized testing and diagnosis. Almost all of these computers are poorly protected against corruptions that could allow planes to fly when they are unsafe, leakage that could tell the enemy the current maintenance status, or denial that could make maintenance impossible over extended periods.

Aircraft maintenance currently uses one of the most automated maintenance environments, but we are rapidly moving toward similar levels of automation for our other systems, and this represents both an advantage when it works, and a disadvantage when it doesn't. Add to this the fact that few if any of the line maintenance people know how to maintain the computers that help them perform maintenance, and you have a potential disaster on your hands.


Medical Systems:

As in the maintenance of our equipment, the medical treatment of combatants has become more and more dependent on automation. We place our medical crews in close proximity to the battlefield, and the effect is far better survival rates for the wounded. With modern battlefield medical care, over 90% of the deaths in the Civil War could have been avoided, as could the majority of amputations. The natural side effect of this level of medical care is that troops return to battle far sooner and fewer soldiers are needed to fight the same battles. The effect on morale is also very positive, since nobody wants their children, their loved ones, or themselves to be cannon fodder, no matter how important the conflict.

The same issues that apply to maintenance of equipment and facilities apply to medical systems, except that people are a lot more dear than jeeps, and medical equipment tends to be more delicate and more complex than engine maintenance equipment.


Intelligence and Counterintelligence:

Modern intelligence depends on a very wide range of signal systems. Human, electronic, optical, sonic, and other gathering technologies must be kept secret in order to prevent their elimination as sources. Systems must be kept operational, and their status must be kept secret in order to prevent exploitation. Analysis is often computationally intensive, and the accuracy of analysis can have dramatic impacts on decision making in conflict. The results of analysis have to be exploited in such a way as to lead toward victory without giving away the information source. In the Battle of Britain, radar was touted as the indicator of attacks, while it was the breaking of the Enigma cipher that really won the day. If the Germans had known this, they could easily have won air supremacy.

Current intelligence technologies exploit only a small portion of the exploitable information sources, and the selection of what to exploit is critical to effectiveness. In the current information intensive environment, the bandwidth of digital communications is so high that there is no hope of reading and understanding it all. If we controlled communications today like we controlled letters in WWII, we would destroy the international financial system, and yet it is simple for a spy to send a few bits of information by making an electronic funds transfer. One of the reasons that cutting a country off from international trade is so effective is that it dramatically limits their signaling capabilities, which makes the intelligence reporting problem, foreign destabilization, propaganda, and other information warfare areas far harder for them while making our gathering problem far easier.

Along with intelligence comes counterintelligence. In this arena, the ability to corrupt the integrity of information sources is key to success. Whether it be creating life-like simulations of troop movements to cover actual operations, jamming signaling systems to produce denial, corrupting the signals sent to or from the gathering mechanisms to forge desired observations, exploiting human sources with false or misleading information, creating systematic observable leaks to allow attacks to be detected and misdirect enemy operations, or any other similar technique, selective corruption is the key.

A final note is in order here, particularly in light of the Gulf War. It may well be that the premier intelligence gathering agency in the world today is CNN, and the battle of the news networks may be worth studying to understand how intelligence gathering should be done. In the live coverage of this conflict, both the Iraqi and US military commands claimed to be watching CNN for the details. The live feed from Baghdad during the initial attack may have been the best source of battle assessment available at that time, and I recall the Pentagon briefer responding to questions by telling the media that he was watching CNN for details. In the Soviet power shift, the competition from CNN forced the other networks to become extremely aggressive in their coverage, and as a result, we saw live feeds from Moscow that probably provided better observations of the moment-to-moment situation than either of the parties to the conflict was able to generate. In Tiananmen Square in China, we had news coverage of people face to face with tanks that I seriously doubt any other intelligence agency could have come up with.

Local Support Systems:

In many cases, military operations depend on non-military support systems. Most US government telecommunications goes through common carriers with commercial interests. Electrical power, even in field operations, often depends on the local capabilities. Water and sewage are normally derived from local sources. Local roads, bridges, and tunnels are used wherever possible. Natural resources such as rivers and mountains are exploitable. In short, military operations use whatever they can from the environments they operate in. In order to operate in these environments, systems have to be connected, and these connections create opportunities for exploitation.

Scorched earth is one way to prevent the reuse of local support systems, but there are also opportunities for exploiting the connection of communications, power, and other information related services to existing infrastructure. For example, we can leak information from power supplies entering non-TEMPEST computing environments, we can cause many systems to operate or fail by providing properly conditioned electrical power, and we can tamper with equipment left behind so as to plant corrupting influences in reused facilities. We can also have dramatic impacts on local support systems that exploit automatic controls of various forms, and selective denial can often be attained via precision attacks, making long-term restoration far easier.

National Infrastructure:

Battlefield equipment doesn't operate in a vacuum. It must be created, supported, maintained, upgraded, and enhanced by an infrastructure, and that infrastructure is every bit as important to success in any long term conflict as the most rapidly deployed systems. Destruction of infrastructure may be far easier to accomplish than destruction of the military forces it generates. Boxers punch in the stomach to keep their opponent from breathing, and eventually, this allows them to prevail where a direct attack would fail.

Most information infrastructures are critical to operations over a time frame of days to weeks, are very poorly defended, and are prime targets for attack. It would be an easy matter to disable the electrical and telecommunications infrastructure of most industrial nations, and the resulting impact on national infrastructure would be devastating. In the US, most financial transactions would grind to a halt in seconds, most military support communications would be disabled, and although radio would leave substantial survivability, much of our supply and support system would quickly disappear.

National Industrial Base:

Many of the most critical information technologies are manufactured in Taiwan (over 80% of PCs, according to their government literature), and a recent fire in a glue plant in Japan caused a dramatic increase in the costs of integrated circuits because that plant was the sole supplier of the resins used in integrated circuit manufacture. To be a leader in information warfare, it may be critical to be a leader in information technology as a whole, from start to finish.

Some important factors give an advantage to providers of information technology. Among them are the ability to place covert hardware on the integrated circuits that drive the process, deep engineering knowledge of how and why systems work, the ability to place covert software mechanisms in the operating environment, the ability to include subtle systemic weaknesses, the ability to gain financial advantage over competitors, the ability to control the information technology of competitors, the ability to assure leadership in key areas, and the ability to create custom implementations for military applications without excessive expense.

The vast majority of national industrial bases are poor in critical areas of information protection, and in this particular area, the US is far behind competitive nations and falling further behind all the time. This traces back to the fundamental issues of how we educate our children, the computer systems we tend to use, the government's involvement in promoting or preventing the use of key technologies, the costs associated with protection, the lack of computer crime reporting, and many other issues. To succeed in this key area, nations need to make fundamental changes in the way they view national interest.


Strategy and Doctrine:

In the strategic arena, information warfare is a key element in deciding which technologies to work on, how to apply them, which technologies opponents are working on, and how to defend against them. Current US military doctrine seems to emphasize the ability to use our information systems to enhance command and control, and to destroy the enemy's command and control by disabling their information systems. If done successfully, the effect is a force multiplier, and the doctrine therefore asserts that we need fewer forces to accomplish our goals.

To move toward better and faster communication between and coordination of forces, to consolidate global information sources into instantaneous and usable battlefield information, to reduce lifecycle costs, and to allow flexible distribution of function, we are in the process of standardizing all of our information networks and systems.

I may be the lone dissenting voice in a choir of believers, but to me, the combination of a smaller force and a highly homogeneous information structure is a recipe for disaster. The problem is that any systemic flaw that can be exploited by an enemy means a total collapse. A collapse doesn't have to last very long in modern warfare for a small force to be wiped out. A few hours will likely do, and in some cases a few minutes are enough. Here are a few examples that I think have serious potential.

The list goes on and on. By standardizing too far, we create systemic vulnerabilities, while not standardizing far enough leads to gross inefficiency. We need to find a middle ground which gives us the advantages of standardization without the risks of total collapse and we need to have a backup position that allows us to survive in case systemic flaws cause us to lose our command and control advantages.


Interconnection of Systems:

When we connect two secure systems together, we do NOT necessarily get one larger secure system. In fact, unless we plan very precisely, we will likely end up with two insecure systems. There is no adequate theoretical basis for resolving these problems in a systematic way as of this time, and the limited theoretical results we have are not being applied.
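A toy sketch may make this composition problem concrete. The example below is entirely hypothetical (the system names and flow rules are my own): each system, checked in isolation, allows no path from its sensitive data to a public outlet, yet interconnecting the two creates exactly such a path.

```python
# Toy illustration of the composition problem: information flow as a
# directed graph, with security checked as reachability. All names
# are hypothetical and chosen only for this sketch.

def flows_exist(edges, src, dst):
    """Simple graph reachability: is there any flow path from src to dst?"""
    seen, frontier = set(), [src]
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        frontier.extend(b for a, b in edges if a == node)
    return False

# System A allows secret -> shared; System B allows shared -> public.
system_a = {("A.secret", "A.shared")}
system_b = {("B.shared", "B.public")}

# Each system, checked alone, has no secret-to-public flow.
assert not flows_exist(system_a, "A.secret", "B.public")
assert not flows_exist(system_b, "A.secret", "B.public")

# Interconnect the systems: A's shared channel feeds B's shared channel.
combined = system_a | system_b | {("A.shared", "B.shared")}
assert flows_exist(combined, "A.secret", "B.public")  # a new leak path
```

Neither system exhibits the leak on its own; the vulnerability exists only in the composition, which is why certifying components individually is not enough.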

The US is now in the process of integrating all military systems so that they can interoperate more effectively, because interoperation improves the ability to coordinate activities.

The net effect can be the strongest military coordination system ever fielded, but it can also be a disaster unless we consider these issues more deeply than we have considered them in the past. When the fog of war befalls our information infrastructure, we had better have a workable mechanism for restoring clarity in short order, because the enemies we face may have superior numbers and more brute force, and our informational advantage could easily be neutralized with a well-thought-out attack. Consider this:

We must be able to resolve the brittleness issue in our critical systems and their interconnections, and we must get control over the use of our own information infrastructure, or we will literally kill ourselves by creating a dependency on systems that are fundamentally vulnerable and beyond our control.

Systemic Issues

Another dimension of the vulnerability that exists today is related to the way we design systems. The whole engineering process ignores protection issues, and as a result, there are endemic flaws in the way we design systems that inexorably lead to protection problems. I have a short list of issues in this dimension that I think may be revealing. The original list was created over a 15-minute period as part of a presentation on vulnerabilities, and is limited to three examples per issue. If I seem to be concentrating a bit too much on Unix, it is because of the tendency of DoD applications toward so-called open systems (a.k.a. Unix), and not because Unix is particularly weak. In fact, some of the more secure Unix systems are among the most secure systems ever to exist.

Conceptual Flaws: The conceptual framework used to design information systems is often inadequate to meet the protection needs. The TCSEC, which specifies the requirements for NSA-approved computer security systems, has no requirement regarding denial of services, and yet in most field operations, denial of services is more devastating than leaking secrets. Fuzzy logic is used in system design because it is easily related to the words people use to describe their thought process, but most designers fail to note that it can produce palpably inconsistent conclusions if applied more than two levels deep. Many `AI' projects have been sold to the government, but the designers often fail to mention that they are proposing to solve undecidable problems. Failed concepts lead to failed systems, and we have to get our concepts right first, or the information warfare arena will be full of duds.

Design Flaws: Designers of systems often fail to anticipate or react to the fundamental issues that face physical systems. Hughes recently fielded a space-based system which had an analog input timing error. They had calculated that the likelihood of this error affecting operations was only about 1 in 128, and rather than design a system with a lower likelihood, they risked it and lost. In the 1970s, AT&T fielded a set of underwater repeaters for their transoceanic long lines, and after quite some time, found that there were data-dependent hardware faults that caused them to crash. It was only when a maintenance person happened to be there to observe one such failure that they tracked down the source. Some video displays have relatively low emanations, which cuts down the TEMPEST impacts of using commercial video display units, but systems meeting the current FCC specification still emit enough to be picked up reliably by people using about $2,000 worth of hardware at distances of several hundred feet.

Hardware Flaws: The first models of the TRS-80 personal computer had hardware floating point errors that produced wrong results in certain floating point computations. No error messages were produced, just wrong answers, which obviously introduces an integrity problem. I programmed the PDP-11/40E at one point in my life, and in this particular machine, it was possible to implement microcode that would do permanent hardware damage. A malicious program would have a field day with this. Cryptographic equipment is not always designed to eliminate all attacks. One example is the signature given off by many such systems that indicates when signals start or change. By selective jamming, communications can be destroyed, and traffic analysis is sometimes possible as well.

Operating System Flaws: Operating system examples abound, but I will choose some of the classic blunders. In early timesharing systems (and occasionally in not so early ones), the swapping mechanism used to allow programs to share memory space swapped itself out of memory, leaving a situation where it was not resident in memory to swap itself back in, and thus no more memory management was possible. Unix, a popular operating system that is becoming a de facto standard in the DoD, has many configuration-specific parameters that can be incorrectly set, and in many cases, they allow the entire system to be shut down by a program only a few bytes in length. Of course the greatest all-time catastrophe to information protection was the advent of the DOS operating system, which was specifically designed to make protection impossible and then disseminated to 90% of all computer users globally as the common standard. The designers once told me in person that they did it on purpose, and that they intended to do the same thing for OS/2 - which they did!

Computer Language Flaws: Computer languages have had their share of problems as well. The Thompson `C' compiler designed for the NSA with an embedded login attack is a classic. This early Unix compiler was designed to allow Thompson to login to any Unix system, regardless of changes to the password file, by recognizing and placing a Trojan horse in the login program at compile-time. In addition, the C compiler was set to propagate this bug into itself whenever it was recompiled. Please note that virtually every existing C compiler was compiled either directly or indirectly by a Thompson C compiler, and that bug could be present in Unix systems currently being fielded. The APL language is often called a read-only language because the complexity of even a few lines of code is beyond the ability of most humans to understand. It would be an easy matter to place a Trojan horse into an APL program, and even the original designer probably couldn't pick it out without a lot of effort. Another classic compiler problem that I have encountered is the optimization of code right out of a program. Sometimes, code optimizers have design or implementation errors and don't realize the side effects of a section of code. I have had subroutines that worked in test compilations and failed after production compilation because of this problem.
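The self-propagation mechanism is simple enough to sketch. What follows is a toy illustration, not Thompson's actual code; every name in it is hypothetical, and "compilation" is reduced to passing source text through with a payload spliced in.

```python
# Toy sketch of a self-propagating compiler Trojan horse (all names are
# hypothetical; this illustrates the idea, not Thompson's actual code).
BACKDOOR = 'if password == "magic": grant_access()'

def compile_source(source):
    """A 'compiler' that recognizes two special targets."""
    if "def check_login" in source:
        # Target 1: the login program silently gains a master password.
        return source + "\n" + BACKDOOR
    if "def compile_source" in source:
        # Target 2: recompiling the (clean) compiler source re-inserts
        # this very recognition logic, so the Trojan horse survives even
        # though it appears nowhere in any source file.
        return source + "\n# [recognition logic re-inserted]"
    return source

clean_login = "def check_login(user, password): ..."
binary = compile_source(clean_login)
assert BACKDOOR in binary              # the backdoor is in the "binary"...
assert BACKDOOR not in clean_login     # ...but absent from the source
```

The unsettling property is the second branch: inspecting the login source, or even the compiler source, reveals nothing, because the attack lives only in the compiled compiler.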

Library Flaws: Library routines are commonly used by many programs, and errors in these routines are particularly critical because of their widespread impact. Error accumulation in floating point calculation is often a problem to consider in a numerical analysis application. Even very well designed libraries that have been in use for a long time require analysis before application in order to show the error impacts of computation on final results. Almost no production programs in use today were implemented with these errors considered. `C' library functions with the same name sometimes operate differently under different compilers. This makes the design of portable code quite complex; it often doubles or triples the total code size, and it introduces subtle errors that are not often caught in initial testing at each site. Input and output are also non-standard, and it is quite common to have to make special adaptations for IO libraries that operate differently on different systems. The DoD standardization on Ada does not give me great comfort either, but that is a different story altogether.
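The error-accumulation point is easy to demonstrate in any binary floating point system: summing 0.1 ten thousand times does not yield exactly 1000, because 0.1 has no exact binary representation and each addition rounds. An error-compensating summation routine gives the correctly rounded answer.

```python
import math

# Naive accumulation: every addition rounds, and the rounding errors pile up.
total = 0.0
for _ in range(10000):
    total += 0.1
print(total)          # close to 1000, but not exactly 1000.0

# An error-compensated sum (math.fsum) tracks the lost low-order bits
# and rounds only once, at the end.
compensated = math.fsum(0.1 for _ in range(10000))

assert total != 1000.0
assert compensated == 1000.0
```

A library that silently uses the naive loop is "correct" for any single addition yet wrong in aggregate, which is exactly the kind of flaw that only an error analysis of the whole computation will reveal.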

Application Flaws: Applications are often fraught with errors even when they are implemented on relatively error-free platforms. It is common for input and output routines to ignore error conditions and treat them all as the same error (failed). Most applications written for use on timesharing systems ignore the fact that things can change between steps in a program. Bounds checking is often turned off for performance reasons in production environments, and this often makes it possible to overflow an input buffer, causing alterations to other data or even the memory resident portion of a program. Even the order in which we perform calculations can affect their results because we usually use finite arithmetic fields and improperly handle overflow conditions. 80+80-70 may not produce the same result as 80-70+80 in a system which has undetected overflow at 100!
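The order-dependence claim is easy to reproduce. One plausible reading of "undetected overflow at 100" is a machine whose results silently saturate at the limit (pure wraparound modulo 100 would in fact agree in both orders, since modular addition is associative); under that assumption:

```python
def sat_add(a, b, limit=100):
    # Hypothetical machine: results silently clamp at the limit, and no
    # overflow condition is ever reported to the program.
    return min(a + b, limit)

one_order   = sat_add(sat_add(80, 80), -70)   # 80+80 clamps to 100; 100-70 = 30
other_order = sat_add(sat_add(80, -70), 80)   # 80-70 = 10; 10+80 = 90

assert one_order == 30
assert other_order == 90
assert one_order != other_order
```

Two algebraically identical expressions, two different answers, and no error raised anywhere: the application has no way to know it was wrong.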

User Code Flaws: Even in systems without the usual user accessible programming languages, we can have examples of user code that cause problems. Most current database systems, spreadsheets, and even word processors have general purpose user-programmable macros. As we noted earlier, any system that allows programming and sharing is susceptible to viruses, and there have been computer viruses written in spreadsheet and database macros. The Unix `sh' command interpreter is well suited to producing spoofing attacks, wherein a user walking up to a terminal believes they are typing at the Unix login prompt while the spoofing program records their user ID and password and then allows a normal login after an appropriate error message. In a simple test in 1985, I captured the passwords of all of the users of a timesharing system in a few hours with this attack, and nobody suspected anything. In multiprocessing environments, paging monsters can easily be created to bring system performance to a near standstill.
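The mechanics of the spoof take only a few lines. The sketch below uses hypothetical prompts and a simulated victim; the point is how little machinery the attack needs.

```python
captured = []  # the spoofing program's private log

def spoofed_login(read_line, write=print):
    """Imitate the system login dialogue, record the credentials, then
    fail with the routine error message so the victim assumes a typo
    and logs in normally on the next attempt."""
    write("login: ")
    user = read_line()
    write("Password: ")
    password = read_line()
    captured.append((user, password))   # quietly recorded
    write("Login incorrect")
    # ...then exit, leaving the victim at the real login prompt

# Simulated victim session (names are made up for illustration):
lines = iter(["alice", "sesame"])
spoofed_login(lambda: next(lines), write=lambda s: None)
assert captured == [("alice", "sesame")]
```

From the victim's side, the dialogue is indistinguishable from a mistyped password, which is why nobody suspected anything.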

Usage Flaws: Even information interpreted purely as data can cause problems in most modern computer systems. The most common integrity problem appears to be errors and omissions, and this follows the old saying: `garbage-in, garbage-out'. This is of course nonsense - we should be able to detect most of the garbage coming in and produce an appropriate repair or response to clean up the mess. Inconsistency is common in modern databases; most don't even check the state against a zip code to find the simplest data entry errors. File format errors and the translation between different coding schemes still give us problems. I get a monthly tape to translate from EBCDIC format to ASCII format, and the people who generate the tape consistently claim that a `signed' number is actually an `integer'. The difference is quite subtle - the last digit has an extra bit that indicates the sign, but try using the wrong data conversion routine, and see what you get out.
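The `signed' versus `integer' confusion can be illustrated with EBCDIC zoned-decimal data, where the zone nibble of the last byte carries the sign (conventionally 0xC for plus and 0xD for minus). The byte values below are made up for illustration.

```python
# Decode an EBCDIC zoned-decimal field, honoring the sign nibble
# carried in the high half of the LAST byte (0xC = +, 0xD = -).
def zoned_decimal_to_int(raw):
    digits = [b & 0x0F for b in raw]             # low nibble = decimal digit
    value = int("".join(str(d) for d in digits))
    sign_zone = raw[-1] >> 4                     # high nibble of last byte
    return -value if sign_zone == 0xD else value

# Digits 1, 2, 5 with a D-zone on the final byte: the field means -125.
raw = bytes([0xF1, 0xF2, 0xD5])
assert zoned_decimal_to_int(raw) == -125

# A conversion routine that treats it as a plain unsigned integer gets 125:
assert int("".join(str(b & 0x0F) for b in raw)) == 125
```

One bit pattern, two readings, and a sign error of the worst kind: the wrong answer is a perfectly plausible number.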

Administrative Flaws: Systems administration is an area ripe for enhancement. Many systems are still shipped with widely known default passwords (although the number of these is finally and mercifully dropping). Most systems have default operating system settings that permit numerous attacks, and every system I am aware of without a mandatory access control policy presents an essentially impossible administrative task in determining the proper protection settings for files. As an example, a typical Unix system (i.e. the one-user Unix-based PC I am currently sitting in front of) has over 70,000 files with over 10 protection bits per file. That's over 700,000 protection bits that have to be set right in order for the system to operate properly. Despite all of these protection bits, Unix doesn't come with substantial tools to facilitate proper settings or detect improper settings.
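A sketch of what even one such tool might look like: walk the file tree and flag files that are world-writable. The policy checked here is just one example; a real administrator would need dozens of such checks, each visiting every file.

```python
import os
import stat

def audit_tree(root):
    """Walk a directory tree, counting files and flagging any whose
    mode bits make them world-writable."""
    total, flagged = 0, []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            total += 1
            mode = os.lstat(path).st_mode
            if mode & stat.S_IWOTH:        # the world-write permission bit
                flagged.append(path)
    return total, flagged
```

Multiply this one pass by the number of distinct policies worth checking, and then by 70,000 files, and the scale of the administrative task becomes clear.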

Procedural Flaws: Procedures also lead to many problems. In the case of the AIDS diskette sent to some 30,000 readers of a PC magazine, something like 10,000 institutional users with specific procedural policies in place against using outside disks used these disks and sustained damage. It is also commonplace to have backups stored at the same physical location as the system they back up, which means that the most common events (i.e. fire, flood, etc.) are likely to destroy both the systems and the backups intended to be used to restore them. Similarly, termination procedures in most organizations do not automate the process of removing access to information systems. It is quite common for people to retain computer access over global networks for several years after leaving a position.

The after-the-fact approach we commonly take to protection has cost us a lot of money, and places us at great risk. We must find ways to incorporate protection into our engineering processes at every level if we are to eliminate the systemic protection problems we see today and do so at an affordable cost. By building protection in, we not only save money, we get better protection.

The Doctrine of Information Warfare

Military planning starts with doctrine, which drives strategy, operations, and tactics. The doctrine of information warfare is, in essence, that all other things being equal, informational advantage is sufficient to win. One obvious extension of this doctrine is that with increased informational advantage, fewer forces are required to win, and this is certainly borne out by history.

The clearest and historically best supported way this advantage works is in the area of command and control, where sufficient speed and agility leads to a condition where the enemy is unable either to attack or to defend effectively. This is what happened in the Gulf War, when coalition forces were in constant communication and had essentially full knowledge of Iraqi forces, and Iraqi forces were effectively cut off from their leadership and therefore unable to coordinate enough to move or shoot. Somewhat less obvious, but equally important components of informational advantage lie in other areas of armed conflict, ranging from the ability to supply troops to the ability to determine which weapons to develop.

Since today's information technology dramatically decreases the cost of attaining informational advantage compared to other forms of advantage, since the United States has some special technological capabilities in this area, and since limiting cost is increasingly important to the US economy, it makes sense that this advantage has become a central element of US doctrine. The natural extension of this (or any other) military doctrine is the development of specialized systems and technologies which support the implied style of operations and tactics.

There are some issues relating to the information warfare doctrine that are worthy of consideration, and I would feel remiss if I didn't bring them up in my assessment.

The doctrine of information warfare seems to be sound, but we have to be careful to keep doctrine from being dogmatically applied. We will have to be more diligent in our decision making processes, and more mindful of details than ever before. The rate at which we will be making vital decisions will increase, and the values associated with those decisions will also increase. Making increasingly critical decisions in decreasing time frames with increasing effectiveness implies that we will have to know more ahead of time and be able to use it more accurately on a moment's notice. This means more and better training and education, more dependence on automation, and more vulnerability to analysis of our techniques, which implies more and better analysis on our part to retain effectiveness, more speed, more accuracy, more criticality, more training, and around and around we go. Pandora's box has been opened (it was the inevitable result of history and technology, and nobody is to be `blamed'), and the information warfare arms race is now underway. Where will it all end?

Here's where I become speculative. I think the implication is that tactical decisions become more automatic and thus automatable, and so we move more and more toward strategic decision making as the determining factor in conflict. That was the easy part. If we are eventually able to develop information systems to perform higher level mental functions adequately, they may take over the strategic, doctrinal, and political levels of conflict resolution and even go beyond that to eliminate the underlying reasons for conflict. To date, I have not seen any substantial evidence that this is anything other than idle speculation and good science fiction, and I think we should keep this in mind during our procurement processes. A small study may be of some value in assessing the proper level at which to use reflexive control, but a big push to automate the higher mental functions of strategic planning is probably not a good idea at this time, and we should not be duped into spending our money in this way when it can be spent very cost effectively in other areas.

Beyond that level there are too many possibilities to go through all of the lines of analysis in this paper, but the end results seem to range from a global Orwellian society to anarchy, with all possibilities in between depending on how this form of warfare progresses. Without going into further detail, let me just mention that as time goes on, we have to remember that technology is not limited in its applications to the benevolent actions of the good guys. It is a two edged sword, and we have to make sure we don't get cut.

How to Win

The purpose of studying information warfare, like any other kind of warfare, is to try to figure out how to win, and to make certain that you don't lose before you have the chance to win. The doctrine seems to be sound but is still a bit elusive in its implications. The systems that will fight in these wars are slowly maturing, but they are nowhere near reaching their potential. The strategy and tactics are an ever changing set of methods developed based on the warriors' view of the state of the conflict, their training and inherent capacities, and the available resources. To win, we must try to change this state of affairs to one where we: understand the implications of the doctrine in more detail and are able to make prudent decisions; understand the potentials and limits of our information systems and exploit them in the proper ways; and have a sound understanding of strategy and tactics and the way they change in light of the information warfare environment.

I am not an expert in military doctrine or strategy and tactics, and while I have taken liberties in my assessment in those areas until now, I am not presumptuous enough to do so when I give professional advice. I will thus restrict my comments here to the area where I have expertise: the information sciences.

Top Level Goals: For information systems, we know the top-level goals: to attain and maintain availability, integrity, accountability, and confidentiality in our information systems and to deny these to the enemy. The problem then is how to attain these goals cost effectively.

It's not a science yet: It turns out that I have done a fair bit of analysis of this particular issue, and I can tell you that, for now at least, this is not a science, but an engineering and management discipline. For each example I see, there are different tradeoffs, and I never know what the best solution is ahead of time. I commonly see reports that make claims about how one technology is clearly better than another because it reduces this cost or increases that parameter; but the reality is often quite different. To (mis?)quote Mary Poppins, ``Sometimes even a small thing can make quite a big difference''.

Cheapest != Best: We can of course make cost equations and assign commensurable values to the advantages of systems, the costs of attacks, and the resulting reductions in values. In practice, this is commonly done, but the real problem with such cost equations is that accounting of this sort has never really been very effective in warfare because the values change so rapidly and unpredictably, and there are many subtle dependencies we may not be aware of. If we use historical data to predict attack frequencies and assess costs, an attacker can rely on us not defending against the least common attacks, and will exploit this knowledge when an attack will hurt us most. It's like preparing for the last war. For now, let it suffice to say that simple cost accounting may keep things cheap, but will not keep us safe.

We have to create the experts: I cannot detail the many analytical results in this field in the limited space allotted, but I can tell you that very few people are aware of a substantial subset of them, and even fewer have been well educated in enough of them to be able to apply them effectively in concert. Applying them in isolation commonly results in systems that may appear to work well but have deep flaws that may cause them to collapse under a certain sort of pressure. If we are to have skilled individuals capable of performing proper analysis of this problem, we will have to create them, and if we do not, we will end up at the mercy of those who do.

Theory alone is not enough: As a scientist, I believe in the scientific method as an effective technique for problem solving. To complement analysis, we must have experimental feedback, or our science will become abstract mathematics, and this loss of grounding will result in systems that fail. There should be no great mystery here. We have to apply historical data to the process, learn what has differentiated winners from losers, and figure out how to apply that knowledge to win. We also have to interpret that knowledge in light of the current situation, which means we have to perform experiments and learn from them as well.

A different history: A comprehensive coverage of the history of information warfare is clearly beyond the scope of this paper, but I will start this process by citing Mahatma Gandhi, who once said ``Take care of the means, and the ends will take care of themselves.''

Now it may seem strange at first to cite Gandhi in discussing how to win wars. He was devoutly opposed to violence. If it will make you feel any better, consider that Gandhi won a war against one of the best prepared military forces of his time using no weapons, and no force, other than the force of information! By this standard, he was one of the great information warriors of all time, and it is somehow entirely appropriate that his understandings should be studied.

Know the enemy: My father once commented, when we discussed my admiration for Gandhi, that if Gandhi were up against Germany, he would have been killed too early to have any effect. His point was that Gandhi could only succeed against the British because of their morality. The point is well taken, and in fact, that is exactly why he was such a good information warrior; he knew his enemy and he found and exploited their weaknesses. He studied English law in England, and he knew British culture and tradition. He understood that their military force could kill a lot of his people, but he also understood that their system of justice would not allow them to go beyond a certain point against a non-violent opposition, and that when they inevitably went beyond that point, it would be politically devastating.

Take care of the means: Now before I go too far down this road, I will return to the second point I really wanted to make by quoting Gandhi (the first point was that we should study historical information warriors and that they may not be the people we normally associate with military conflict). For long term victory in information warfare, I believe we should concentrate on the means rather than the ends. That is to say, we should become expert at the process of fighting information wars rather than simply becoming the largest purchasers of information weapons. We still have to purchase information weaponry, and to a large extent, we do that already, but we don't do it very well, because we don't have the expertise to make the best decisions.

Education is key: We should educate our best people on the nature of the problem and give them the knowledge required to find solutions. As fundamental underpinnings, they should know the basics of physics, mathematics, information theory, the theory of computation, engineering, psychology, and the other fields that ground our science and technology. For the information warrior, this goes far beyond what we normally teach to undergraduates, but not beyond what we can teach them if we design the proper curriculum.

Special attention should be paid to the defensive aspects of information warfare because the offensive aspects are now commonly covered in military education, through the history of warfare, modern military doctrine, strategy, tactics, and practical training.

We should give our people the best basic knowledge about differentiating the possible from the impossible and the likely from the unlikely, and we should help them learn to evaluate everything they encounter in light of that understanding. Only this basic understanding will lead our decision makers to make better decisions, and ultimately that will have the greatest impact on our ability to win.

Build the right infrastructure: Deep thought is not for everyone, and clearly, we are going to have great difficulty if we try to force our military into becoming Ph.D.s in information warfare. A spectrum of skills is needed in order to contemplate, design, implement, test, deploy, and use technologies. The entire spectrum must be incorporated into our education and training programs in order to have an effective fighting force in the information battleground. If this is going to work, we must build an infrastructure that promotes and supports this understanding.

Act now: Time is short. We could easily lose the next war if we depend on the information warfare capabilities we used to win the last one. Our infrastructure as it exists today is fraught with problems that can be easily exploited, and we simply aren't applying enough resources in the right way to change this situation. We must act to mitigate these problems, because if we let this chance slip through our fingers, we may never have it again.

I want to close on a positive note. I believe that we have the proper mix of raw materials necessary to become an information superpower. I also believe that by becoming an information superpower, we will remain a financial superpower, and that the mix of these two is clearly in our best interest. The path to this goal is also being trodden by our global competitors, and as the saying goes: ``close only counts in horseshoes, hand grenades, and nuclear weapons''. I believe that we hold our destiny in the palms of our hands, and that if we choose correctly, we will prosper.