IW-D Table of Contents


INFORMATION WARFARE - DEFENSE


APPENDIX C

A TAXONOMY FOR INFORMATION WARFARE?

Taxonomy:

1. The classification of organisms in an ordered system that indicates natural relationships.

2. The science, laws, or principles of classification; systematics.

3. Division into ordered groups or categories: "Scholars have been laboring to develop a taxonomy of young killers" (Aric Press).

[French taxonomie: Greek taxis, arrangement; see TAXIS + -nomie, method (from Greek nomia; see -NOMY).] American Heritage Dictionary

Summary: The Defense Science Board Task Force on Information Warfare (Defense) derived a taxonomy intended to describe information warfare. Unfortunately, as in most cases where both objects and processes are present, this taxonomy does not scale in a linear manner beyond three levels. This is the result of the number of permutations and combinations by which attacks could be mounted against a particular process over variable periods of time. The derivation of the taxonomy is discussed later in this Appendix.

However, by adopting concepts from Joint Publication sources and inputs from the Threat and Policy Panels of the Task Force on Information Warfare (Defense), the Task Force developed a standard vocabulary for information warfare defense, for use in threat alerting and in the assessment and reporting of defensive preparedness, tied to specific information dependent processes.

Such a tailored warning, assessment and reporting system can and should be developed for use in each civil agency and in various domains of the commercial sector, such as electrical power and financial services. A caution: Whatever schema is used to evaluate the operational readiness of information dependent processes and activities, it must be timely and reflect the current state of the security policy being implemented, the supporting infrastructures (computers, communications, electricity and other supporting utilities) and the training status of the personnel, both systems administrators and users of information and information systems.

A range of standardized scenarios should be promulgated for use by the components of the Department of Defense in conducting preparedness surveys and for use in military planning. A proposed partitioning of increasingly robust assessment scenarios for use in planning and assessments follows:

1) accident (the inclusion of accidental failure is important because in many cases the cause of a failure may never be determined, yet it is still important to know the range of potential effects on the information dependent process),

2) amateur hackers,

3) experienced hackers,

4) well-funded non-state group or actor able to purchase or hire advanced information warfare capabilities,

5) state-sponsored information warfare, and

6) state-sponsored information warfare with the active collusion of an authorized insider (worst case).

A standardized set of methods for assessing information dependent processes should be used so that reporting is consistent across a wide range of information dependent activities. A proposed partitioning of assessment methods follows:

a) an unknown information assurance capability for a specified assessment scenario,

b) an engineering estimate of information assurance, based on a review of design and recovery plans, but no physical testing for a specified assessment scenario,

c) an engineering estimate of information assurance, based on design parameters, simulation exercises, and the review of detection capabilities and recovery plans, but no physical testing for a specified assessment scenario,

d) an internal information assurance audit by an internal but independent organization, based on examination of the written record of security and accidental incidents and of responses from live contingency plan exercises designed to simulate a specified assessment scenario defined above,

e) an internal information assurance audit by an internal but independent organization, based on testing and examination of security and accidental incidents and responses from a live contingency plan exercise designed to simulate a specified assessment scenario defined above, and

f) an information assurance audit by a totally independent security assessment organization, based on testing and examination of security and accidental incidents and responses from a live contingency plan exercise designed to simulate a specified assessment scenario defined above (most stringent test case).

Note that not all organizations would be expected to meet the most stringent assessment scenario. The application of an evaluation level would be determined by the criticality of the information dependent process to the overall activity.

In such an information assurance planning, testing and evaluation construct, the most robust and resilient organization would have demonstrated a 6-f information assurance capability, as illustrated in the sketch that follows.
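The scenario and method vocabulary above lends itself to a compact, machine-readable form. The following sketch, in Python, is illustrative only; the names AssessmentScenario, AssessmentMethod and AssurancePosture are notional and do not correspond to any existing reporting system.

# Illustrative only: a minimal encoding of the proposed scenario/method
# vocabulary. All names are hypothetical.
from dataclasses import dataclass
from enum import Enum


class AssessmentScenario(Enum):
    """Assessment scenarios 1-6, in order of increasing threat."""
    ACCIDENT = 1
    AMATEUR_HACKER = 2
    EXPERIENCED_HACKER = 3
    WELL_FUNDED_NON_STATE_ACTOR = 4
    STATE_SPONSORED = 5
    STATE_SPONSORED_WITH_INSIDER = 6    # worst case


class AssessmentMethod(Enum):
    """Assessment methods a-f, in order of increasing rigor."""
    UNKNOWN = "a"
    ENGINEERING_ESTIMATE_DESIGN_REVIEW = "b"
    ENGINEERING_ESTIMATE_WITH_SIMULATION = "c"
    INTERNAL_AUDIT_OF_WRITTEN_RECORD = "d"
    INTERNAL_AUDIT_WITH_LIVE_EXERCISE = "e"
    INDEPENDENT_AUDIT_WITH_LIVE_EXERCISE = "f"  # most stringent


@dataclass
class AssurancePosture:
    """Assessed information assurance posture of one information dependent process."""
    process: str
    scenario: AssessmentScenario
    method: AssessmentMethod

    def label(self) -> str:
        # e.g. "6-f" for the most robust, most rigorously assessed posture
        return f"{self.scenario.value}-{self.method.value}"


# Example: a hypothetical process assessed against scenario 6 by method f.
posture = AssurancePosture("logistics scheduling",
                           AssessmentScenario.STATE_SPONSORED_WITH_INSIDER,
                           AssessmentMethod.INDEPENDENT_AUDIT_WITH_LIVE_EXERCISE)
print(posture.label())  # prints "6-f"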

Although not a taxonomy of information warfare, this approach provides a standard vocabulary for assessing and reporting operational readiness of organizations to carry out information dependent processes in an information warfare environment. This construct also provides a basis for developing an information warfare readiness reporting process.

Within the Department of Defense, suitable information assurance reporting criteria along the above lines should be added to the Status of Resources and Training System (SORTS) (or a SORTS-like report); Communications Spot Report (COMSPOT) and daily Communications Status Report (COMSTAT); annual CINCs Preparedness Assessment Report (CSPAR); Combat Support Agency Assessment System (CSAAS); and the Base Defense and Operations Security evaluation schemes.

In addition to preparedness assessments, which address specific information dependent processes, a generalized threat warning system is needed to communicate a heightened level of alert to numerous interconnected information dependent activities.

Design of a warning system is complicated by the interconnectivity of the national (and global) information infrastructure. A heightened state of alert must extend to all connected systems; however, because appropriate actions at higher threat levels could include disconnecting from the infrastructure, a warning method is needed that does not depend fully upon the interconnected infrastructure. Conceivably, preparation could include "war modes" that extend across the lower levels of network protocols (from the physical layer through the transport layer). In addition, a workable information warfare alert and response process will require a comprehensive legal, regulatory and operational infrastructure.

Detection of information warfare attacks will likely not come directly from intelligence or the managers of individual systems. "Warlike" attacks may have many diverse targets but probably will not follow the pattern of normal thefts or disruptions caused by amateur intruders, except as cover, concealment or deception.

Reporting of incidents, particularly of attacks on civil information users of national interest, will neither be automatic nor directed to a common point unless a distributed structure, like the Centers for Disease Control, is created now. Creation of a distributed reporting structure that filters upward, with a focus on finding broader and broader patterns through indirect measurement and iterative analysis, is essential because most "problem" detection will take place locally, in a very decentralized fashion, without the visibility needed to detect the linkages between apparently unconnected events.

The Tactical Warning/Attack Assessment (TW/AA) function will require the synthesis of diverse and apparently unrelated information. Specialists in offensive information warfare should be included in the make-up of Department of Defense and national TW/AA centers to ensure suitable tradecraft is applied to the information warfare TW/AA process.

On receipt of an information warfare alert message or threat condition, the individual managers of information dependent processes could initiate appropriate defensive actions, to include disconnecting from the shared infrastructure. Although Alert Conditions could be issued as a result of strategic warning, most would be triggered by an aggregation of tactical warning reports of individual incidents that together show the pattern of an attack rather than isolated events.

A set of proposed information warfare (IW) Alert Conditions and Responses for use by the Federal government, in both civil and national security activities, follows (a sketch after the list illustrates how reported incident statistics might map to these conditions):

IW Alert Condition I

Situation - Normal

Normal level of threat from accident, crime and amateurs

Normal level of unexplained activities in all sectors of the nation

Response Required:

Normal protective actions

IW Alert Condition II

Situation - Perturbation

a) 10% increase in incident reports, either regional or within a functional information dependent activity of national interest

-Sector systems, such as medical systems or financial systems

-Telecommunications service providers

-Public utilities

b) 15% increase in all incidents

Response:

Increase incident monitoring and cooperative analysis

Look for patterns across a wide range of variables

Alert all agencies to increase awareness of activities

Begin selective monitoring of critical information services

IW Alert Condition III

Situation - Heightened Defense Posture

a) 20% increase in incident reports across the board, even with no apparent connection

b) Condition II with special contexts

Response:

Disconnect all unnecessary connections

Turn on real time audit for critical information systems

Begin mandatory reporting to central manager

IW Alert Condition IV

Situation - Serious Situation

a) Major regional or functional events that seriously undermine U.S. interests

b) Conditions II or III with special contexts

Response:

Implement alternate routing

Limit interconnectivity to minimal states

Begin "aggressive" forensics investigations

IW Alert Condition V

Situation - Brink of War

a) Widespread incidents that undermine U.S. ability to function

b) Conditions III or IV with special contexts

Response:

Disconnect critical elements from the public infrastructure

Implement WARM protocols

Declare state of emergency

Prepare for warfare, including retribution against aggressors using the full force of the U.S.
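As a rough illustration of how the quantitative triggers above might be applied, the following Python sketch maps reported incident statistics to a proposed Alert Condition. It is notional only: the function name and inputs are hypothetical, and Conditions IV and V rest on qualitative judgments ("special contexts," events that undermine U.S. interests) that are represented here simply as analyst-supplied flags.

# Illustrative only: quantitative triggers taken from the proposed Alert
# Conditions above; qualitative triggers reduced to analyst-supplied flags.
def iw_alert_condition(sector_increase_pct: float,
                       overall_increase_pct: float,
                       serious_regional_or_functional_event: bool = False,
                       widespread_loss_of_function: bool = False) -> str:
    """Return the highest proposed IW Alert Condition whose triggers are met."""
    if widespread_loss_of_function:
        return "V"    # Brink of War
    if serious_regional_or_functional_event:
        return "IV"   # Serious Situation
    if overall_increase_pct >= 20.0:
        return "III"  # Heightened Defense Posture
    if sector_increase_pct >= 10.0 or overall_increase_pct >= 15.0:
        return "II"   # Perturbation
    return "I"        # Normal


# Example: a 12% rise in incident reports within one sector, with only a 4%
# rise overall, would suggest Alert Condition II.
print(iw_alert_condition(sector_increase_pct=12.0, overall_increase_pct=4.0))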

Consideration of A Taxonomy for Information Warfare

Many of the definitions, concepts and words that follow are drawn from the Joint Publication System, and in particular from the Joint Doctrine for Command and Control Warfare and the Joint Reporting Structure.

The central concept of information warfare is straightforward: The ultimate target of information warfare is an information dependent process, whether human or automated. The use of the word "warfare" should not be construed as limiting information warfare to a military conflict, declared or otherwise.

The root concept of information warfare is offensive in nature. In turn, the concept of information warfare defense flows from the offense. This is not surprising as most defensive actions (counter-air, anti-submarine warfare, counter-mine, anti-crime, anti-drug) only have meaning within the context of action-reaction. Offensive information warfare targets information or information systems in order to affect the information dependent process, whether human or automated. Defensive information warfare protects the information dependent process, whether human or automated.

The question of interest is whether a useful taxonomy of information warfare can be derived.

In Joint Pub 3-13.1, Joint Doctrine for Command and Control Warfare, an "information system" is defined as the organized collection, processing, transmission, and dissemination of information, in accordance with defined procedures, whether automated or manual. This includes the entire infrastructure, organization, and components that collect, process, store, transmit, display, and disseminate information. It includes everything and everyone that performs these functions -- from a laptop computer to local and wide-area voice and data networks, broadcast facilities, buried cable and, most importantly, the people involved in transmitting, receiving, processing, and using the information. People, decisionmakers at all levels, are the most important part of the information system.

However, information systems themselves are part of larger information infrastructures. These infrastructures link individual information systems in a myriad of direct and indirect paths. The growing information infrastructures of today transcend industry, media, and the military and include both government and non-government entities. The collection, processing, and dissemination of information by individuals and organizations comprise an important human dynamic, which is an integral part of the information infrastructure. A news broadcast on CNN, a diplomatic communique, and a military message ordering the execution of an operation all depend on the global information infrastructure. The information infrastructure is commonly divided into three categories: the global information infrastructure (GII), the national information infrastructure (NII), and the defense information infrastructure (DII).

In actuality the GII, NII and DII labels are misleading as there are few distinct boundaries in the information environment. The DII, NII, and GII are inextricably intertwined, a trend that will only intensify with the continuous application of rapidly advancing technology. Again, no ordered structure is readily apparent on which to base a taxonomy.

If information warfare targeting and information warfare defense are shaped by particular information dependent processes, then perhaps ordering information dependent processes will lead to a structure. However, a little reflection leads to the conclusion that information dependent processes are infinite in variety and scope. Clearly, there is no "ordered system" that will tie these potential processes together, other than the shared characteristic of depending on information. Enumerating information dependent processes will not yield a taxonomy.

What of the methods of information warfare? Consider that attacks and defenses may involve:

From the above it follows that at the highest level information dependency can be partitioned into two elements: one, the availability of information needed by the process; and two, the integrity of information used in the process. Some would add a third element, the confidentiality of information, as it is an important factor in many civil and military information dependent processes. In the following derivation all three are addressed. Note that this trial taxonomy is irrespective of the offensive or defensive actions that may be undertaken to achieve or defend against these conditions; it is just a structure for information warfare (an illustrative encoding follows the outline below).

A top-level taxonomy for information warfare

Availability of information or information services

   Loss of information
      Detected on occurrence
      Detected after n* units of time
      Undetected

   Delay in receipt of information
      Detected on occurrence
      Detected after n units of time
      Undetected

   Loss of an information service
      Detected on occurrence
      Detected after n units of time
      Undetected

   Delay in an information service
      Detected on occurrence
      Detected after n units of time
      Undetected

Integrity of information

   Unauthorized change in data
      Detected on occurrence
      Detected after n units of time
      Undetected

   Insertion of false data

      From a correct source
         Detected on occurrence
         Detected after n units of time
         Undetected

      From an incorrect source
         Detected on occurrence
         Detected after n units of time
         Undetected

Confidentiality of information

   Compromise detected on occurrence
   Compromise detected after n units of time
   Compromise undetected

*The unit of time can vary from microseconds to years. The criticality of n is determined by the information dependent process in each particular case.
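For reference, the outline above can be captured in a few lines of Python. This encoding is purely illustrative (the two "insertion of false data" source cases are flattened into separate entries), but enumerating the leaves makes the size of the structure explicit even at three levels.

# Illustrative only: the three-level outline encoded as a nested dictionary.
DETECTION = ["Detected on occurrence",
             "Detected after n units of time",
             "Undetected"]

IW_TAXONOMY = {
    "Availability of information or information services": {
        "Loss of information": DETECTION,
        "Delay in receipt of information": DETECTION,
        "Loss of an information service": DETECTION,
        "Delay in an information service": DETECTION,
    },
    "Integrity of information": {
        "Unauthorized change in data": DETECTION,
        "Insertion of false data from a correct source": DETECTION,
        "Insertion of false data from an incorrect source": DETECTION,
    },
    "Confidentiality of information": {
        "Compromise": DETECTION,
    },
}

# Enumerate the leaf conditions (category, condition, detection state).
leaves = [(top, cond, det)
          for top, conds in IW_TAXONOMY.items()
          for cond, dets in conds.items()
          for det in dets]
print(len(leaves))  # 24 leaf conditions at three levels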

Although only three levels deep, this sample taxonomy rapidly becomes unwieldy. Complexity grows at the next level, as each of these conditions can be the result of accident or caused by deliberate intent. In many cases it may be impossible to determine which led to the condition. At the next level, deliberate intent can be carried out by an exterior actor, by an insider with authorized access to the information or information services used in an information dependent process, or by internal and external actors working in concert. Then there is the factor of time. If the failure was detected only after n units of time had elapsed, the effects that matter cannot be generalized but rather are unique to a specific information dependent process. The introduction of process-dependent timing takes us back to the earlier infinite variety of processes, which has already been rejected as a basis for a taxonomy.

But to press on with this sample taxonomy, we recognize that all of these events can be arrayed in multiple sequences and combinations. There is an essentially unlimited number of combinations and permutations of such attack methods and countering defenses available for application within the intertwined DII/NII/GII environment. Thus, an attempt to add successive layers to the taxonomy sketched out above would explode into incomprehensible complexity. Each element of data; each bit and byte of software; each device, whether in a computer at an end-node or along a communication path; each waveform; and each person with access to any of the components would have to be mapped onto the structure.
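A rough, purely illustrative calculation shows the rate of growth. Layering the cause-and-actor factors discussed above onto the 24 leaf conditions, and then treating an attack as an ordered sequence of distinct cases (a simplification that ignores timing and repetition), already produces numbers in the billions:

# Illustrative arithmetic only (requires Python 3.8+ for math.perm).
from math import perm

leaf_conditions = 24   # from the outline above
causes = 1 + 3         # accident, or deliberate intent by an external actor,
                       # an insider, or both acting in concert
cases = leaf_conditions * causes
print(cases)           # 96 distinct single-event cases

# Ordered sequences of k distinct cases, before timing is even considered:
for k in (2, 3, 5):
    print(k, perm(cases, k))  # 2 -> 9,120   3 -> 857,280   5 -> 7,334,887,680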

It is just this complexity that forms a large part of the challenge facing the defender: he cannot know of, or protect against, all the possible means of attack. To succeed, the attacker needs only to know one weakness that the defender has left unprotected, or to have a weapon that can breach one point in the defense. This is the imperative for risk management, resilient systems, and robust recovery capabilities. Again, although a top-level information warfare taxonomy can be sketched, it does not scale to a useful construct. (See the last page of this Appendix for a footnote on complexity.)

Now the principal reason an information warfare taxonomy is a desired objective is that it adds precision to communication. Although the simple taxonomy sketched above does not meet that goal, a workable alternative is proposed that can be inserted into existing reporting structures. The development of this alternative to a taxonomy has the benefit that it builds on existing models from the Joint Publication System.

Joint Publication 1-03, "Joint Reporting Structure (JRS)," establishes a standard reporting vocabulary for the Department of Defense. Joint Publication 1-03.3 establishes the "Status of Resources and Training System (SORTS)" and provides the general provisions and detailed instructions for collecting and preparing data on units of the U.S. Armed Forces and selected foreign and international organizations. In practice, the utility of SORTS is limited by the timeliness and quality of the data submitted. Whether incorporated in SORTS or implemented as a stand-alone method, an information warfare SORTS-like reporting scheme is needed.

SORTS serves the following functions:

a. Central Registry of All Operational Units in the U.S. Armed Forces. SORTS is the single, automated reporting system within the Department of Defense that provides the National Command Authorities (NCA) and the Chairman of the Joint Chiefs of Staff with authoritative identification, location, assignment, personnel, and equipment data for the registered units and organizations of the U.S. Armed Forces, Defense agencies, and certain foreign and international organizations involved in operations with U.S. Armed Forces. The composite registry of all units is maintained by the Joint Staff. After initial registration, SORTS is designed to receive reports by exception when changes occur.

b. Repository of Resource Status of Selected Units. For selected registered units, SORTS also provides the condition and level of resources and training. This includes the unit commander's assessment of how resources and training levels will affect the unit's ability to undertake its wartime mission. Units report by exception within 24 hours of a change or as directed by the Chairman of the Joint Chiefs of Staff. If no change in unit status occurs within 30 days of report submission, units submit a validation report.

SORTS contains provisions for reporting various readiness items:

(a) Overall C-Level (OVERALL) Set. Data in this set include the overall C-Level for the unit and the codes for primary, secondary, and tertiary degradation reasons. Overall readiness, which shows how well the unit meets prescribed levels of personnel, equipment, and training for the wartime mission for which the unit has been organized or designed, is ranked in descending order from C-1 to C-5:

C-1. The unit possesses the required resources and is trained to undertake the full wartime mission(s) for which it is organized or designed. The resource and training area status will neither limit flexibility in methods for mission accomplishment nor increase vulnerability of unit personnel and equipment. The unit does not require any compensation for deficiencies.

C-2. The unit possesses the required resources and is trained to undertake most of the wartime mission(s) for which it is organized or designed. The resource and training area status may cause isolated decreases in flexibility in methods for mission accomplishment but will not increase vulnerability of the unit under most envisioned operational scenarios. The unit would require little, if any, compensation for deficiencies.

C-3. The unit possesses the required resources and is trained to undertake many, but not all portions of the wartime mission(s) for which it is organized or designed. The resource and training area status will result in significant decreases in flexibility for mission accomplishment and will increase vulnerability of the unit under many, but not all, envisioned operational scenarios. The unit would require significant compensation for deficiencies.

C-4. The unit requires additional resources or training to undertake its wartime mission(s), but it may be directed to undertake portions of its wartime mission(s) with resources on hand.

C-5. The unit is undergoing a Service-directed resource action and is not prepared, at this time, to undertake the wartime mission(s) for which it is organized or designed.

(b) Personnel Level (PERSONEL) Set. Data in this set include the personnel level (P-level) and a code for the primary reason for degradation in the personnel area.

(c) Equipment and Supplies On Hand Level (EQSUPPLY) Set. Data in this set include the equipment and supplies on hand level (S-level) and a code for the primary reason for degradation in the equipment and supplies on hand area.

(d) Equipment Condition Level (EQCONDN) Set. Data in this set include the equipment condition level (R-level) and a code for the primary reason for degradation in the equipment condition area.

(e) Training Level (TRAINING) Set. Data in this set include the training level (T-level) and a code for the primary reason for degradation in the training area.

(f) Forecasted Category Level (FORECAST) Set. Data in this set include the forecasted C-level for the unit and the date the unit expects to attain that C-level.

(g) Category Level Limitation (CATLIMIT) Set. Data in this set include the imposed maximum C-level for the unit, if any, and the primary resource area causing the limitation.

An additional category should be added to SORTS specifying at what level of assessment scenario the unit is prepared to operate and how this preparedness was assessed, using the terminology described earlier; a sketch of such an addition follows.
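As one notional illustration, such an entry could sit alongside the overall C-level in a SORTS-like unit record. The sketch below is hypothetical: the field names are invented, the "IW" fields are not existing SORTS sets, and the 1-6 / a-f values refer to the assessment vocabulary proposed earlier in this Appendix.

# Illustrative only: a SORTS-like record extended with a notional information
# warfare preparedness entry. Field names are hypothetical.
from dataclasses import dataclass


@dataclass
class SortsLikeRecord:
    unit_id: str
    overall_c_level: int        # 1 (best) through 5, as in the OVERALL set
    iw_scenario_level: int      # assessment scenario 1-6 defined earlier
    iw_assessment_method: str   # assessment method "a"-"f" defined earlier

    def iw_assessment_label(self) -> str:
        return f"{self.iw_scenario_level}-{self.iw_assessment_method}"


record = SortsLikeRecord("EXAMPLE-UNIT-001", overall_c_level=2,
                         iw_scenario_level=4, iw_assessment_method="e")
print(record.overall_c_level, record.iw_assessment_label())  # 2 4-e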

Joint Pub 1-03.10, "JRS Communications Status," directs the Defense Information Systems Agency to provide near-real-time status information on serious degradations of the Defense Communications System (DCS) via a Communications Spot Report, and a summary of significant DCS status information via a daily Communications Status Report.

These reports should be expanded to include information systems and information services. Further, these reports should be used by the military departments, services, combat support agencies and the CINCs to report the status of information systems and services.

Joint Pub 1-03.31, "Preparedness Evaluation System," establishes the CINCs Preparedness Assessment Report (CSPAR). These reports provide a biennial appraisal of the preparedness of the unified and specified commands to accomplish Joint Strategic Capability Plan tasks (both supporting and supported) within the constraints of the total apportioned force (Active and Reserve). In the CSPAR, each CINC identifies overall strengths and significant deficiencies affecting the command's ability to carry out assigned missions and execute the plans produced during the most recent planning cycle. In submitting the CSPAR, CINCs are reporting on their ability to accomplish a specific task using available capabilities.

The CINCs should be required to include an assessment of their ability to carry out assigned missions at the appropriate assessment scenario level and indicate the process used to determine preparedness.

Joint Pub 1-03.32.1, "Combat Support Agency Assessment System," sets forth the guidelines and procedures for operating the Combat Support Agency Assessment System (CSAAS), a uniform system for reporting to the Secretary of Defense, the commanders of the unified and specified commands (CINCs), and the Secretaries of the Military Departments concerning the readiness of each combat support agency to perform its mission with respect to a war or threat to national security.

Chairman, Joint Chiefs of Staff (CJCS)-sponsored exercises provide the principal means of on-site evaluation of agency responsiveness in reacting to National Command Authority decisions and CINC warfighting requirements. In the event no such exercises are scheduled during the first two quarters of even-numbered fiscal years, Joint Staff observers conduct independent site visits to each of the combat support agencies. Although the CSPAR is the principal means for the combatant commands to assess agency support, Joint Staff observers may also visit combatant command headquarters to discuss overall support, agency supporting plans, and ongoing efforts to improve shortfalls.

These reports should be modified to include an annual assessment of the preparedness of the combat support agencies, at a specified assessment scenario level, to carry out their missions. The two-year schedule currently followed in assessing the readiness of combat support agencies is not realistic in an age of information warfare. The information dependent processes of these agencies are directly tied to the ability to mobilize, deploy and sustain the forces. Today, that ability is an unknown in the age of information warfare.

Joint Pub 3-10.1, "Joint Tactics, Techniques, and Procedures for Base Defense," categorizes threats to bases in the rear area by the levels of defense required to counter them. Emphasis on specific base defense and security measures may depend on the anticipated threat level. (These threat levels are discussed in detail in Joint Pub 3-10.)

a. Level I threats can be defeated by base or base cluster self-defense measures.

b. Level II threats are beyond base or base cluster self-defense capabilities but can be defeated by response forces, normally military police (MP) units assigned to area commands with supporting fires.

c. Level III threats necessitate the command decision to commit a Theater Contingency Force. Level III threats, in addition to major ground attacks, include major attacks by aircraft and theater missiles armed with conventional weapons or nuclear, biological and chemical (NBC) weapons.

The threat to bases in the rear area should be modified to include information warfare attacks.

Joint Pub 3-10.1 also spells out Threat Conditions and Responses and states that in combating terrorism, bases should use common terrorist threat conditions (THREATCONs), each with its specific security measures and required responses.

Threat assessments are used to determine threat levels, to implement security decisions, and to establish awareness and resident training requirements. Threat levels are determined by an assessment of the situation using the following six terrorist threat factors:

(1) Existence. A terrorist group is present, assessed to be present, or able to gain access to a given country or locale.

(2) Capability. The acquired, assessed, or demonstrated level of capability to conduct terrorist attacks.

(3) Intentions. Recent demonstrated anti-U.S. terrorist activity, or stated or assessed intent to conduct such activity.

(4) History. Demonstrated terrorist activity over time.

(5) Targeting. Current credible information on activity indicative of preparations for specific terrorist operations.

(6) Security Environment. The internal political and security considerations that impact on the capability of terrorist elements to implement their intentions.

The severity of the terrorist threat is indicated by the designated threat level, assigned through analysis of the above threat assessment factors. Threat levels, and their associated factors, are (a sketch after the list expresses these combinations as a simple decision rule):

(1) Critical. Factors of existence, capability, and targeting must be present. History and intentions may or may not be present.

(2) High. Factors of existence, capability, history and intentions must be present.

(3) Medium. Factors of existence, capability, and history must be present. Intentions may or may not be present.

(4) Low. Existence and capability must be present. History may or may not be present.

(5) Negligible. Existence and/or capability may or may not be present.
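The factor combinations above amount to a simple decision rule, sketched below in Python for illustration. The function name is invented, the security environment factor informs the broader assessment but does not appear in the level definitions, and actual THREATCON decisions weigh additional considerations described in the paragraphs that follow.

# Illustrative only: the threat-level factor combinations as a decision rule.
def terrorist_threat_level(existence: bool, capability: bool,
                           intentions: bool, history: bool,
                           targeting: bool) -> str:
    if existence and capability and targeting:
        return "Critical"   # history and intentions may or may not be present
    if existence and capability and history and intentions:
        return "High"
    if existence and capability and history:
        return "Medium"     # intentions may or may not be present
    if existence and capability:
        return "Low"        # history may or may not be present
    return "Negligible"     # existence and/or capability may or may not be present


# Example: a group is present and capable, with a demonstrated history but no
# current targeting information or stated intent -> Medium.
print(terrorist_threat_level(existence=True, capability=True,
                             intentions=False, history=True, targeting=False))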

The terrorist threat level is one of several factors used in the determination of terrorist THREATCON. Factors that enter into the decision to assign a particular THREATCON and its associated measures include threat, target vulnerability, criticality of assets, security resource availability, impact on operations and morale, damage control, recovery procedures, international regulations, and planned U.S. Government actions that could trigger a terrorist response.

The terrorist THREATCON system provides a common framework to facilitate inter-Service coordination, support U.S. military anti-terrorist activities, and enhance overall DoD implementation of U.S. Government anti-terrorist policy. THREATCONs are described below:

(1) THREATCON NORMAL. Applies when a general threat of possible terrorist activity exists, but the threat warrants a routine security posture.

(2) THREATCON ALPHA. Applies when there is a general threat of terrorist activity against personnel and installations, the exact nature and extent of which are unpredictable and circumstances do not justify full implementation of THREATCON BRAVO measures. However, base defense forces may have to implement selected measures from higher THREATCONs based on intelligence received. Base defense forces must be able to maintain the measures in this THREATCON indefinitely.

(3) THREATCON BRAVO. Applies when an increased and more predictable threat of terrorist activity exists. Base defense forces must be able to maintain the measures of this THREATCON for weeks without causing undue hardship, without affecting operational capability, and without aggravating relations with local authorities.

(4) THREATCON CHARLIE. Applies when an incident occurs or when intelligence indicates an imminent terrorist action against U.S. bases and personnel. Implementation of measures in the THREATCON for more than a short period probably will create hardship and affect peacetime activities of the unit and its personnel. Sustaining this posture for an extended period probably will require augmentation.

(5) THREATCON DELTA. Applied in the immediate area where a terrorist attack has occurred or when intelligence has been received that terrorist action against a specific location is likely. Normally, this THREATCON is declared as a localized warning.

The description of threat levels, threat assessments, severity of threat, and threat condition found in Joint Pub 3-10.1 is a good model for information warfare defense preparation, assessment, and warning.

Finally, Joint Pub 3-54, "Joint Doctrine for Operations Security," Change 1, Appendix E, outlines procedures for Operations Security (OPSEC) surveys. In general, these surveys:

a. Thoroughly examine an operation or activity to determine if adequate protection from adversary intelligence exploitation exists.

b. Check on how effectively the OPSEC measures of the operation or activity being surveyed protect its critical information.

c. Cannot be conducted until after an operation or activity has at least identified its critical information, for without a basis of identified critical information there can be no specific determination that actual OPSEC vulnerabilities exist. (This is also true in information warfare.)

Each OPSEC survey is unique. Surveys differ in the nature of the information requiring protection, the adversary collection capability, and the environment of the activity to be surveyed.

a. In combat, a survey's emphasis must be on identifying operational indicators that signal friendly intentions, capabilities, and/or limitations and that will permit the adversary to counter friendly operations or reduce their effectiveness.

b. In peacetime, surveys generally seek to correct weaknesses that disclose information useful to potential adversaries in the event of future conflict. Many activities, such as operational unit tests, practice alerts, and major exercises, are of great interest to a potential adversary because they provide insight into friendly readiness, plans, crisis procedures, and C2 capabilities that enhance that adversary's long-range planning.

OPSEC Surveys are not Security Inspections:

a. OPSEC surveys are different from security evaluations or inspections. A survey attempts to produce an adversary's view of the operation or activity being surveyed. A security inspection seeks to determine if an organization is in compliance with the appropriate security directives and regulations.

b. Surveys are always planned and conducted by the organization responsible for the operation or activity that is to be surveyed. Inspections may be conducted without warning by outside organizations.

c. OPSEC surveys are not a check on the effectiveness of an organization's security programs or its adherence to security directives. In fact, survey teams will be seeking to determine if any security measures are creating OPSEC indicators.

d. Surveys are not punitive inspections, and no grades or evaluations are awarded as a result of them. Surveys are not designed to inspect individuals but are employed to evaluate operations and systems used to accomplish missions.

e. To obtain accurate information, a survey team must depend on positive cooperation and assistance from the organizations participating in the operation or activity being surveyed. If team members must question individuals, observe activities, and otherwise gather data during the course of the survey, they will inevitably appear as inspectors, unless this non-punitive objective is made clear.

f. Although reports are not provided to the surveyed unit's higher headquarters, OPSEC survey teams may forward lessons learned to senior officials on a non-attribution basis. The senior officials responsible for the operation or activity then decide whether to further disseminate the survey's lessons learned.

There are two basic kinds of OPSEC surveys: command and formal.

a. A command survey is performed using only command personnel and focuses on events within the particular command.

b. A formal survey requires a survey team composed of members from inside and outside the command and will normally cross command lines (after prior coordination) to survey supporting and related operations and activities.

c. Both types of surveys follow the same basic sequence and procedures.

Although Joint Pub 3-54 is scheduled to be rewritten, it is quoted extensively as another possible model for conducting information warfare assessments. The assessment methodology cited at the beginning of this Appendix should yield more rigorous conclusions.

By adopting concepts from each of the Joint Pub sources cited above, a standard vocabulary of status reporting, tied to specific information dependent processes, can be developed for information warfare. Such an assessment and reporting system should be developed so that it stands on its own for use in civil agencies and the commercial sector. Within the Department of Defense this may be more easily achieved by making suitable modifications to the several portions of the Joint Reporting Structure.

In the case of information warfare, as in the terrorism example above, a range of standardized threat scenarios should be promulgated for use in conducting preparedness surveys and as standardized assessment conditions for planning purposes, along with a set of standardized threat warnings, or THREATCONs, for use if warning is available.

Whatever schema is used to evaluate the operational readiness of information dependent processes and activities, it must be timely and reflect the current state of the security policy being implemented, the supporting infrastructures (computers, communications, electricity and other supporting utilities) and the training status of the personnel, both systems administrators and users of information and information systems.


Complexity Footnote:

A military example of how the complexity builds is found in command and control warfare (C2W). The U.S. military defines C2W as an application of information warfare in military operations.

The execution of C2W involves the integrated use of some or all of the tools of psychological operations (PSYOP), military deception, operations security (OPSEC), electronic warfare (EW), and physical destruction, mutually supported by intelligence, to deny information to, influence, degrade, or destroy adversary C2 capabilities while protecting friendly C2 capabilities against such actions. Again, these are just means to carry out information warfare in a particular military environment.

Defensive tools called out in Joint Pub 6-0, Doctrine for C4 Systems Support to Joint Operations, include:

(1) Physical security of facilities,

(2) Personnel security of individuals authorized access to systems,

(3) Operations security (OPSEC) procedures and techniques protecting operational employment of C4 system components,

(4) Deception, deceiving the adversary about specific system configuration, operational employment, and degree of component importance to mission accomplishment,

(5) Low probability of intercept (LPI) and low probability of detection (LPD) capabilities and techniques designed to defeat adversary attempts to detect and exploit transmission media,

(6) Emissions control procedures designed to support OPSEC and LPI/LPD objectives,

(7) Transmission security capabilities designed to support OPSEC and LPI/ LPD objectives,

(8) Communications security (COMSEC) capabilities to protect information transiting terminal devices and transmission media from adversary exploitation,

(9) Computer security capabilities to protect information at rest, being processed, and transiting terminal devices, switches, networks, and control systems from intrusion, damage, and exploitation,

(10) System design and configuration control (e.g., protected distribution systems, protection from compromising emanation (TEMPEST)) to mitigate the impact of information technology vulnerabilities, and

(11) Technological and procedural vulnerability analysis and assessment programs.

To this list can be added non-repudiation, identification and authorization, end-user use of encryption services, transmission encryption, replication, and a host of other techniques to protect various elements of the information infrastructure. As in the case of C2W, these are tools; in themselves, they are not information warfare.