DRAFT

 

 

 

A DECISION-CENTERED APPROACH TO STORYBOARDING ANTI-AIR WARFARE INTERFACES

 

TASK 3

TECHNICAL REPORT

Prepared by:

Thomas E. Miller

Steve P. Wolf

Marvin L. Thordsen

Gary Klein

 

Klein Associates Inc.

582 E. Dayton-Yellow Springs Road

Fairborn, OH 45324-3987

 

Prepared for:

Naval Command, Control and Ocean Surveillance Center

Research, Development, Test, and Evaluation Division

271 Catalina Boulevard

San Diego, CA 92152-5000

 

Date Submitted: July 8, 1992

TASK 3

Keywords:

decision making

naturalistic decision making

human-computer interface

decision support systems

cognitive task analysis

cognitive systems engineering

anti-air warfare

situation assessment

AEGIS combat system

decision aiding

 

Abstract

The research described in this paper was performed as part of the Tactical Decision Making Under Stress (TADMUS) program. The aim of this program is to improve the decision-making performance of Anti-Air Warfare (AAW) crew members in a Combat Information Center (CIC) under conditions of stress. Three tasks were performed as part of this research program. In the first task we identified the key decisions made by the AAW crew members, and the range of information needed to make these decisions. In the second task we surveyed the different decision strategies that have been identified and studied, and defined the boundary conditions for each. This allowed us to anticipate which decision strategies would be used for a given type of task. In the third major task we developed AAW storyboards, using our understanding of the goal structures and decision strategies used by operators in this environment. The three tasks are interrelated in that the critical decisions identified in Task 1 are categorized in accord with the decision strategies found in Task 2. These decision strategies served as the guide to developing interface concepts and storyboards of a decision support system (DSS) in Task 3. This report describes the results of Task 3 and includes storyboards of a DSS for a CIC.

 

 


 

 

Table of Contents

Program Overview

A Naturalistic Decision-Making (NDM) Approach

The Harassing F-4s: An Example Incident

Methodology

Identify Tasks and Goal States

Examine Common Goal States Across Incidents

Generate Interface Support Techniques

Build Storyboards of Interface Concepts

Storyboards

List of Figures

Figure 1. The Harassing F-4s: Decision Flow Diagram

Figure 2. Data Transformation and Analysis

List of Tables

Table 1. Harassing F-4s: Representational Sheets.

Table 2. Incident 1: The Harassing F-4s.

Table 3. Primary Goal States Identified in the Critical Incidents.

Table 4. Frequency of Goal States.

Table 5. Cue Inventory for the Goal "Determine Intent".

Table 6. Incident Aids.

Table 7. List of Proposed Enhancements.

 


 

Introduction

This report presents the findings of our studies investigating how experienced personnel make decisions about anti-air warfare in the AEGIS Combat Information Center (CIC). The work reported here comprised the third of three tasks that we accomplished under a program sponsored by the Office of Naval Technology, called Tactical Decision Making Under Stress (TADMUS).

TADMUS was designed to learn how Naval officers handle very difficult decisions under conditions such as time pressure and uncertainty. Military incidents, such as the ones involving the USS Stark and the USS Vincennes, have focused attention on the need for effective decisionmaking under stressful conditions. The catastrophic costs of these incidents dictate that improved training and decision support be provided to help the tactical decisionmaker in these highly charged and confusing situations.

Prior to TADMUS, the research emphasis had been on high intensity combat conditions. TADMUS was directed at low intensity conflict (LIC), including high degrees of ambiguity about the nature of a threat and the intent of a track. The use of AEGIS cruisers in the Persian Gulf during the Iran-Iraq War was an example of this. AEGIS cruisers were designed for blue-water operations, yet they were needed in the Gulf, within very narrow confines, and they lacked some important features for self-defense.

TADMUS is a program aimed at understanding how officers make decisions in a LIC environment, in order to help either with better training of teams and individuals, or with the design of better human-computer interfaces (HCIs) or decision support systems (DSSs). Klein Associates began work in support of TADMUS in September, 1990. In addition, two Navy laboratories are participating in the TADMUS program. The Naval Training Systems Center (NTSC, Orlando, FL) is primarily concerned with the development of training and simulation principles to counteract stress. The Naval Command, Control and Ocean Surveillance Center (NCCOSC, San Diego, CA) is primarily concerned with the development of decision support principles and display principles for decision support systems. These sponsoring agencies established the scope of the TADMUS program. They directed the contractors to study decisionmaking within the anti-air warfare (AAW) area of the AEGIS CIC as it conducts operations in a low intensity conflict.

Our intent was to find ways of designing HCIs and DSSs to improve decisionmaking, building on our past work in Naturalistic Decision Making (NDM) (e.g., Klein, 1989). Previous research on classical, generally analytical, decision strategies has not yielded useful insights for developing better systems for this environment. The question driving this effort was whether a naturalistic decision perspective would do any better.

Our work consisted of three tasks. In Task 1, we conducted interviews with AEGIS commanders and Anti-Air Warfare officers, to study the way they make decisions. The results are described in a separate report (Task 1 Technical Report: Kaempf, Wolf, Thordsen, & Klein, 1992). In Task 2, we surveyed the field of classical and naturalistic decision strategies, to see if there are useful ideas to be incorporated into TADMUS. The results of this task are described in a separate report (Task 2 Technical Report: Zsambok, Beach, & Klein, 1992). The third task was to draw on both of these efforts to generate a decision-centered approach to designing interfaces and system supports. Task 3, and the storyboards we developed, are described in the present report. Task 4 is an overview report of the work conducted in the first three tasks (Klein, 1992).

This report is organized into six sections. Following this overview, we describe a naturalistic approach to decisionmaking in section two. Section three gives an example of the data we collected using the Critical Decision Method (CDM). The example will be referenced throughout the report. The fourth section describes our methodology for generating interface concepts and storyboards of the concepts. Section five is an appraisal of our methods, and the sixth section contains an extensive set of storyboards that demonstrate how our display concepts could be implemented in an AEGIS CIC.

A Naturalistic Decision-Making (NDM) Approach

We assume in our NDM approach that by understanding the thinking processes of key experienced decisionmakers, we will be able to assist designers with the configuration of HCIs and DSSs. In order to study decisionmaking in the CIC, we used our Critical Decision Method (CDM), whereby non-routine incidents are recounted by experienced personnel. We have found in previous work (Klein, 1989) that expertise emerges most clearly during non-routine tasks. Further, by studying non-routine tasks, we develop an understanding of the breadth of CIC tasks and the different contexts in which CIC personnel must make decisions.

The analysis of 14 critical incidents in Task 1 (Kaempf, Wolf, Thordsen, & Klein, 1992) concluded that the recognition of situational dynamics is one of the key drivers of the selection of a course of action (COA). The Recognition-Primed Decision (RPD) model provides a good account of this decisionmaking; the model emphasizes the recognition of familiar situations. The RPD model describes how decisionmakers can adopt reasonable COAs without having to compare alternatives, and this is what we observed in the CIC.

Classical decision strategies were not observed in the CIC environment, as reported in the Task 1 report. The naturalistic constraints of the CIC, such as time pressure and uncertainty, obviate several classical strategies that require time-consuming analysis.

Given the dynamics and time pressure of the CIC, crew members seek a satisfactory COA, not the optimal COA. In the 14 incidents we examined, non-comparative strategies and mental simulation appeared to be effective in enabling crew members to use their experience to adopt a reasonable COA. By using non-comparative approaches, decisionmakers are not faced with the task of contrasting multiple options; they only need to consider one option at a time until a workable solution is found.

The importance of situation assessment (SA) has significant implications for how the DSS interface should be built. The RPD model suggests that a COA will be evident once the situation is understood. Therefore, the design of CIC interfaces should focus primarily on supporting SA rather than on COAs.

Kaempf, Wolf, Thordsen, and Klein (1992) reported that feature matching and story generation strategies accounted for approximately 98% of the diagnostic decisions coded. Of the two, feature matching was the dominant process for assessing situations. It is therefore imperative that this strategy be supported in the CIC DSS. A feature-matching strategy can be supported through an interface that makes situational data and the relations among data readily available to the CIC crew in a format that is understandable to them.

The finding that CIC commanders use story building as a means to assess the situation also has implications for system design. The interfaces should enable the decisionmaker to build a story. For example, Task 1 reported that a storybuilding strategy was used approximately 11% of the time. Further, instances where story building was observed included some of the most difficult situations to assess. The five toughest incidents (defined in terms of ambiguity and personal threat) accounted for seven of the eleven cases of story building (see Kaempf, Wolf, Thordsen, and Klein, 1992). In order for an interface to support story building, the CIC crew should have access to cumulative ("historical") information, including important events, sequences of events, and changes in events. These historical displays would enable a commander to review the development of an incident. The display may need to show critical events, such as when attack radars were used, rather than just heading and speed. The suggestion to present historical information is consistent with our knowledge of naturalistic decision strategies and the importance of situation assessment.

The Harassing F-4s: An Example Incident

In order to give the reader a better sense of the data we collected and examined, the following is a description of one of the incidents. This incident will be referred to throughout this report. Figure 1 contains a decision flow diagram, a schematic which summarizes the entire incident.

An AEGIS cruiser was stationed in the southern Persian Gulf. Its mission was to escort a flagship through the Straits of Hormuz to pay a port call at Muscat. Because the flagship is fairly defenseless (it is quite slow and cannot "run" from a potential threat), it requires an escort. The difficulty of defending the ship ordinarily merits moving at night, but this was an unusual daylight transit. The cruiser was at General Quarters (GQ), a condition wherein all battle stations are manned; this is typical for traveling through the Straits. There had been a recent increase in jet fighter activity from Bandar Abbas airport in Iran. The activity was usually in the form of circular patrols (usually two sections of 2 aircraft, or a section every 30 minutes), apparently on training flights. These maritime patrols would either fly south following the coast toward the Arabian Sea, or turn and go up north and east along the Iranian coast, make a loop, and come back.

(1) Halfway through the escort mission, the cruiser received word of an imminent launch of two Iranian fighters. A brief radar emission was detected shortly after takeoff, which enabled the cruiser's Electronic Warfare officer to identify the aircraft as F-4s. It was typical for the Iranians to start with their radar on and then shut it off. The U.S. forces had been briefed that the Iranians were having trouble maintaining the sensitive radar systems, so it was safest for them not to use these systems too often.

(2) Immediately after take-off the aircraft began orbiting the airport, at the end of the runway. This was atypical because the training and patrol missions normally went north or south. At this time the F-4s were just outside their typical weapons release range.

(3) Next, the cruiser picked up radar emissions showing that the lead aircraft was in search mode. This indicated that the fighter was attempting to get an accurate fix on the ships.

(4) As the lead aircraft swung around with its nose pointed at the cruiser, it went to its fire control radar (this is done to allow a weapon system to acquire a firing solution). This, again, was typical. Aside from the initial circling of the airport, this patrol seemed to be following the expected pattern.

(5) One or two orbits later, both aircraft were in acquisition mode. Previously, the aircraft would turn their radars off during the back part of these orbits, yet this time they left them on. This is an important detail. As mentioned earlier, the Iranians were having trouble with their radar, so it was unusual that they would use them for such a long period. In one of the aircraft, after the orbit carried it away and it broke lock, the pilot switched his radar back into search mode. This can be interpreted as a deliberate move by the pilot.

The cruiser sent a standard Military Air Distress warning to the aircraft. It also notified its superiors on the Coronado. The cruiser's Commanding Officer (CO) requested that equipment (the AN/SLQ-32 device) be used to break lock, in order to inform the F-4s that they were being monitored. The cruiser did not illuminate the aircraft. Instead, it only prepared to illuminate (illumination is a more hostile reaction and the CO did not wish to exacerbate the situation).

Consequently, the cruiser's EW officer hit the aircraft with the active mode of the AN/SLQ-32 to break the track. This breaklock device can damage the aircraft's avionics equipment because of its high output of energy.

(6) The CO also had noted that the lead aircraft had been flying in a wider circle, bringing it closer to the cruiser on the near part of its orbit. As the planes continued their orbit, the distance from the cruiser to the fighter aircraft was reduced to well within weapons release range. At that range, the F-4s could have fired a stand-off weapon, or dropped down to the deck level of the ship to bore in.

The use of fire control radar by the F-4s is a hostile act, and according to the rules of engagement (ROE), the cruiser would have been within its rights to engage the F-4s. The CO informed the Battle Force Commander of the situation. At this point the Anti-Air Warfare Coordinator (AAWC) asked the CO if he should set up for Detect-to-Engage. The Detect-to-Engage mode makes it easier to engage, reduces the number of VAB actions required, and nothing about the change is detectable outside the ship. The CO agreed with the recommendation.

The CO determined that if the F-4s broke orbit and came towards the cruiser, he would engage them. To this end, he prepared the semi-automatic mode, a preparatory step that would shorten response time. The CO told AAWC and MSS, "Be prepared to engage if they close."

(7) The F-4s made another circle; however, this time they did not use their fire control radar. The CO sensed that this was a sign of de-escalation and that the tracks' intent was only to harass him. Since the tracks had not used fire control radar, there was no need to break lock with the AN/SLQ-32.

(8) After another orbit without using their fire control radar, the tracks abandoned their course, and flew away. Estimated time for the incident: 5 minutes.

Methodology

After reviewing critical incidents like the one just described, we developed a decision-centered approach to interface design that supports the decision making we found in real-world events. The decision-centered approach helps specify decision requirements for interfaces. These requirements are then used to generate interface concepts to support the decisions made, in this case within an AEGIS CIC. The interface concepts are represented as a set of storyboards that graphically show how the concepts could be implemented.

Our method for cognitive systems engineering using a decision-centered approach is based on the analysis of 14 critical incidents collected by using the CDM described in Klein, Calderwood and MacGregor (1989). In order to generate storyboards based on the data collected from these incidents, we went through the following 12 processes of analysis:

1. Acquire domain knowledge: Researchers learned about the AEGIS system and anti-air warfare.

2. Knowledge elicitation: Experts were interviewed to obtain information that served as the database for the analysis.

3. Extract incidents: The interview transcripts and notes were reviewed and information about the target incident was extracted.

4. Identify actions: Actions taken by the AAW team during the incident were identified.

5. Identify situation assessments: Shifts and elaborations of the team's situation assessments were identified.

6. Identify cues, factors, and processes: The cues and factors that contributed to the team's situation assessments were identified. In addition, we wrote brief descriptions of how the team constructed each situation assessment and how it arrived at each course of action (COA) taken.

7. Identify critical problems: The major problems addressed by the team were identified and the situation assessments (SA) and activities were grouped under the relevant problems.

8. Code SAs and COAs: The process or strategy used to determine each SA and to select each COA were identified.

9. Identify tasks and goal states: The tasks and goals of the team were characterized as high-level task descriptions and incident-independent goal states.

10. Examine goal states across incidents: Goal states were examined across incidents in order to determine which cues the team used in order to achieve that goal.

11. Generate interface support techniques: Specific display concepts were developed to provide the team with critical cues needed to achieve their goal.

12. Build storyboards of interface concepts: Storyboards were developed to demonstrate how the interface concepts could be implemented in an AEGIS CIC environment.

Processes 1-8 of the Cognitive Task Analysis are covered in detail by Kaempf, Wolf, Thordsen & Klein (1992) in the Task 1 report. The Task 1 report includes all 14 incidents, from which the Harassing F-4s incident was taken, and an analysis of each. The purpose of Task 1 was to examine critical incidents gathered from operational settings, and determine the decision processes or strategies used in CIC naturalistic environments. Since the efforts of Task 3 are so closely tied to the work done in Task 1, the coding process used in Task 1 will first be summarized below (analysis step 8).

The focus of this Task 3 report is on building from the results from Task 1 to generate interface design concepts for a DSS, and demonstrate these concepts by building storyboards. The work done in Task 3 starts with identifying tasks and goal states (analysis step number 9) and discusses the remaining methodology steps, leading to the development of storyboards (analysis step 12). The following sections discuss the analysis steps eight through twelve:

Code SAs and COAs,

Identify tasks and goal states,

Examine goal states across incidents,

Generate interface support techniques,

Build storyboards of interface concepts.

Code SAs and COAs

The analysis of operator tasks and goal states is based on the analyses done in Task 1, such as decision flow diagrams (e.g., Figure 1) and formatted representational sheets (Table 1). (See Kaempf, Wolf, Thordsen & Klein (1992) for a detailed discussion of the formatted representational sheets and how they were developed.) The formatted representational sheet shown in Table 1 is a data summary sheet that captures the Harassing F-4s incident. The sequence of events in the incident is presented top to bottom in Table 1.

The seven columns in Table 1 present various aspects of the incident. The Cues column contains informational cues that the operators used to develop their awareness of the situation. Cues contained information collected from sources such as the primary system displays, inferences made from other cues, and communications with other ships or aircraft. Column two in Table 1 contains Factors that affected the operators' decision making. These Factors varied from incident to incident and included, for example, knowledge that operators had about Soviet tactics, AEGIS system capabilities, and recent events in the area. Column three, Process, refers to the mechanisms used to generate the assessment of the situation (e.g., combining the cues and factors). This Process information is aligned with its corresponding level of Situation Assessment, which is displayed in the fourth column. For example, in the Harassing F-4s incident in Table 1, the operator's Situation Assessment (column 4) is that two aircraft have launched from Iran and they are tracks of interest because of recent hostilities with Iran. The Process used to arrive at this assessment (column 3) was to use the cue information from his display.

Columns five through seven describe courses of action that were initiated by the operator. Column five, another Process column, describes the operator's rationale for taking the course of action. The sixth column, Activity, is a specific description of what the COA was. The seventh column, Function, describes the purpose of the COA. Refer again to Table 1. Near the bottom of the page under the Activity column is the COA -- Informs the admiral and flagship of the situation. To the immediate left of this phrase, in the Process column, is the rationale for "informing the admiral of the situation" -- it is Standard Operating Procedure (SOP). To the right of this phrase, in the Function column, is the abbreviation R.M. (for resource management). Therefore, the function of this COA was not to end the incident (a final COA), or to prepare for a more complex action (a preparatory or "leg up" COA), but to keep other resources apprised of the situation.
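The seven-column structure of the representational sheets lends itself to a simple record type. The sketch below is purely illustrative (the field names are ours, not the report's); it shows how a single row, such as the "Informs the admiral" COA just described, could be captured:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SheetRow:
    """One row of a formatted representational sheet (cf. Table 1).

    Columns 1-4 describe situation assessment; columns 5-7 describe
    a course of action. Any cell may be empty on a given row.
    """
    cues: Optional[str] = None                  # column 1: informational cues
    factors: Optional[str] = None               # column 2: background knowledge
    sa_process: Optional[str] = None            # column 3: how the SA was formed
    situation_assessment: Optional[str] = None  # column 4: the SA itself
    coa_process: Optional[str] = None           # column 5: rationale for the COA
    activity: Optional[str] = None              # column 6: the COA itself
    function: Optional[str] = None              # column 7: purpose (e.g., "R.M.")

# The COA row discussed in the text: SOP rationale, resource-management function.
row = SheetRow(
    coa_process="Standard Operating Procedure",
    activity="Informs the admiral and flagship of the situation",
    function="R.M.",  # resource management
)
```

A full sheet would then simply be an ordered list of such rows, preserving the top-to-bottom chronology of the incident.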

The incident description form in Table 1 captures the incident in chronological order, even though operators attempted to resolve multiple problems concurrently. In order to code the decision strategies operators used, Kaempf et al. (1992) arranged the information in the incident description forms based on its function. The analysts reviewed the incidents to identify and describe the major problems that the AAW teams attempted to resolve in each incident. The SAs and corresponding activities were then grouped to produce a list of major problems that the team tried to resolve during the incidents, and the steps that they took. The resulting problem description for the Harassing F-4s incident is shown in Table 2. The problem is described in column one of Table 2, followed by a description of the SA or COA. The last three columns show how three analysts coded the decision making. This is a brief summary of the analysis done in Task 1. See Kaempf, Wolf, Thordsen & Klein (1992) for a detailed discussion of the methodology used to analyze the critical incidents.

Identify Tasks and Goal States

The work done for this Task 3 report builds on the incident description forms (e.g., Table 1) and the problem description forms (e.g., Table 2). Analysis step number 9 is to identify operator tasks and goal states. We examined the incident descriptions and the problem descriptions for each of the 14 incidents in order to define high-level task descriptions and incident-independent descriptions of the goals the CIC crew members were trying to achieve. We required incident independent descriptions of the CIC tasks and the goal states of the operators so that the descriptions could be used as coding categories across incidents.

For example, in the Harassing F-4s incident, the CIC crew detected new tracks (see Table 1 and 2) and had to determine whether the tracks were hostile. The general goal the crew was trying to achieve was labeled "determine intent," and can be found in several of the 14 incidents (to be summarized below). We defined 15 incident-independent goal states and tasks, which are listed in Table 3. Items A through G are operator goal states and items H through O are tasks.

 

Table 3.

Primary Goal States Identified in the Critical Incidents

Goals

CODE

A. Determine intent: CIC crew attempts to determine the intentions of a track, such as whether or not the track is hostile.

B. Recognition of a problem: crew tries to determine if they are faced with a potentially threatening situation.

C. Monitor on-going situation: the CIC crew monitors a situation to detect any changes in the situation.

D. Identify track: crew attempts to determine the identity (e.g. country of origin) of a track.

E. Allocate resources: the CIC crew attempts to allocate limited resources to deal with the current situation.

F. Trouble-shoot: crew tries to trouble shoot a system.

G. Determine location: CIC crew attempts to determine the location of a reported track.

Tasks

H. Take actions to avoid escalation: crew takes deliberate steps to avoid the escalation of an incident.

I. Take actions toward engaging track(s): crew takes preparatory steps needed to engage a track.

J. Prepare self-defense: crew takes steps toward self-defense, such as bringing up the CIWS.

K. Conduct all-out engagement: crew actively engages a track with a weapon system.

L. Monitor tracks of interest: crew monitors a track which has some significance to the current situation.

M. Reset resources: the crew returns ship resources to pre-incident status.

N. Collect intelligence: CIC crew actively tries to collect information on a track.

O. Other: goals not coded in the above list.

 

The focus on NDM helped identify the goals, tasks and the key decisions in each of the 14 incidents. The goals and the decisions made to achieve the goals constitute the primary decision requirements that designers of a CIC need to take into account in order to support decisionmaking in a CIC environment.

Once we identified the 15 goal states across incidents in Table 3, the incidents were placed on a timeline to show the sequence of goal states and their duration. The sequences of goal states for each of the 14 incidents are listed in Appendix A. For example, in the Harassing F-4s incident, the CIC crew entered the goal states and performed the tasks that follow:

CODE

A. Determine intent: The CIC crew had to determine whether the F-4 tracks had hostile intentions or not.

B. Recognition of a problem: The CIC crew recognized that they were faced with an atypical situation and would have to take actions to address the problem.

H. Actions to avoid escalation: The crew took actions to avert engaging the F-4s, such as using the Military Air Distress (MAD) warning and breaking the lock of the fire control radar.

I & A. Actions toward engaging and Determine intent: The CIC crew took specific steps to prepare to engage the F-4s, such as bringing up the CIWS. At the same time, they continued to assess the intentions of the F-4s.

C. Monitor on-going situation: The crew continued to monitor the tracks even after they determined that their intentions were probably to harass the ships, not actually to fire on them.

Notice that operators can have multiple goals; at one point the CIC crew had the goal of taking actions to engage the F-4s and at the same time were trying to determine their intentions. This is represented in column (6) of the decision flow diagram (Figure 1, "Aircraft have been steadily widening their orbits."). Following column (6) down, we can see that the CO has decided that the F-4s are drawing too much attention to themselves. Since becoming so obviously visible to the AEGIS cruiser seemed unlikely if an attack were imminent, this behavior was interpreted as harassment rather than an intention to engage. However, if we follow column (6) to the right, we can see that the CO prepared for self defense in case his assessment of the situation was incorrect.

Table 4 is a summary of the frequency of the tasks and goals that were observed across the 14 incidents. The first and second columns list the goals from Table 3. The third and fourth columns list the frequency with which the goals were observed and the number of different incidents in which they were observed, respectively.

Table 4.

Frequency of Goal States

Code  Goal                          Freq.  Different Incidents

 A    Determine intent                7          5
 B    Recognition of problem         10         10
 C    Monitor on-going situation      9
 D    Identify track(s)              16         10
 E    Allocate resources              5          3
 F    Trouble-shoot                   3          2
 G    Determine location              2          1
 H    Avoid escalation                5          4
 I    Toward engagement              10          6
 J    Prepare self-defense            7          5
 K    Engage track                   11          4
 L    Monitor tracks of interest      2          2
 M    Reset resources                 5          5
 N    Collect intelligence            2          1
 O    Other                           5          3

 
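The counts in Table 4 can be derived mechanically once each incident is encoded as a sequence of goal-state codes (the representation used in Appendix A). A minimal sketch, using as input only the two sequences quoted in this report (incidents 1 and 2); the rest of the data set is not reproduced here:

```python
from collections import Counter

# Each incident is a sequence of goal-state codes from Table 3.
# Only the two sequences given in the text are used here.
incidents = {
    1: ["A", "B", "H", "I", "C"],   # The Harassing F-4s
    2: ["A", "D", "E", "F", "D"],
}

freq = Counter()            # total occurrences of each goal state
incident_count = Counter()  # number of different incidents containing it

for codes in incidents.values():
    freq.update(codes)            # count every occurrence
    incident_count.update(set(codes))  # count each incident at most once

# e.g., D occurred twice overall, but in only one incident
print(freq["D"], incident_count["D"])  # prints: 2 1
```

Run over all 14 encoded incidents, the same two counters would reproduce the "Freq." and "Different Incidents" columns of Table 4.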

Examine Common Goal States Across Incidents

We examined common goal states across all incidents in the next phase of analysis, step 10 in our process. The goal of this stage of data analysis is to identify the cues that are used to achieve a goal, independent of specific incidents. In order to elaborate on the components of each of the goals listed in Table 3, we identified the cues and patterns of cues that the CIC crew were sensitive to while they tried to achieve each of the goal states. We anchored this process by first focusing on the goal states of one incident, the Harassing F-4s. From this one incident, we later branched out to include the other incidents.

To illustrate how we performed this analysis, consider the goal "determine intent" within the Harassing F-4 example. The cues that the CIC crew attended to while they were trying to determine the intent of the F-4 tracks were gathered and listed in a cue inventory for the goal "determine intent." The resulting cue inventory for the Harassing F-4s incident is as follows:

Cue inventory:

Intelligence reports

Recent hostilities/activities

Presence of a new track

Course-intercept and circling

Range of tracks to ownship

Point of origin

Change in range of tracks to ownship

EW emissions

Change in course

Flight profiles

These are cues that were present in the critical incident. A similar cue inventory was developed for each of the remaining goal states in the Harassing F-4s incident. The decisions that system users must make, the strategies they invoke to make these decisions, and the cues necessary for making the decisions constitute the task decision requirements. Specifying the decision requirements for user tasks and goals is the basis for designing human-computer interfaces that support the decisions that are made in operational settings.

Once we had the cue inventory for each goal state, we then developed a method for combining instances of common decision requirements across incidents, the process for which is illustrated in Figure 2.

Figure 2.

Data Transformation and Analysis

[Figure 2 about here]

   

The incidents in Figure 2 are arranged in columns, with the incident number at the top and the sequence of goal states listed below. Thus, for incident #1, the Harassing F-4s, decision requirements A, B, H, I, and C were entered (determine intent, recognition of a problem, actions to avoid escalation, actions towards engaging, monitor on-going situation). For incident 2, the sequence of decision requirements is A, D, E, F, and D.

What we are interested in at this stage is moving from specific accounts of critical incidents to an examination of decision requirements independent of any specific incident. We want to specify which cues, and which relationships among cues, were used across incidents to achieve operator goals such as determine intent. By representing incidents as a sequence of decision requirements, as in Figure 2, it is apparent which of the 14 incidents share common goals. For example, the goal determine intent ('A' in Figure 2) is present in all of the incidents shown in Figure 2. The goal of avoiding escalation ('H' in Figure 2) is present in incidents 1, 3, 4, and 14, but is not present in incidents 2 or 5.

The next step in specifying decision requirements was to develop an overall, cumulative cue inventory for each operator goal. To do this, each goal was considered separately, in the context of those of the 14 incidents in which the goal was present. Cues and relationships among cues were added to the cumulative cue inventory if operators used them to achieve the goal. As an example, Table 5 shows the final cue inventory for the goal state determine intent, across all incidents in which it occurred.
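The pooling step described above is mechanical, and a minimal sketch may make it concrete. In this hypothetical Python encoding (the incident contents, goal names, and cue names are illustrative stand-ins, not the actual coded data), each incident is a sequence of (goal, cues) pairs, and the cumulative inventory for one goal is a frequency count over every incident in which that goal appears:

```python
from collections import Counter

# Hypothetical coding of two incidents as sequences of (goal, cues) pairs.
incidents = {
    1: [("determine intent", ["range", "course", "point of origin"]),
        ("avoid escalation", ["range", "EW bearing"])],
    2: [("determine intent", ["range", "IFF"]),
        ("monitor on-going situation", ["course"])],
}

def cumulative_cue_inventory(incidents, goal):
    """Pool cue usage for one goal across every incident where it occurs."""
    counts = Counter()
    for steps in incidents.values():
        for g, cues in steps:
            if g == goal:
                counts.update(cues)
    return counts

# "range" appears under determine intent in both incidents, so its count is 2.
```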

Table 5 shows which cues were used, across all incidents, where operators were faced with determining the intentions of an unknown track. The first column shows the information type of the cue (information types are listed at the bottom of Table 5). For example, information type I means that the source of the cue was Intelligence and Warnings. Information type T stands for a cue that was gathered from Track information, and so forth.

Table 5.

Cue Inventory for the Goal "Determine Intent"

Information type    Cue (frequency)    Frequency of cue causing SA shift

I Intel (1)

P Recent hostilities/activities (1) 1

T New track (1)

T Course (7) intercept, erratic, circling 3

T Range (7) 4

T Point of origin (3) 2

T Change in range (1)

A EW bearing (2), none (3), identify signature (1)

T Change in course (1)

P Knowledge of enemy tactics/weapon (2) 1

A Response to warnings (none) (1)

T Speed (1)

T Change in speed (1) 1

T Number of tracks (1)

T Altitude (1)

T IFF (1) 1

T Formation (1)

P Flight profiles (1)

T VAS (1)

 

Information types:

T Track info (13/27) P Profiles, Experience, Knowledge (3/4)

S Status (0/0) R Restrictions, Constraints (0/0)

M Match ups (0/0) I Intelligence & Warnings (1/1)

C Communication (0/0) A Actions by track(s) (2/7)

O Other (0/0)

The cue name is in the second column of Table 5, followed by the frequency with which the cue was observed across incidents. The cue called "point of origin" (sixth row in Table 5) was used on three separate occasions to determine the intentions of a track. Some of the cues in the cue inventory were coded in earlier phases of the analysis as critical cues (see Kaempf, Wolf, Thordsen, & Klein, 1992). Critical cues are cues important enough to cause a shift in situation assessment, or a significant deepening in understanding of the situation. Critical cues that contributed to determining the intention of a track are listed in the third column of Table 5. For example, range information (row five in Table 5) was used to assess the intent of a track seven times across the 14 incidents. In four of these instances, range information was coded as critical to situation assessment.

The third source of information in Table 5 is the summary data at the bottom of the table. This information was used to organize the cue inventory into categories of cues and gives some indication of cue importance. For example, of the 19 different cues listed in Table 5, 13 are types of track information (indicated by the 'T' in the left-hand column). The total number of occurrences of track information cues (allowing for duplicates) is 27. These data are indicated at the bottom of Table 5 (e.g., Track info (13/27)).
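The summary counts at the bottom of Table 5 can be derived directly from the inventory rows. The sketch below (with a few made-up rows; the real inventory has 19) tallies, for each information type, the number of distinct cues and the total number of occurrences, producing pairs in the same "Track info (13/27)" form:

```python
from collections import defaultdict

# Hypothetical inventory rows: (information type, cue name, frequency).
rows = [
    ("T", "course", 7),
    ("T", "range", 7),
    ("T", "point of origin", 3),
    ("P", "flight profiles", 1),
]

def summarize_by_type(rows):
    """Return {type: (distinct cues, total occurrences)}, e.g., "T": (3, 17)."""
    distinct = defaultdict(int)
    total = defaultdict(int)
    for info_type, _cue, freq in rows:
        distinct[info_type] += 1
        total[info_type] += freq
    return {t: (distinct[t], total[t]) for t in distinct}
```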

The cue inventories for each goal state (listed in Table 4) are contained in Appendix B.

Generate Interface Support Techniques

Step 11 in our analysis process is to generate interface support techniques. Ultimately, our goal is to incorporate what we have learned from the 14 incidents into recommendations for CIC HCIs. With this in mind, we examined each of the incidents and specified features and aids that might have helped the operators while the incidents were taking place. We tried to imagine a display that would have specifically supported the decisions that were made in each incident, without concern for how the displays would perform in other circumstances.

For example, in five incidents we noted that it would have been beneficial if the system had highlighted all available resources adequate to handle a particular threat, i.e., a resource-threat match up feature. Such a feature would be helpful in ruling out resources that at first appear appropriate but are in fact inadequate for less obvious reasons (such as low fuel or aircraft weapons status).

During this process of display feature generation, no attempt was made to restrict the ideas generated. That is, an idea was not rejected because the current AEGIS equipment could not accommodate it, or because the implementation of such a feature was not immediately apparent. Our goal at this stage was to generate a broad list of ideas and concepts; evaluation would come later.

This list of display features is contained in Table 6. The feature name is listed in column one, followed in column two by sub-levels (or different versions) of the incident aid. For instance, the history feature could be used to display the historical data on a track of interest, or the historical record of ownship. Column three gives a brief description of the incident aid, and column four shows the number of incidents where the aid may have been useful.

Table 6.

Potential Incident Aids

 

FEATURE

SUB-LEVELS

DESCRIPTION

# OF INCIDENTS

History

For Track

A visual record of a track's movements and actions.

3

 

 

For Self

A visual record of the movements and actions of ownship.

1

Vulnerability

Zone of control (dynamic)

 

An indication of open/vulnerable areas typically caused by the movement of CAP to resolve some problem.

3

 

 

 

Other responsibilities

An indication of the vulnerabilities of ships under the care of ownship.

2

Actions permitted

 

 

A description of the available actions that could be taken in response to a threat. Should be tied to ROE, i.e., list actions that are allowable under ROE.

1

 

Track of interest

 

 

A means of determining when a track becomes a factor, particularly in cases where a track carrying missile X will be able to reach you.

3

Typical profiles

For Tracks

 

An indication that the profile of a track is typical of some common pattern X. A simple example: if speed is above mach and altitude is above Y thousand feet, this is probably a missile. Could use feature matching.

5

 

 

 

Base Rates

An indication of the typicality of certain events. For example, we haven't typically had to warn tracks; when we do, 75% of them have been foreign military.

1

Tripwires

Reminders

Built-in reminders that limits have been crossed, e.g., an unknown is now within 75 miles of ownship, or a track has changed to an intercept course.

3

 

 

Recommendations

Recommendations that are commonly associated with certain critical situations. For example, an unknown is within 20 miles of ownship: Do you want CIWS up? Missiles on the rails?

4

ATC mode

 

 

A mode of operating when near airfields. In some cases, ownship takes responsibility for the direction of aircraft, but lacks a complete picture of all the aircraft leaving the airfield.

2

Story aid

 

 

A means of identifying tracks. This feature should allow the operator a method of testing and comparing alternative hypotheses about the identity and intent of tracks.

3

Diagnostic resources

 

 

This should highlight all the diagnostic resources at the disposal of ownship.

6

 

Track Info

Degree of threat

A system generated indication of the perceived level of threat posed by a particular track. The system should be able to easily provide its rationale for its assessment.

3

 

 

CRO info

All relevant track information (speed, altitude, heading, range) should be located near the track and on-screen, not on the CRO.

1

 

 

Trend info

An indication of relevant trends about a track (particularly altitude, but also bearing, range, and speed).

4

 

 

EW info

EW information about particular tracks should flow more easily through the system.

1

Shared SA

Across platforms

Pertains to the need to share interpretations of ongoing events among friendly resources. Radio circuits are already overloaded.

2

 

 

Grid discrepancies

A means of resolving discrepancies among information sources by flagging where the incongruencies are.

2

Disconfirming

Discrepancies

In this case the user should be aided in discovering when his preset expectancies have been violated. For example, "This appears to be the raid": System: "It is smaller than you expected."

2

 

 

Absence of X means Y

This feature should highlight the inconsistencies in the user's interpretation of what is taking place. For example, if a missile had supposedly been launched from 20 miles away and three minutes have passed, the system should make you aware of the fact that the missile should have come by now.

2

Weapon system status

 

 

A feature to quickly indicate what weapons are available and what weapons are currently being employed. This should make use of doctrine in guiding the prioritization of available weapons.

4

Common system status

 

 

An indication of all available information sources. For example, if your Link 11 is down, present alternative information sources.

3

Resource status

(e.g., fuel, weapons)

The status of various resources should be clear and easy to access.

2

 

 

Allow projections

In cases where an asset is to be used to try to run an intercept, a feature is needed to ensure that the asset has the necessary resources.

1

Resource availability

By track

 

An indication of the capabilities of particular assets. For instance, click on an F-18 and see that it can provide recon information to X level, has Y ability to transfer that information, and Z weapons capability.

5

 

 

By system

There may be instances in which the user wishes to use a particular type of system and wants to know which assets have that system. For example, show what tracks have Sidewinder missiles.

1

Resource threat matchups

 

 

This feature should highlight how well your resources can handle incoming problems. The system could recommend appropriate threat-resource matchups using info like fuel and weapons status as well as distance.

6

 

Windows of Opportunity

 

 

Aids in timing of activities. For example, how long would it take to get aircraft X over here versus how long before the unknown track reaches my MEZ. "You have 3 minutes to begin the intercept: Else it will be a tail chase."

8

 

Track continuity

Maintain track integrity

A feature to keep a closer contact on tracks that are "tickling" the system. When tracks only enter the system briefly the user could be alerted that a track has disappeared.

1

 

 

Highlighting new tracks

Tied to the feature above. As new tracks enter the system, the user could declutter all tracks but those that have appeared in the last X seconds.

1

Feedback about actions taken

 

 

A visual indication of actions that have been taken. For example, if a track is being illuminated, brighten the luminance of that track on-screen.

3

Anti-tunnel vision

 

 

A feature to ensure that the user does not stay focused at too fine a grain for too long. E.g., you have been at 16 miles for 2 minutes.

1

Gaming out problems

Simulation

A more interactive mode in which the user can test out hypotheses and plans before implementation.

2

 

 

For single tracks

A prediction of track behavior based on what it has done and is doing. Tied to track history but with a predictive element.

1

 

 

Running vectors

A feature to ensure that the user is always able to easily provide exact vectors to CAP and other resources being assigned.

1

 

 

In addition to generating incident aids for the 14 incidents (Table 6), we also generated design ideas for the goal states listed in Table 4, which are incident-independent. This was accomplished by developing display concepts to present the critical cues that impacted each goal state. These critical cues were discussed in an earlier section; they are shown in Table 5 for the goal determine intent, and in Appendix B for the remaining goal states. Again in this exercise, our goal was to generate as many display ideas as possible, without being constrained by interactions among display techniques or by current technology. Some of the design issues we addressed were:

• Identify ways to present moderately familiar cues and factors that permit the user to recognize missing information, provide a basis for reassessment, and allow generation of a story to account for the evidence. For example, how can speed, change in speed, point of origin, course, EW, and a recent history of hostilities be represented in a fashion that permits the construction of an explanation for the data, but also helps identify important missing information, such as IFF? The display should also not preclude other possible explanations for the evidence.

• Identify ways to present information on the display so that it supports both situation assessment (SA) and taking a course of action (COA).

• Identify ways to represent the relationships among cues. For example, a change in speed may represent hostile intent if the aircraft is also on an intercept course with ownship.

• Identify ways to represent specific cues and factors to support feature-matching strategies. For example, how should altitude, speed, point of origin, and EW be represented to best allow the user to identify a track as a hostile F-4?

• Identify ways to represent specific cues and factors on a display. For example, what is the best way to show a track's "point of origin"?

For each decision requirement across incidents, display ideas were generated for the most frequently used, and most critical cues. Once compiled, the final step for Task 3 was to storyboard selected incidents to demonstrate how displays might look with our ideas implemented.

Build Storyboards of Interface Concepts

The final step to decision-centered design is to develop storyboards of the interface concepts. Storyboarding is a modeling technique that is used to demonstrate, in a tangible form, how a design idea may perform when formally implemented. To demonstrate our initial DSS concepts, we have built storyboards of a composite incident. Storyboards depicting eight display features are contained in the last section of this report.

Through the course of our discussions, we have made reference to the decision strategies employed during the incidents. Through the efforts of Task 1 (Kaempf, Wolf, Thordsen, & Klein, 1992) and Task 2 (Zsambok, Beach, & Klein, 1992), we concluded that the major decision strategies employed in a CIC environment are feature matching and story generation. A DSS that is built for a CIC should support these two decision strategies. For feature matching, the data must be presented in a fashion that allows the operator to use his/her recognitional abilities to identify, at a glance, the track of an F-4, for example. To support story building, the operator must be able to consider other sources of data in order to construct a story to explain a situation that is not recognizable as familiar. The operator must be able to examine, for example, events along the path of a track (e.g., when and where fire control radar was used). This sequence of events may help the operator construct a story. It should be noted that a DSS interface should be capable of supporting both strategies simultaneously, since a DSS built to predict which strategy an operator will use at a given moment would be very error prone. Thus, in the interface concepts discussed below, we attempted to support both strategies. By taking such an approach, the operator's decision making will be supported regardless of whether a feature matching or a story building strategy is used.

 

We have developed eight major display enhancements to the current CIC HCI that we believe would support the two decision strategies identified in our analyses. The enhancements are listed in Table 7 and described, in turn, below.

Table 7.

List of Proposed Enhancements

Improvements in the Presentation of Information

(1) Changes in Symbology

(2) Track Information Box

Highlighting Critical Tracks

(3) Tracks of Interest

(4) Tripwires

Features for the Assessment of Tracks

(5) Improved Track History

(6) Track Identification

Facilitation of Actions to be Taken

(7) Improved Intercept Capability

(8) Weapon Release Ranges

(1) Changes in Symbology.

We recommend a major change in symbology. The number of icons in our storyboards is reduced to three, yet we are still able to represent air, surface, and subsurface tracks. Different colors, rather than shapes, are used to distinguish friendly, enemy, and hostile tracks. A second key change is to move the vector line indicating heading and speed to the rear of the icon, where it could be used to build a more extensive track history. Finally, the amount of color on the screen is greatly reduced. In the current system, the primary function of color is to distinguish between land and water. We believe that the distinction between land and water can still be made, but with considerably less color. Changes to the symbology can support feature matching and story generation in that more of the relevant data, and relationships among data, can be present on screen at the same time.

(2) Track Information Box

In many incidents, the same set of cues were consistently used in the identification and assessment of intent for air and surface tracks, whether the operator used feature matching or story generation. Therefore, we propose moving this set of cues directly onto the display where the user can have rapid access to them.

 

(3) Tracks of Interest

Certain tracks have identifiable characteristics that make them worth investigating. For example, unknown tracks that fly low and slow usually require the attention of the user. The Track of Interest feature automatically checks for tracks of this type and brings them to the user's attention by shading the icons that represent them.
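A rule of this kind reduces to a simple predicate over track attributes. The sketch below is a hypothetical illustration; the attribute names and the "low and slow" thresholds are placeholders, not doctrinal values:

```python
def is_track_of_interest(track, max_speed_kts=250, max_alt_ft=2000):
    """Flag unknown tracks that are flying low and slow.

    The thresholds are illustrative defaults only.
    """
    return (track["identity"] == "unknown"
            and track["speed_kts"] <= max_speed_kts
            and track["altitude_ft"] <= max_alt_ft)

# A slow, low unknown is flagged; a high, fast friendly is not.
suspect = {"identity": "unknown", "speed_kts": 120, "altitude_ft": 500}
airliner = {"identity": "friendly", "speed_kts": 450, "altitude_ft": 33000}
```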

(4) Tripwires

The Tripwires feature is the control center for adjustments to key elements for monitoring a given track: range, course, speed, and altitude. The Tripwires feature has both a broad and a close control function. The broad function (to be set prior to the beginning of one's watch) provides the mechanism for determining which tracks are identified as Tracks of Interest. The close control function is to be used during an evolving situation to look for sudden changes in tracks that are already suspect.
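One way to picture the close control function is as a list of per-track limit checks that fire alerts when crossed. Everything in this sketch (parameter names, limits, alert wording) is illustrative, not an actual system specification:

```python
import operator

# Hypothetical tripwires: (track parameter, comparison, limit, alert message).
TRIPWIRES = [
    ("range_nm", operator.lt, 75, "Unknown is now within 75 nm of ownship"),
    ("altitude_ft", operator.lt, 1000, "Track has descended below 1,000 ft"),
]

def check_tripwires(track, tripwires=TRIPWIRES):
    """Return the alert messages for every limit this track has crossed."""
    return [msg for param, compare, limit, msg in tripwires
            if compare(track[param], limit)]

# A track at 60 nm and 5,000 ft trips only the range tripwire.
alerts = check_tripwires({"range_nm": 60, "altitude_ft": 5000})
```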

(5) Improved Track History

During our observations of the current system, we rarely saw users pulling up the history for a particular track. Yet in several instances this information could have been important to the situation assessment. We propose that Track History be made easier to use and that it include slightly more information. By moving the vector stick to the rear of the track, the historical evolution of a track's behavior becomes clearer. We also propose adding an extra piece of information to Track History: electronic warfare emissions. In several cases, a CO or Tactical Action Officer (TAO) could easily identify the source of the emissions and the type of radar used. Tagging when and where EW activity occurs should improve the user's overall awareness of the track's intent.

This feature supports both feature matching and story generation. Feature matching is supported because the data used in a feature match is present in one place, on screen; story generation is supported because historical information, such as the sequence of events, is readily available on screen.

 

(6) Track Identification

During some of the incidents, users were able to employ a feature-matching strategy to tentatively identify tracks. The Track Identification feature takes the set of cues mentioned by the interviewees and compares them to a set of profiles to be constructed by an expert panel. The user can then view the evidence for and against these profiles to make a tentative identification. This feature takes advantage of both decision strategies. The DSS attempts to do a match for several different profiles. The results can be used by the operator to either verify a feature match, or generate ideas to build a story to explain the situation.
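The comparison against expert-built profiles can be sketched as interval matching, with the evidence for and against each profile kept separate so the user can inspect it. The two profiles and their cue ranges below are invented stand-ins, not the product of an actual expert panel:

```python
# Hypothetical profiles: cue name -> (low, high) acceptable range.
PROFILES = {
    "hostile F-4": {"speed_kts": (400, 700), "altitude_ft": (500, 20000)},
    "commercial air": {"speed_kts": (300, 550), "altitude_ft": (25000, 40000)},
}

def match_profiles(track, profiles=PROFILES):
    """For each profile, list the cues that fit its ranges and those that do not."""
    results = {}
    for name, ranges in profiles.items():
        fits, conflicts = [], []
        for cue, (lo, hi) in ranges.items():
            (fits if lo <= track[cue] <= hi else conflicts).append(cue)
        results[name] = {"for": fits, "against": conflicts}
    return results

# A fast, low track fits the F-4 profile; its altitude argues against com-air.
evidence = match_profiles({"speed_kts": 480, "altitude_ft": 1500})
```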

(7) Improved Intercept Capability

The trial intercept feature on the current system is heavily used by CIC crew members. However, it appears that it could be enhanced in several ways, based on our findings from the 14 incidents. First, the system could automatically nominate CAP or other resources best able to perform an intercept. This nomination should be based on a few critical factors such as range, fuel, and type of aircraft. A second enhancement is the addition of times to perform the intercept. Users need to know how much time they have before an alternative action must be taken. Finally, we propose that two trial intercept lines be used, one based on current speed and one at Buster speed. This also will assist the decisionmaker with timing issues. One way we envision that this feature would be used is to find out whether CAP could perform an intercept in the time available.
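The timing comparison at the heart of this enhancement is straightforward: the CAP's flight time to the intercept point, at current speed and at Buster speed, versus the time before the unknown reaches the MEZ. All distances and speeds in the sketch are invented for illustration:

```python
def flight_time_min(distance_nm, speed_kts):
    """Minutes to cover distance_nm at speed_kts."""
    return 60.0 * distance_nm / speed_kts

def intercept_feasibility(cap_dist_nm, current_kts, buster_kts,
                          threat_dist_to_mez_nm, threat_kts):
    """Compare CAP arrival times against the threat's time to reach the MEZ."""
    deadline = flight_time_min(threat_dist_to_mez_nm, threat_kts)
    return {
        "deadline_min": deadline,
        "at_current_speed": flight_time_min(cap_dist_nm, current_kts) < deadline,
        "at_buster_speed": flight_time_min(cap_dist_nm, buster_kts) < deadline,
    }

# CAP 80 nm out at 400 kts (700 kts at Buster) vs. a 500-kt unknown
# 60 nm from the MEZ: the intercept works only at Buster speed.
result = intercept_feasibility(80, 400, 700, 60, 500)
```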

(8) Weapon Release Ranges

The final feature is the Weapon Release Range feature. In some incidents decisions could have been made more easily if information about a potential threat's lethality range had been available. We propose that lethality ranges be established for a typical set of adversaries and that the range be represented graphically. This feature should make engagement decisions clearer.
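Graphically, each lethality range is a ring around the threat; the underlying check is a single comparison. The weapon names and ranges below are illustrative placeholders, not validated threat data:

```python
# Hypothetical lethality ranges, in nautical miles, for typical threat weapons.
LETHALITY_NM = {"anti-ship missile A": 38, "anti-ship missile B": 50}

def inside_lethality_range(weapon, range_to_track_nm, table=LETHALITY_NM):
    """True when ownship is within the threat weapon's release range."""
    return range_to_track_nm <= table[weapon]
```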

After reviewing these enhancements, one point should be clear: they are simple, and they present no information that is not already available somewhere on the existing system. But that is not the point. These simple suggestions are based on an analysis of the decision requirements of tasks operators perform in a CIC. The difference is in the packaging of the data and in how much effort is needed to make the necessary information available in order to make a decision. The display ideas presented in the storyboards in the next section are based on the cues, and relationships among cues, that were used in the feature matching and story generation strategies.

 

Storyboards

This section contains the storyboards depicting the eight display features we have developed. The storyboards present the features in the context of a scenario to demonstrate how the feature works. Each storyboard contains a text explanation of the scenario and how the feature functions.

References

Andriole, S. J. (1989). Storyboard prototyping: A new approach to user requirements analysis. Wellesley, MA: QED Information Sciences, Inc.

Kaempf, G. L., Wolf, S., Thordsen, M. L., & Klein, G. (1992). Decisionmaking in the AEGIS combat information center. Fairborn, OH: Klein Associates Inc. Prepared under contract #N66001-90-C-6023 for NCCOSC, San Diego, CA.

Klein, G. A. (1989). Recognition-primed decisions. In W.B. Rouse (Ed.), Advances in Man-Machine System Research, 5, 47-92. Greenwich, CT: JAI Press, Inc.

Klein, G. (1992). Decisionmaking in complex military environment. Fairborn, OH: Klein Associates Inc. Prepared under contract #N66001-90-C-6023 for NCCOSC, San Diego, CA.

Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19(3), 462-472.

Noble, D. (1991). Development of a tool to aid recognition-primed decision making. Vienna, VA: Engineering Research Associates.

Zsambok, C. E., Beach, L. R., & Klein, G. (1992). A literature review of analytical and naturalistic decision making. Fairborn, OH: Klein Associates Inc. Prepared under contract #N66001-90-C-6023 for NCCOSC, San Diego, CA.

Appendix A. Goal states identified from the 14 incidents.

GOAL LIST

 

Incident #1: Harassing F-4s

A Determine Intent

B Recognition of Problem (more than just a track of interest)

C Actions to avoid escalation

D+A Actions towards engaging + Determine Intent

E Monitor on-going situation

 

Incident #2: Wandering helicopter

E Monitor Tracks of Interest

B Recognition of problem (more than just a track of interest)

F+A Identify Track + Determine Intent

C Actions to avoid escalation

E Monitor on-going situation

 

Incident #3: Chain Saw

I Conduct/Evaluate/Monitor all-out engagement

I Conduct/Evaluate/Monitor all-out engagement

I Conduct/Evaluate/Monitor all-out engagement

I Conduct/Evaluate/Monitor all-out engagement

K Reset Resources

 

Incident #4: Maritime Patrol

B+F Recognition of problem (more than just a track of interest) + Identify track

O Follow Instructions/Standing Orders

H Self Defense (CIWS)

D Actions towards engaging

E Monitor on-going situation

C Actions to avoid escalation

C+F Actions to avoid escalation + Identify Track

 

 

Incident #5: Silkworm Site

B Recognition of problem

L Collect Intelligence

D Actions towards engagement

L Collect Intelligence

D Actions towards engaging

 

Incident #6: Le Combattant

F+H Identify Track + Self Defense

A Determine Intent

D Actions towards engaging

D Actions toward Engaging

H Self Defense

D Actions towards engaging

H Self Defense

D Actions toward engaging

G Resource allocation

E Monitor on-going situation

 

Incident #7: CAP to Feint

I Conduct/Evaluate/Monitor all-out engagement

F Identify Tracks

I Conduct/Evaluate/Monitor all-out engagement

O Work around distractions so can continue to Conduct/Evaluate/Monitor all-out engagement

O Work around distractions so can continue to Conduct/Evaluate/Monitor all-out engagement

K Reset Resources

B+I Recognition of Problem + Conduct/Evaluate/Monitor all-out engagement

 

Incident #8: Stop Badgering Me

J Monitor Track of Interest

B+F Recognition of Problem (more than just a track of interest) + Identify Track

F Identify Track

C Actions to Avoid Escalation

E Monitor on-going situation

Incident #9: Power Projection

H Self Defense

I Conduct/Evaluate/Monitor all-out engagement

I Conduct/Evaluate/Monitor all-out engagement

I Conduct/Evaluate/Monitor all-out engagement

F+K Identify Track + Reset Resources

 

Incident #10: Harpoon Fire

B Recognition of Problem

A Determine Intent

D+H Actions towards engaging + Self Defense

O Uncoded

O Uncoded

 

Incident #11: E-2 Problems

F Identify Track

F Identify Track

G+E Allocate Resources + Monitor On-Going Situation

M Trouble Shoot

M+G Trouble Shoot + Resource Allocation

F Identify Track

K Reset Resources

 

Incident #12: Low Flying Non-Squawkers

B Recognition of Problem (more than just a track of interest)

F Identify Track

F+A Identify Track and Determine Intent

F+A Identify Track and Determine Intent

 

Incident #13: The Bear Box

J+F Monitor Track of Interest + Identify Track

B+G Recognition of Problem + Allocate Resources

M Trouble Shoot

G+I Resource Allocation + Conduct/Evaluate/Monitor all-out engagement

K Reset Resources

Incident #14: Phantom Exocet

N Determine Location

F Identify Track

E+N Monitor On-going situation + Determine Location

H+D Self Defense + Actions toward engaging

B Recognition of Problem

E Monitor on-going situation

Appendix B. Cue inventories for each goal state listed in Table 4.

 

Goal: Taking Actions toward Engaging Tracks

Information types:

T Track info (7/20) P Profiles, Experience, Knowledge (1/1)

S Status (2/4) R Restrictions, Constraints (2/4)

M Matchups (4/10) I Intelligence & Warnings (0/0)

C Communication (3/6) A Actions by track(s) (1/3)

O Other (1/1)

Cue Inventory:

Information type    Cue (frequency)    Frequency cue caused SA shift

T Course (6), intercept (1) 4

T Range (6) 6

R ROE (constraints) (3)

T Change in course (1)

R Warning/weapons status (1)

M Enemy weapons release range (3) 1

T Altitude (1)

T Point of origin (1)

C Visual reports (from bridge) (1)

M Own weapons capabilities reaction times (3) 1

S Own resource assets (3) 1

A EW bearing lines (2), jamming (1) 2

C Audio confirmation (3) 1

P Recent hostilities/activities (1)

M Control of assets (1)

S Fuel Status (1) 1

T Tracks (6) 1

T Speed (4) 4

M Own weapons release range (3) 1

C Audio reports (2)

O Reaction of automatic equipment (1)

 

 

Goal: Determine Intent

T Track info (13/27) P Profiles, Experience, Knowledge (3/4)

S Status (0/0) R Restrictions, Constraints (0/0)

M Matchups (0/0) I Intelligence & Warnings (1/1)

C Communication (0/0) A Actions by track(s) (2/7)

O Other (0/0)

Cue Inventory:

Information type    Cue (frequency)    Frequency cue caused SA shift

I Intel

P Recent hostilities/activities 1

T New track (1)

T Course (7) intercept, erratic, circling 3

T Range (7) 4

T Point of origin (3) 2

T Change in range (1)

A EW bearing (2), none (3), identify signature (1)

T Change in course (1)

P Knowledge of enemy tactics/weapon (2) 1

A Response to warnings, none (1)

T Speed

T Change in speed 1

T Number of tracks

T Altitude

T IFF 1

T Formation 1

P Flight profiles

T VAS

Goal: Actions to avoid escalation

T Track info (8/15) P Profiles, Experience, Knowledge (2/2)

S Status (0/0) R Restrictions, Constraints (0/0)

M Matchups (0/0) I Intelligence & Warnings (0/0)

C Communication (2/2) A Actions by track(s) (3/4)

O Other (0/0)

Cue Inventory:

Information type    Cue (frequency)    Frequency cue caused SA shift

A EW bearing lines (2)

T Change in course (1) 1

T Change in range (2)

P Profile (1)

T Range (4) 4

C Communication with ship (1) 1

T Course (4) 1

T Change in altitude (1)

T Location of tracks (1)

A Absence of reaction to illumination (1)

T Range, CAP to Link track (1)

A No visible hostilities (ROE) (1)

T Number of tracks (1) 1

C Communication with CAP 1

P Recent experiences/hostilities (1)

 

 

Goal: Recognition of Problem

T Track info (11/18) P Profiles, Experience, Knowledge (3/3)

S Status (1/1) R Restrictions, Constraints (2/3)

M Matchups (0/0) I Intelligence & Warnings (1/2)

C Communication (3/6) A Actions by track(s) (1/4)

O Other (0/0)

Cue Inventory:

Information type    Cue (frequency)    Frequency cue caused SA shift

T Course: orbiting (1), erratic (1), intercept (3) 3

A EW: search radar (1), radar (2), jamming (1) 4

S CIC crew status (1)

R Mission responsibility (flagship) (1)

P Ship formation (1) 1

R Previous experience (1), recent hostilities (1)

T Range (2)

C Communication with cruiser (2) 1

T Change in range 1

T Point of origin

T Speed of air contact

T Altitude

T VAS 1

T IFF, none (2)

P Corridors

I Intel (2)

P Profile com-air vs. military (1)

C Communication with E-2 1

C Confirming info (3) 2

T Number of tracks (1) 1

T Location of tracks (2)

T Absence of tracks (1) 1

Goal: Monitoring an On-going Situation

T Track info (9/34) P Profiles, Experience, Knowledge (2/3)

S Status (0/0) R Restrictions, Constraints (1/2)

M Matchups (3/3) I Intelligence & Warnings (0/0)

C Communication (3/3) A Actions by track(s) (2/4)

O Other (2/2)

Cue Inventory:

Information type    Cue (frequency)    Cue caused SA shift

A EW bearing lines (1), change in (1), absence (1) 2

T Course (9) 3

T Change in course (2) 1

T Range (3) 2

O Elapsed time (1)

T Change in range (1)

T Altitude (2)

T Track evaluation (6), location (5), absence (1) 1

M Own defense systems (1)

R ROE (2)

A Tripwire/expectancy - turn out is a hostile act (1)

C Communication with CAP (1) 1

M Enemy weapon's release range (1)

M Own weapon's release range (1)

T Speed (3) 1

O Sonar (1) 1

T Change in speed (1) 1

P Knowledge of enemy weapon/capabilities (2) 1

T Number of tracks (1)

C Communication with E-2 (1)

P Recent experience/hostilities (1)

C Communication with other ships (1)

Goal: Identify Track

T Track info (10/52) P Profiles, Experience, Knowledge (8/13)

S Status (0/0) R Restrictions, Constraints (0/0)

M Matchups (2/2) I Intelligence & Warnings (1/3)

C Communication (2/3) A Actions by track(s) (3/10)

O Other (1/1)

Cue Inventory:

Information type    Cue (frequency)    Cue caused SA shift

T Course (10) 4

T Track eval (11), presence/absence (3)

T Speed (7) 5

T Range (1), unknown to hostile country (1) 2

A EW (5), absence (2) 3

P Knowledge of profile of helicopter (1)

P Knowledge of enemy weapons (1)

T Point of origin (2)

P Knowledge of com-air and routes (4) 2

T VAS (2) 1

T Altitude (4) 3

T IFF (6)

A Warnings did not work (1)

M Ship status, can defend (1) 1

A No signs of visible hostilities (ROE) (2) 2

P Knowledge of enemy EW signatures (2) 2

P Knowledge of tactics (1)

T Location (4) 2

O When (1)

P Knowledge of Soviet flights in the area (1)

T Number of tracks (1) 2

C Confirmation info from other ships (2), CAP (1) 1

C Link with E-2 (2) 1

I Intel (3) 1

P Knowledge of profiles (2)

M Time - still time to ID (1) 1

P Previous experience (EP3) (1) 1
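The "(n/m)" figures in each goal summary above can be read as (number of distinct cues of that type / total cue mentions); for example, under the "Identify Track" goal, the ten distinct T (Track info) cues sum to 52 mentions, giving T (10/52). A minimal Python sketch of this tally follows; the cue rows and the function name are illustrative placeholders, not data taken verbatim from the report:

```python
from collections import defaultdict

# Hypothetical excerpt of a cue inventory: (info type code, cue name, mention count).
# Type codes follow the report: T = Track info, P = Profiles, A = Actions by track(s).
cues = [
    ("T", "Course", 10),
    ("T", "Speed", 7),
    ("A", "EW", 5),
    ("A", "Absence of EW", 2),
    ("P", "Knowledge of com-air routes", 4),
]

def summarize(cue_rows):
    """Return {type code: (distinct cues, total mentions)}, mirroring the
    '(n/m)' notation used in the goal summaries."""
    distinct = defaultdict(int)
    total = defaultdict(int)
    for code, _name, count in cue_rows:
        distinct[code] += 1   # one more distinct cue of this type
        total[code] += count  # accumulate mention frequency
    return {code: (distinct[code], total[code]) for code in distinct}

print(summarize(cues))
# → {'T': (2, 17), 'A': (2, 7), 'P': (1, 4)}
```

The same tally, applied to a full inventory, reproduces the per-type counts shown in each goal's summary block.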

ACRONYM LIST

 

AAW Anti-Air Warfare

AAWC Anti-Air Warfare Coordinator

ATC Air Traffic Control

CALOW Contingency and Limited Objectives Warfare

CAP Combat Air Patrol

CDM Critical Decision Method

CIC Combat Information Center

CIWS Close-in Weapon System

CO Commanding Officer

COA Course of Action

CRO Character Read Out

CSEDS Combat Systems Engineering and Development Site

DEFTT Decision Making Evaluation Facility for Tactical Teams (from the TADMUS project)

DSS Decision Support Systems

EW Electronic Warfare

GQ General Quarters

HCI Human-Computer Interface

IFF Identify Friend or Foe

MAD Military Air Distress

MEZ Must Engage Zone

MSS Missile Systems Supervisor

NCCOSC Naval Command, Control and Ocean Surveillance Center

NDM Naturalistic Decision Making

R.M. Resource Management

ROE Rules of Engagement

RPD Recognition-Primed Decision

A model proposed by Klein Associates to describe expert decision making.

SA Situation Assessment

SOP Standard Operating Procedures

TADMUS Tactical Decision Making Under Stress

TAO Tactical Action Officer

Right-hand person to the CO

VAB Variable Action Button