A Framework for Deception
Draft Report


Analysis and Design of Deceptions

A good model should be able to explain; a good scientific model should also be able to predict; and a good model for our purposes should help us design as well. At a minimum, the ability to predict leads to the ability to design by random variation and selective survival, with survival evaluated on the basis of prediction. In most cases, it is far more efficient to be able to create design rules that reflect some underlying structure.

Any model we build that is to have utility must be computationally reasonable relative to the task at hand. Far more computation is likely to be available for a large-scale strategic deception than for a momentary tactical deception, so it would be nice to have a model that scales well in this sense. Computational power is increasing with time, but not at such a rate that we will ever be able to completely ignore computational complexity in problems such as this.

A fundamental design problem in deception lies in the fact that deceptions are generally thought of in terms of presenting a desired story to the target, while the available techniques are based on what has been found to work. In other words, there is a mismatch between available deception techniques and technologies and objectives.

A Language for Analysis and Design of Deceptions

Rather than focus on what we wish to do, our approach is to focus on what we can do and build up 'deception programs' from there. In essence, our framework starts with a programming language for human deception by finding a set of existing primitives and creating a syntax and semantics for applying these primitives to targets. We can then associate metrics with the elements of the programming language and analyze or create deceptions that optimize against those metrics.

The framework for human deception then has three parts:

- a set of deception primitives drawn from techniques known to work,
- a syntax and semantics for combining those primitives into 'deception programs' directed at targets, and
- metrics associated with the elements of the language, against which deceptions can be analyzed and optimized.

The astute reader will recognize this as the basis for a computer language, but it differs from most other languages, most fundamentally in that it is probabilistic in nature. While most programming languages guarantee that when you combine two operators in sequence you get the effect of the first followed by the effect of the second, in the language of deception, a sequence of operators produces a set of probabilistic changes in the perceptions of all parties across the multi-dimensional space of the properties of deception. It will likely be effective to "program" in terms of desired changes in deception properties and allow the computer to "compile" those desired changes into possible sequences of operators.

The programming begins with a 'firing table' of some sort that looks something like the following table, but with many more columns filled in and many more details under each of the rows. Partial entries are provided for technique 1 which, for this example, we will choose as 'audit suppression' by packet flooding of audit mechanisms using a distributed set of previously targeted intermediaries.
Deception Property - Technique 1 (entries for Techniques 2 through n omitted)

name: Audit Suppression
general concept: packet flooding of audit mechanisms
means: using a distributed set of intermediaries
target type: computer
resources consumed: reveals intermediaries, which will be disabled with time
effect on focus of attention: induces focus on this attack
concealment: conceals other actions from target audit and analysis
simulation: n/a
memory requirements and impacts: overruns target memory capacity
novelty to target: none - they have seen similar things before
certainty of effect: 80% effective if intel is right
extent of effect: reduces audits by 90% if effective
timeliness of effect: takes 30 seconds to start
duration of effect: until ended or until intermediaries are disabled
security requirements: must conceal launch points and intermediaries
target system resource limits: memory capacity, disk storage, CPU time
deceiver system resource limits: number of intermediaries for the attack; pre-positioned assets lost with the attack
the effects of small changes: nonlinear effect on target, with a break point at the effectiveness threshold
organizational structure and constraints: goes after the known main audit server, which will impact audits across the whole organization
target knowledge: OS type and release
dependency on predisposition: must be the proper OS type and release to work
extent of change in target mind set: large change - it will interrupt them; they will know they are being attacked
feedback potential and availability: feedback apparent in response behavior observed against intermediaries and in other fora
legality: illegal except in high-intensity conflict - possible act of war
unintended consequences: impacts other network elements, may interrupt other information operations, may result in increased target security
the limits of modeling: unable to model overall network effects
counterdeception: if feedback is known or the attack is anticipated, it is easy to deceive the attacker
recursive properties: only through counterdeception
possible deception story: we are concealing something - they know this - but they don't know what

Considering that the total number of techniques is likely to be on the order of several hundred and the vast majority of these techniques have not been experimentally studied, the level of effort required to build such a table and make it useful will be considerable.
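
To make the idea concrete, the following is a minimal sketch, in Python, of how one entry in such a firing table might be represented and used. All field names and the independence assumption are our own illustrative choices, not part of any established tool; only the numbers for 'audit suppression' are taken from the example entries above.

from dataclasses import dataclass, field

# A single 'firing table' column as a data structure. Field names are our
# own illustrative choices; only a few of the table's rows are modeled.
@dataclass
class Technique:
    name: str
    target_type: str
    certainty: float                 # probability of effect if preconditions hold
    latency_s: float                 # seconds until the effect begins
    effects: dict = field(default_factory=dict)   # deception property -> change

# Values taken from the example entries for technique 1 above.
audit_suppression = Technique(
    name="Audit Suppression",
    target_type="computer",
    certainty=0.80,                  # "80% effective if intel is right"
    latency_s=30.0,                  # "takes 30 seconds to start"
    effects={"audit_volume": -0.90}, # "reduces audits by 90% if effective"
)

def expected_change(sequence):
    """Expected net change in each deception property for a sequence of
    techniques, treating techniques as independent (a strong simplifying
    assumption; technique interactions are what the full table must capture)."""
    totals = {}
    for technique in sequence:
        for prop, change in technique.effects.items():
            totals[prop] = totals.get(prop, 0.0) + technique.certainty * change
    return totals

print(expected_change([audit_suppression]))   # -> approximately {'audit_volume': -0.72}

Even a toy representation like this makes clear why the metrics matter: once each technique carries its own certainty, timing, and effect entries, candidate sequences can be compared mechanically against desired changes in deception properties.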

Attacker Strategies and Expectations

For a moment, we will pause from the general issue of deception and examine more closely the situation of an attacker attempting to exploit a defender through information system attack. In this case, there is a commonly used attack methodology that subsumes other common methodologies and there are only three known successful attack strategies identified by simulation and verified against empirical data. We start with some background.

The pathogenesis of diseases has been used to model the process of breaking into computers and it offers an interesting perspective. [63] In this view, the characteristics of an attack are given in terms of the survival of the attack method.
Table 7.1 from "Emerging Viruses"
1 Stability in environment
2 Entry into host - portal of entry
3 Localization in cells near portal of entry
4 Primary replication
5 Non-specific immune response
6 Spread from primary site (blood, nerves)
7 Cells and tissue tropism
8 Secondary replication
9 Antibody and cellular immune response
10 Release from host
"Pathogenesis of Computer Viruses"
1 Stability in environment
2 Entry into host - portal of entry
3 Localization in software near portal of entry
4 Primary replication
5 Non-specific immune response
6 Spread from primary site (disk, comms)
7 Program and data tropism
8 Secondary replication
9 Human and program immune response
10 Release from host
"Pathogenesis of Manual Attacks"
1 Stability in environment
2 Entry into host - portal of entry
3 Localization near portal of entry
4 Primary modifications
5 Non-specific immune response
6 Spread from primary site (privilege expansion)
7 Program and data tropism (hiding)
8 Secondary replication
9 Human and program immune response
10 Release from host (spread on)

This particular perspective on attack as a biological process ignores one important facet of the problem, and that is the preparation process for an intentional and directed attack. In the case of most computer viruses, targeting is not an issue. In the case of an intelligent attacker, there is generally a set of capabilities and an intent behind the attack. Furthermore, survival (stability in the environment) would lead us to the conclusion that a successful attacker who does not wish to be traced back to their origin will use an intelligence process including personal risk reduction as part of their overall approach to attack. This in turn leads to an intelligence process that precedes the actual attack.

The typical attack methodology consists of:

1 Intelligence gathering
2 System entry
3 Privilege expansion
4 Subversion
5 Exploitation

There are loops from higher numbers to lower numbers so that, for example, privilege expansion can lead back to intelligence and system entry or forward to subversion, and so forth. In addition, attackers have expectations throughout this process that adapt based on what has been seen before this attack and within this attack. Clean up, observation of effects, and analysis of feedback for improvement are also used throughout the attack process.
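
This loop structure can be made concrete as a transition relation over the attack phases. The sketch below is a hypothetical Python rendering; the specific loop-back transitions are assumptions drawn from the example in the text, not an exhaustive model.

# The attack methodology as a transition relation over phases. The loop-back
# transitions here are assumptions drawn from the example above ("privilege
# expansion can lead back to intelligence and system entry or forward to
# subversion"); cleanup, observation of effects, and feedback analysis run
# throughout and are not modeled as separate states.
TRANSITIONS = {
    "intelligence":        {"system entry"},
    "system entry":        {"privilege expansion", "intelligence"},
    "privilege expansion": {"subversion", "intelligence", "system entry"},
    "subversion":          {"exploitation", "privilege expansion"},
    "exploitation":        {"intelligence"},   # e.g., re-targeting after success
}

def is_valid_path(path):
    """True if a sequence of phases respects the transition relation."""
    return all(nxt in TRANSITIONS[cur] for cur, nxt in zip(path, path[1:]))

# Example: an attack that loops back from privilege expansion to intelligence.
assert is_valid_path(["intelligence", "system entry",
                      "privilege expansion", "intelligence"])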

Extensive simulation has been done to understand the characteristics of successful attacks and defenses. [5] Among the major results of this study were a set of successful strategies for attacking computer systems. It is particularly interesting that these strategies are similar to classic military strategies because the simulation methods used were not designed from a strategic viewpoint, but were based solely on the mechanisms in use and the times, detection, reaction, and other characteristics associated with the mechanisms themselves. Thus the strategic information that fell out of this study was not biased by its design but rather emerged as a result of the metrics associated with different techniques. The successful attack strategies identified by this study included:

- speed - attacks completed before the defender can detect and react,
- stealth - attacks that avoid detection altogether, and
- overwhelming force - attacks that succeed even when detected and reacted to.

Slow, loud attacks tend to be detected and reacted to fairly easily. A successful attacker can use combinations of these in different parts of an attack. For example, speed can be used for a network scan, stealth for system entry, speed for privilege expansion and planting of capabilities, stealth for verifying capabilities over time, and overwhelming force for exploitation. This is a typical pattern today.
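
As a small illustration, the typical pattern just described can be written down directly as a phase-to-strategy assignment; this particular assignment simply transcribes the example in the paragraph above and is illustrative only.

# The typical pattern above, written as a phase-to-strategy assignment.
TYPICAL_ATTACK_PLAN = [
    ("network scan",           "speed"),
    ("system entry",           "stealth"),
    ("privilege expansion",    "speed"),
    ("planting capabilities",  "speed"),
    ("verifying capabilities", "stealth"),
    ("exploitation",           "overwhelming force"),
]

for phase, strategy in TYPICAL_ATTACK_PLAN:
    print(f"{phase}: {strategy}")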

Substantial red teaming and security audit experience has led to some speculations that follow the general notions of previous work on individual deception. It seems clear from experience that people who use computers in attacks:

If this turns out to be true, it has substantial implications for both attack and defense. Experiments should be undertaken to examine these assertions as well as to study the combined deception properties of small groups of people working with computers in attacking other systems. Unfortunately, current data is not adequate to thoroughly understand these issues. There may be other strategies developed by attackers, other attack processes undertaken, and other tendencies that have more influence on the process. We will not know this until extensive experimentation is done in this area.

Defender Strategies and Expectations

From the deceptive defender's perspective, there also seems to be a limited set of strategies.

As in the case with attacker strategies, few experiments have been undertaken to understand these issues in detail, but preliminary experiments appear to confirm these notions.

Planning Deceptions

Several authors have written simplistic analyses and provided rules of thumb for deception planning. The present model also suggests ways to plan deceptions, using the distinction between low, middle, and high level cognition to differentiate actions and to create our own rules of thumb. But while such notions are fine for contemplation, scientific understanding in this area requires an experimental basis.

According to [10] a 5-step process is used for military deception. (1) Situation analysis determines the current and projected enemy and friendly situation, develops target analysis, and anticipates a desired situation. (2) Deception objectives are formed by desired enemy action or non-action as it relates to the desired situation and friendly force objectives. (3) Desired [target] perceptions are developed as a means to generating enemy action or inaction based on what the enemy now perceives and would have to perceive in order to act or fail to act - as desired. (4) The information to be conveyed to or kept from the enemy is planned as a story or sequence, including the development and analysis of options. (5) A deception plan is created to convey the deception story to the enemy.
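
These five steps form a pipeline in which each product feeds the next. The sketch below models that dataflow in Python; every type and field name is hypothetical, chosen only to show how the steps depend on one another, not to suggest any fielded planning system.

from dataclasses import dataclass

# Hypothetical products of the 5-step process in [10]; all names are ours.
@dataclass
class Situation:      # (1) current/projected enemy and friendly situations
    enemy: str
    friendly: str
    desired: str

@dataclass
class Objective:      # (2) desired enemy action or non-action
    desired_enemy_action: str

@dataclass
class Perceptions:    # (3) what the enemy must perceive in order to act as desired
    required_beliefs: list

@dataclass
class Story:          # (4) information to be conveyed to or kept from the enemy
    sequence: list

@dataclass
class Plan:           # (5) how the story will be conveyed to the enemy
    story: Story
    means: list

def plan_deception(sit: Situation, available_means: list) -> Plan:
    # Each step consumes the product of the step before it.
    objective = Objective(desired_enemy_action="act toward " + sit.desired)
    perceptions = Perceptions(required_beliefs=[objective.desired_enemy_action])
    story = Story(sequence=perceptions.required_beliefs)
    return Plan(story=story, means=available_means)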

These steps are carried out by a combination of commander and command staff as an embedded part of military planning. Because of the nature of military operations, capabilities that are currently available and which have been used in training exercises and actual combat are selected for deceptions. This drives the need to create deception capabilities that are flexible enough to support the commander's needs for effective use of deceptions in a combat situation. From a standpoint of information technology deceptions, this would imply that, for example, a deceptive feint or movement of forces behind smoke screens with sonic simulations of movement should be supported by simulated information operations that would normally support such action and concealed information operations that would support the action being covered by the feint.

Deception maxims are provided to enhance planner understanding of the tools available and what is likely to work: [10]

Deception failures are typically associated with (1) detection by the target and (2) inadequate design or implementation. Many examples of this are given. [10]

As a doctrinal matter, battlefield deception involves the integration of intelligence support, integration and synchronization, and operations security. [10]

In the DoD context, it must be assumed that any enemy is well versed in DoD doctrine. This means that anything too far from normal operations will be suspected of being a deception even if it is not. This points to the need to vary normal operations, keep deceptions within the bounds of normal operations, and exploit enemy misconceptions about doctrine. Successful deceptions are planned from the perspective of the targets.

The DoD has defined a set of factors in deceptions that should be seriously considered in planning [10]. It is noteworthy that these rules are clearly applicable to situations with limited time frames and specific objectives and, as such, may not apply to situations in information protection where long-term protection or protection against nebulous threats are desired.

Deception of humans and automated systems involves interactions with their sensory capabilities. [10] For people, this includes (1) visual (e.g., dummies and decoys, camouflage, smoke, people and things, and false vs. real sightings), (2) olfactory (e.g., projection of odors associated with machines and people in their normal activities at that scale, including toilet smells, cooking smells, oil and gas smells, and so forth), (3) sonic (e.g., directed against sounding gear and the human ear, blended with real sounds from logical places and coordinated to meet the things being simulated at the right places and times), and (4) electronic (i.e., manipulative electronic deception, simulative electronic deception, and imitative electronic deception).

Resources (e.g., time, devices, personnel, equipment, materiel) are always a consideration in deceptions as are the need to hide the real and portray the false. Specific techniques include (1) feints, (2) demonstrations, (3) ruses, (4) displays, (5) simulations, (6) disguises, and (7) portrayals. [10]

A Different View of Deception Planning Based on the Model from this Study

A typical deception is carried out by the creation and invocation of a deception plan. Such a plan is normally based on some set of reasonably attainable goals and time frames, some understanding of target characteristics, and some set of resources which are made available for use. It is the deception planner's objective to attain the goals with the provided resources within the proper time frames. In defending information systems through deception our objective is to deceive human attackers and defeat the purposes of the tools these humans develop to aid them in their attacks. For this reason, a framework for human deception is vital to such an undertaking.

All deception planning starts with the objective. It may work its way back toward the creation of conditions that will achieve that objective or use that objective to 'prune' the search space of possible deception methods. While it is tempting for designers to come up with new deception technologies and turn them into capabilities, (1) without a clear understanding of the class of deceptions of interest, it will not be clear what capabilities would be desirable; and (2) without a clear understanding of the objectives of the specific deception, it will not be clear how those capabilities should be used. If human deception is the objective, we can begin the planning process with a model of human cognition and its susceptibility to deception.

The skilled deception planner will start by considering the current and desired states of mind of the deception target in an attempt to create a scenario that will either change or retain the target's state of mind by using capabilities at hand. State of mind is generally only available when (1) we can read secret communications, (2) we have insider access, or (3) we are able to derive state of mind from observable outward behavior. Understanding the limits of controllable and uncontrollable target observables and the limits of intelligence required to assure that the target is getting and properly acting (or not acting) on the information provided to them is a very hard problem.

Deception Levels

In the model depicted above and characterized by the diagram below, three levels can be differentiated for clearer understanding and grouping of available techniques. They are characterized here by mechanism, predictability, and analyzability:

Human Deception Levels

For each level, the paragraphs below describe the mechanism involved, the predictability of its effects, the basis available for analysis, and a summary of its operational characteristics.
Low-Level

Low-level deceptions operate at the lower portions of the areas labeled Observables and Actions. They are designed to cause the target of the deception to be physically unable to observe signals or to cause the target to selectively observe signals.

Low-level deceptions are highly predictable based on human physiology and known reflexes.

Low-level deceptions can be analyzed and very clearly characterized through experiments that yield numerical results in terms of parameters such as detection thresholds, response times, recovery times, edge detection thresholds, and so forth.

Except in cases where the target has sustained physiological damage, these deceptions operate very reliably and predictably. The time frames for these deceptions tend to be in the range of milliseconds to seconds and they can be repeated reliably for ongoing effect.

Mid-Level

Mid-level deceptions operate in the upper part of the areas labeled Observables and Actions and in the lower part of the areas marked Assessment and Capabilities. They are generally designed to either: (1) cause the target to invoke trained or pattern matching based responses and avoid deep thought that might induce unfavorable (to us) actions; or (2) induce the target to use high level cognitive functions, thus avoiding faster pattern matching responses.

Mid-level deceptions are usually predictable but are affected by a number of factors that are rather complex, including but not limited to socialization processes and characteristics of the society in which the person was brought up and lives.

Analysis is based on a substantial body of literature. Experiments required for acquiring this knowledge are complex and of limited reliability. There are a relatively small number of highly predictable behaviors; these behaviors are common and are invoked under predictable circumstances.

Many mid-level deceptions can be induced with reasonable certainty through known mechanisms and will produce predictable results if applied with proper cautions, skills, and feedback. Some require social background information on the subject for high surety of results. The time frame for these deceptions tends to be seconds to hours with lasting residual effects that can last for days to weeks.

High-Level

High-level deceptions operate from the upper half of the areas labeled Assessment and Capabilities to the top of the chart. They are designed to cause the subject to make a series of reasoned decisions by creating sequences of circumstances that move the individual to a desired mental state.

High-level deceptions are reasonably controllable if adequate feedback is provided, but they are far less certain to work than lower-level deceptions. The creation and alteration of expectations has been studied in detail, and it is clearly a high-skills activity in which greater skill tends to prevail.

High-level deception requires a high level of feedback when used against a skilled adversary and less feedback under mismatch conditions. There is a substantial body of supporting literature in this area but it is not adequate to lead to purely analytical methods for judging deceptions.

High-level deception is a high-skills game. A skilled and properly equipped team has a reasonable chance of carrying out such deceptions if adequate resources are applied and adequate feedback is available. These sorts of deceptions tend to operate over a time frame of hours to years and in some cases have unlimited residual effect.

Deception Guidelines

This structuring leads to general guidelines for effective human deception. In essence, they indicate the situations in which different levels of deception should be used and rules of thumb for their use; a codified sketch of these rules follows the lists below.
Low-Level

- Higher certainty can be achieved at lower levels of perception.
- Deception should be carried out at as low a level as feasible.
- If items are to be hidden and can be made invisible to the target's sensors, this is preferred.
- If a perfect simulation of a desired false situation can be created for the enemy sensors, this is preferred.
- Do not invoke unnecessary mid-level responses and pattern matching.
- Try to avoid patterns that will create dissonance or uncertainty that would lead to deeper inspection.

Mid-Level

- If a low-level deception will not work, a mid-level deception must be used.
- Time pressure and high stress combine to keep targets at mid-level cognitive activities.
- Activities within normal situational expectations tend to be handled by mid-level decision processes.
- Training tends to generate mid-level decision processes.
- Mid-level deceptions require feedback for increased assurance.
- Remain within the envelope of high-level expectations to avoid high-level analysis.
- Exceed the envelope of high-level expectations to trigger high-level analysis.

High-Level

- If the target cannot be forced to make a mid-level decision in your favor, a high-level deception must be used.
- It is easiest to reinforce existing predispositions.
- To alter predisposition, high-level deception is required.
- Movement from predisposition to new disposition should be made at a pace that does not create dissonance.
- If target confusion is desired, information should be changed at a pace that creates dissonance.
- In high-level deceptions, target expectations must be considered at all times.
- High-level deceptions require the most feedback to measure effect and adapt to changing situations.
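
As a first step toward codifying these rules of thumb, one can write a level selector that applies them in order. The sketch below is a hypothetical Python rendering; the boolean situation features are our own simplifications of the guidelines, not measured quantities.

# A hypothetical codification of the guidelines: prefer the lowest level
# that can work, and escalate only when forced.
def choose_deception_level(can_defeat_sensors: bool,
                           time_pressure_or_stress: bool,
                           within_expectations: bool,
                           must_alter_predisposition: bool) -> str:
    if must_alter_predisposition:
        return "high"   # "To alter predisposition, high-level deception is required."
    if can_defeat_sensors:
        return "low"    # "Deception should be carried out at as low a level as feasible."
    if time_pressure_or_stress or within_expectations:
        return "mid"    # stress and normal expectations keep targets at mid-level
    return "high"       # otherwise the target must be moved by reasoned decisions

# Example: sensors cannot be fully defeated, but the target is stressed and
# the activity fits situational expectations, so mid-level is indicated.
assert choose_deception_level(False, True, True, False) == "mid"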

Just as Sun Tzu created guidelines for deception, there are many modern pieces of advice that probably work pretty well in many situations. And like Sun Tzu's, these are based on experience in the form of anecdotal data. As someone once said: The plural of anecdote is statistics.

Deception Algorithms

As more and more of these sorts of rules of thumb based on experience are combined with empirical data from experiments, it is within the realm of plausibility to create more explicit algorithms for deception planning and evaluation. Here is an example of the codification of one such algorithm. It deals with the sequencing of deceptions that carry the different associated risks identified above.

Suppose you have two deceptions, A (low risk) and B (high risk). If the situation is such that the success of either means the mission is accomplished, the success of both simply raises the quality of the success (e.g., it costs less), and the discovery of either by the target will increase the risk that the other will also fail, then you should do A first to help assure success. If A succeeds, you then do B to improve the already successful result. If A fails, you either do something else or do B out of desperation. On the other hand, if the situation is such that the success of both A and B is required to accomplish the mission, and if discovery by the target early in execution will result in substantially less harm than discovery later in execution, then you should do B first so that losses are reduced if, as is more likely, B is detected. If B succeeds, you then do A. Here this is codified into a form more amenable to computer analysis and automation:

GIVEN: Deception A (low risk) and Deception B (high risk).
IF [A Succeeds] OR [B Succeeds] IMPLIES [Mission Accomplished, Good Quality/Sched/Cost]
AND [A Succeeds] AND [B Succeeds] IMPLIES [Mission Accomplished, Best Quality/Sched/Cost]
AND [A Discovered] OR [B Discovered] IMPLIES [A (higher risk) AND B (higher risk)]
THEN    DO A [comment: Do low-risk A first to help assure success.]
        IF [A Succeeds] DO B [comment: Do high-risk B second to improve the already successful result.]
                        ELSE DO B [comment: Do now-higher-risk B out of desperation.]
                        OR ELSE DO Out #n [comment: Do something else instead.]

IF [A Succeeds] AND [B Succeeds] IMPLIES [Mission Accomplished, Good Quality/Sched/Cost]
AND [A Detected] OR [B Detected] IMPLIES [Mission Fails]
AND [A Discovered Early] OR [B Discovered Early] IMPLIES [Mission Fails with limited harm]
AND [A Discovered Late] OR [B Discovered Late] IMPLIES [Mission Fails with severe harm]
THEN    DO B [comment: Do high-risk B first so that the more likely failure comes early and cheaply.]
        IF [B Succeeds (Early)] DO A (Late) [comment: Do low-risk A second for maximum chance of success.]
                IF [A Succeeds (likely)] THEN MISSION SUCCEEDS.
                ELSE [A Fails (unlikely)] THEN MISSION FAILS late - in real trouble.
        ELSE [B Fails (Early)] [comment: An early, low-cost failure.]
                DO Out #1 [comment: Execute the pre-planned retreat.]
                OR DO Out #m [comment: Execute another pre-planned contingency instead.]
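
The ordering decision embedded in the two rules above can also be rendered as a small runnable function. The parameter names below are hypothetical; A is the low-risk deception and B the high-risk one, as in the pseudocode.

# A minimal runnable rendering of the two sequencing rules above.
def first_deception(both_required: bool, early_failure_cheaper: bool) -> str:
    if both_required and early_failure_cheaper:
        # Second rule: do high-risk B first so that the more likely failure
        # comes early, when it does the least damage.
        return "B"
    # First rule: either deception alone accomplishes the mission, so do
    # low-risk A first to help assure success; B then only improves the result.
    return "A"

assert first_deception(both_required=False, early_failure_cheaper=False) == "A"
assert first_deception(both_required=True, early_failure_cheaper=True) == "B"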

We clearly have a long way to go in codifying all of the aspects of deception and deception sequencing in such a form. Just as clearly, there is a path toward rules and rule-based analysis and generation methods for building deceptions that have effect while reducing or minimizing risk, or that optimize against a wide range of parameters in many situations. The next reasonable step down this line would be the creation of a set of analytical rules that could be codified, together with experimental support for establishing the metrics associated with those rules. A game-theoretical approach might be one way to analyze these types of systems.
