Information Warfare and the Future of the Spy
Philip H.J. Davies
Department of Sociology, University of Reading
ABSTRACT
This article examines the impact of the new ICTs on the
collection of covert intelligence and covert political
actions undertaken by national intelligence agencies. It
is argued that there exist two distinct doctrines in the
literatures of intelligence and information warfare
concerning the future relative importance of information
from human sources ('agents') and technical methods
(signal interception, overhead imagery and the emerging
field of clandestine penetration of networked information
systems). The arguments in favour of human and technical
methods are examined in the context of information
warfare techniques and technologies, as are covert action
methods such as disinformation, disruptive action and
'cyber-sabotage'. Certain civil liberties implications of
ICT-based strong encryption are also examined. The
article concludes that what is required is a greater
emphasis on integrating
human and technical methods into a unified whole,
especially where human methods can provide opportunities
which can be further exploited by technical methods.
KEYWORDS
action, covert, espionage, infosec, infowar, intelligence
Is information warfare a completely new form of
conflict that exists because of the burgeoning global
information infrastructure or is it merely a new
dimension of an old form, like spying, whose origins lie
in the 'grayware' of the human brain? (RAND Research
Review Information Warfare and Cyberspace Security 1995)
There were cowboys ever since there were computers.
They built the first computers to crack German ice,
right? Codebreakers. So there was ice before computers,
you wanna look at it that way. (Gibson 1987)
INTRODUCTION
Information warfare represents one aspect of what has
come to be known as the 'revolution in military affairs'.
While this somewhat millennial view of how the scope and nature of conflict have changed since the end of the Cold War and the Second Gulf War is more generally accepted in North America than in the UK, there can be no doubt that recent information technology, from computers through to satellites, represents a new arena of threats and opportunities; and if military affairs must change to keep pace, so must virtually all other aspects of the national security infrastructure. Amongst the elements of
national government facing the burgeoning new
'infosphere', intelligence, more than almost any other, is about information. However, apart from Winn Schwartau's
offhand rhetorical arabesque that 'spies are the original
information warriors' (1996: 68), the role of secret
intelligence in this emerging strategic environment is
something which has received too little attention, and what thought has been given to the matter requires careful reappraisal.
What has emerged in the literature is two competing
views of intelligence. One view stresses the high-tech
opportunities of cybernetic penetration and disruption,
and is concerned with digital espionage and sabotage by
Gibsonesque 'console cowboys' who 'hack into' information
systems in a globally networked cyberspace. Since the
Second World War the primary sources of intelligence
'take' have been technical methods, or TECHINT,
traditionally composed mainly of signals intelligence
(SIGINT) and imagery intelligence (IMINT). To these the
information warfare enthusiast would add cybernetic
operations, what might be called, for lack of a better
term (although a worse one is hard to imagine) HACKINT.1
The other approach, however, points to the distributed,
easily concealed and pervasively low-intensity conflict
world of terrorism, proliferation and transnational
organized crime, and anticipates a new age of human
intelligence, or HUMINT. Thus intelligence policy-makers,
seeking to redesign the world's intelligence communities
for the near future and trying to allocate increasingly
scarce intelligence resources, are confronted with two
competing views of the coming decades, that the future
holds either a fast-paced, high-bandwidth TECHINT world,
or that it holds instead a dimly lit Machiavellian HUMINT
world. However, neither of these two views adequately
represents the situation. In the first place, the
technophile view which is put forward underestimates the
limitations of technical methods, while in the second
place the HUMINT view proceeds with a weak understanding
of how terrorism, proliferation and serious crime are
increasingly using the global infosphere as their
operating medium. That intelligence policy-makers must
somehow optimize their commitments to human and technical
methods may be a truism, but how that balance is to be
evaluated is far from a trivial calculation, and what
both views of the future underestimate is the
interdependency between human and technical methods.
HUMINT Vs TECHINT I: WOOLSEYAN SNAKES ON THE
INFORMATION SUPERHIGHWAY
The world of information warfare is, for the most part, a
high-bandwidth, high transmission rate universe, and
references to espionage and covert action in the
literature have, for the most part, reflected that.
Martin Libicki of the National Defence University has
proposed a sub-class of information warfare that he has
termed 'Intelligence-Based Warfare', or IBW. According to
Libicki:

[IBW] occurs when intelligence is fed directly into operations (notably, targeting and battle damage assessment) rather than used as an input for overall command and control . . . As sensors grow more acute and reliable, as they proliferate in type and number, and as they become capable of feeding fire control systems in real time and near-real-time, the task of developing, maintaining and exploiting systems that sense the battlespace, assess its composition, and send the results to shooters assumes increasing importance for tomorrow's militaries. (Libicki 1995)

However, Libicki's concept is very much confined to
military operational intelligence, to the fast-paced
high-bandwidth battlefield of the late twentieth and
imagined twenty-first centuries, in which the humble
footsoldier holds the grandiose title 'warfighter' and
Pentagon press releases assure us that high-tech enhances
not only lethality and survivability but also 'operations
tempo' (Der Derian 1994: 118). Secret intelligence is a
very different matter, in which operations are planned
and undertaken in terms of months or years rather than
minutes and seconds, and the content and sensitivity
thereof tend to confine its product to strategic rather
than tactical concerns. None the less, a great deal has
been made of the potential for cybernetic penetrations to
access and disrupt sensitive information systems, with
the dangers of 'hacker war' not to be underestimated in
an increasingly computer-dependent world. There has also
been an accumulating body of evidence that this concern
is well founded. Since Clifford Stoll published The
Cuckoo's Egg (1990) the world has become well acquainted
with the methods employed by Markus Hess, the 'Hannover
Hacker', and his colleagues on behalf of the KGB.
However, it might well be argued that the approach of
'hacking into' computer systems is a great deal more
limited than might appear in the popular media and some
of the more optimistic literature on information warfare.
Computer systems can be made secure by physical isolation ('stand-alone' systems), or by placing them behind hardware defences such as one-way gates and software defences such as 'firewalls'. Although Markus Hess's ring gained access
to, according to one estimate, 'fifty military computers
at the Pentagon, various defence contractors, the Los
Alamos Nuclear Weapons Laboratory, Argonne National
Laboratory, the Air Force Space Systems Division and
various US military bases around the world' (Madsen 1993:
418), perhaps the most distinctive feature of this effort
was the very low grade of information acquired. Stoll,
the astronomer-turned-sysop who pursued Hess down the
telephone lines to Germany, found US Federal Agencies
deeply uninterested in pursuing Hess on the grounds that
none of the systems attacked held classified or secret
information (Stoll 1990: 233 and passim). Indeed, a 1996 report by the US General Accounting Office estimated that Pentagon systems had been attacked roughly
250,000 times during 1995, with some 160,000 successful
penetrations resulting. The General Accounting Office
investigation was prompted by a series of penetrations by a British 16-year-old hacker calling himself the
'Datastream Cowboy', acting under the supervision of a
figure using the e-mail handle 'Kuji' whom US authorities
suspected of being a foreign intelligence officer. Just
as with the case of the Hannover Hacker, however, the
Pentagon has taken the reports calmly, noting that no
systems handling classified or secret information have
been compromised (Walker 1996). In the event, Kuji and
the Datastream Cowboy turned out to be far less sinister
than US defence officials expected: two British
adolescents determined to find evidence of
X-files-reminiscent conspiracies and aliens (Campbell
1997). Curious in all of this, however, is the peculiar
combination of one group of US defence information
security specialists pursuing a perceived threat to the
bitter end while another group writes such penetrations
off as non-threatening because no 'sensitive
compartmented information' (SCI) has been compromised.
Unfortunately, it is either naive or disingenuous to
suggest that non-classified information has no
intelligence value. Much of the information compromised
in the US defence computers has been logistical, and this
alone can be highly revealing about a nation's defence
capabilities and intentions.2 Operationally, cybernetic penetrations achieved by those such as Markus Hess and
the Datastream Cowboy occupy a grey zone between open
sources and covert collection. Much as train watching may
be a legitimate, if esoteric, pastime during peace, it
has also traditionally been a major field of HUMINT
collection, monitoring the movements of troops and stores
in enemy territory, during times of war (see, for
example, Landau 1934 passim or Verrier 1983: 188–9).
The acquisition of logistical intelligence becomes the
subject of covert collection as soon as it is locked in
office cabinets or placed behind user names and
passwords. Moreover, in many systems, once one is past the front-line defences, the data stored on them and the electronic correspondence traffic conducted therein are en clair.
E-mail, especially, represents a potentially rich vein of
raw information, in the same league as telephone and
postal interception. If nothing else, such
non-classified, so-called 'low-grade' intelligence can
provide an analytical background in which context
higher-grade intelligence sources may be interpreted and
assessed. Regardless of how one might debate the real
intelligence payoff of HACKINT to date, the ability to
slip past security protocols and snatch passwords used
within a system with sniffer programmes has made the
development of 'firewalls', overlaid upon a TCP/IP
architecture developed originally to promote wider access
rather than constrain it, a growth industry.
With all the discussion of the technical collection
methods arising out of the new information technology, it
comes, therefore, as something of a surprise that in
their book War and Antiwar, with its emphasis on
high-tech, computer-intensive 'brain force' forms of
conflict, Alvin and Heidi Toffler should wheel about and
conclude that the prevalent form of intelligence
gathering for the 'third wave' would be HUMINT. 'The
shift to a third wave intelligence system', they propose,
'paradoxically, means a stronger emphasis on human spies,
the only kind available in the 'first wave' world. Only
now, first wave spies come armed with sophisticated third
wave technologies' (Toffler and Toffler 1995: 186). They
do not indicate what third wave technologies will arm these HUMINT sources; one can only infer that they are referring to improved clandestine communications, since
they rule out TECHINT methods. Their reasons are that
'the best satellites can't peer into a terrorist's mind',
nor into that of Saddam Hussein. In this sense, the
Tofflers are repeating an argument for the virtues of
HUMINT which has been in play throughout the increasingly
TECHINT-intensive Cold War, which is that satellites can
show where an adversary's armies are located, but they
cannot tell you what he or she intends to do with them.
The Tofflers are also falling in line with a view of the
changing priorities of HUMINT and TECHINT which has been
propagated by intelligence policy pundits since the end
of the Cold War. Briefly put, the small number of
large-scale threats presented by the Cold War nuclear
stand-off have been replaced by a large number of
small-scale threats such as terrorism, local wars and
serious crime which, while less amenable to imagery and
signals intelligence, none the less present a very real
need for timely and reliable intelligence. As former DCI
James Woolsey has observed, the Soviet dragon may have
been slain but the global forest is filled with snakes. A
number of authors have therefore suggested that
intelligence is headed for a new era dependent upon
HUMINT to pursue the snakes of the 1990s (Adams 1994: 311–15; Boren 1994: 55–6).
The shift from TECHINT to HUMINT certainly appears
compelling if the proliferation of small-scale,
distributed intelligence threats after the Cold War is
taken out of the context of the growing global
infosphere. In February 1996, during a briefing to
British academics under Chatham House rules, a senior UK
intelligence officer was asked whether SIGINT had indeed
waned in importance as expected. Surprisingly, he
responded with a firm denial. 'If anything', he said, 'it
is even more important. More terrorists and drug barons
are using cellular phones and satellites to talk to each
other than ever before.' The lesson here appears to be
that the increasing opportunities for technical methods
presented by the rapidly expanding, world-wide
information infrastructure have more than offset the
difficulties of locating and targeting the new threats.
Perhaps IMINT may fade in importance, but in its place
hacking and communications intelligence are rapidly
expanding. During the last decade of the Cold War, the French-run human source FAREWELL provided documents to the French security service, the Direction de la Surveillance du Territoire (DST), indicating that up to 2.4 per cent of the entire Soviet intelligence effort was taken up by cybernetic penetrations alone (Madsen 1993: 419), and this in a society with a far lower per capita availability of the necessary hardware and skilled operators than the West, and during a period when cyberspace was a far more radically circumscribed place than it is now.
methods produce intelligence sui generis; human sources
cannot and should not be expected to produce on the same
quantitative scale as technical ones. Thus it seems
likely that technical methods will remain the large-scale
producers, generating the lion's share of the national
raw intelligence 'take' for the foreseeable future.
However, information technology may be said to 'giveth
with one hand, and taketh away with the other', because
the same technologies which promise so much to collection
may just as easily deny intelligence collectors that
wealth. For there is a sting in the tail of the general
availability of inexpensive, personal information
technology; and that is the general availability of
inexpensive, personal information security.
HUMINT Vs TECHINT II: OF STRONG ENCRYPTION AND
THE TWO-LEGGED SPY
It is commonly accepted that HUMINT operations all too
often depend upon leads from SIGINT. The wartime Double
Cross operation depended enormously on the ISOS breaks
against German Abwehr Enigma traffic, while the VENONA
decrypts proved valuable tools in detecting the Soviet
atom bomb spies during the opening years of the Cold War.
However, what is all too often underestimated is the
frequent dependency of TECHINT on initial breaks provided
by conventional espionage. Perhaps the most famous, and
possibly in global historical terms the most
consequential SIGINT breaks ever, were the Allied
successes against German, Italian and Japanese Enigma
machine codes during the Second World War. While most of
the decrypts produced in the Allied cryptanalytical
efforts may have relied on mechanical innovations such as
multiple, parallel Enigma engines, the so-called
'bombes', and the first generation of computers such as
Colossus, the ability to develop these techniques
initially depended on a human intelligence coup by the
French Deuxieme Bureau in the form of the clandestine
acquisition of detailed drawings of the German Army
Enigma provided by walk-in Hans-Thilo Schmidt (Stengers
1984: 127–8). In Vienna in 1948, it was the reports of a
human source in the Austrian postal and
telecommunications service that alerted the SIS Head of
Station to the fact that telephone lines used by Russian
forces in the Soviet Sector passed under the British
sector, leading to the tunnelling operations there and in
Berlin during the 1950s (Blake 1990: 8–9). Similarly,
Soviet COMINT benefited enormously in the 1970s and early
1980s from the KGB's Walker spy ring providing them with
current US Navy keys and other cryptomaterials, while a
great deal of their most important information about US
satellites came from Andrew Daulton Lee and Christopher
Boyce (Andrew and Gordievsky 1991: 437–9, 440–2). In
the same fashion, it was Soviet rocketry manuals, given to the SIS and CIA by Oleg Penkovsky of Soviet military intelligence, that helped photo-interpreters identify
Soviet missiles in Cuba as well as in the Asian
heartland. Because of this interdependency, Britain's SIS
has, since the Second World War, maintained a liaison or
'Requirements' Section representing GCHQ, the UK's SIGINT
service, at its headquarters circulating intercepts
required for SIS operations, on the one hand, while
issuing GCHQ requirements for SIS acquisition of
cryptomaterials on the other (Davies 1995). Similarly,
the Cold War KGB's foreign operations First Chief
Directorate established a separate Sixteenth Department
to task rezidentura abroad in support of Sixteenth
Directorate SIGINT operations (the Sixteenth Department
was responsible for sources like the Walker ring and
Britain's Geoffrey Prime) (Andrew and Gordievsky 1991:
440). SIGINT, indeed TECHINT in general, is not a
stand-alone collection method, and there are good reasons
to believe that HACKINT is likely to be no different from
IMINT and SIGINT.
Despite recent successes against DES encryption (Levy
1996: 105–8; Ward 1997b: 19), INFOSEC remains a very
real potential limiter in the use of COMINT and HACKINT.
The days of negligent sysops leaving factory-issue user
names and passwords in place are fast passing, while at
the same time firewall development is a fast-growing
industry. Libicki has identified the necessary operational implication of all of this off-handedly, almost by accident:

Even though many computer systems run with insufficient regard for network security, computer systems can nevertheless be made secure. They can be (not counting traitors on the inside), in ways that, say, neither a building nor a tank can be. (Libicki 1995)

Although Libicki is undoubtedly overestimating the
robustness of INFOSEC techniques, and underestimating the
volume of resources which may be directed at a target
system by a determined assailant, there can be little
doubt that the potential of technical methods against
modern information security measures such as 'hacking',
intercepting satellite uplink/downlinks and detecting
hardware emanations3 has been likewise exaggerated.
Apart from the likelihood that it is far easier to design
a computationally intractable encryption algorithm than
it is to find a way around that algorithm, there are also
very good historical reasons to argue that COMSEC and
INFOSEC techniques tend to run ahead of TECHINT, and that
technical penetrations tend to require a kick start from
less sophisticated intelligence methods. Even with the resources of a national SIGINT service behind an attack on an encrypted system, success may prove entirely elusive.
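The scale of the problem can be sketched with a back-of-envelope keysearch calculation; the key sizes and the trial rate below are illustrative assumptions, not figures drawn from the sources discussed here:

```python
# Rough sketch of computational intractability: worst-case time to
# try every possible key of a given length at a given trial rate.
# The 10^9 keys/second rate is an illustrative assumption only.

def years_to_exhaust(key_bits: int, keys_per_second: float) -> float:
    """Worst-case time, in years, to exhaust a keyspace by brute force."""
    seconds = 2 ** key_bits / keys_per_second
    return seconds / (60 * 60 * 24 * 365)

# A DES-sized 56-bit key falls within a few years to such an attacker;
# a 128-bit key does not fall within the lifetime of the universe.
print(round(years_to_exhaust(56, 1e9), 1))   # a few years
print(f"{years_to_exhaust(128, 1e9):.1e}")   # on the order of 10^22 years
```

Each added bit doubles the defender's margin at essentially no cost, which is why designing an intractable algorithm is so much cheaper than attacking one.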
Consider the fact that during the Cold War, ambitious careerist officers within the NSA considered the developing world, rather than the Soviet Bloc divisions, to be where the real action was. This was
because the Soviet machine-based ciphers were almost
unassailably strong, and it was more cost-efficient to
search the inferior cryptosystems of Soviet Bloc
satellites in the Third World for Soviet-originated
information echoed in their own traffic (Bamford 1983:
xx–xxi; Laqueur 1985: 30–1). Similarly, despite the
successes of the various Enigma breaks, certain wartime
Axis machine codes were never broken. Thus, although once
a system has been penetrated, technical methods may
produce vast quantities of raw intelligence sui generis,
the initial break is always a delicate matter, all too
often resulting from a human operation. What is said
about human sources regarding distributed threats is
indeed true: human sources very often can go where
technical ones may not, but they are most useful when
they make it possible for technical methods to follow.
Information technology makes the kind of information
security which drives Libicki's doubts about cybernetic
penetrations cheap and readily available. A mathematician
and programmer acquaintance of the author once developed
a relatively strong 8-digit prime public key encryption
system for his own uses in the mid-1980s, and promptly
made that system available in the public domain via a
bulletin board system (BBS). Just as it is joked that
there is no such thing as a penniless chemistry graduate
in Northern Ireland, there is an enormous potential
demand for skilled cryptographers in cyberspace, and not
always for legitimate national or corporate purposes.
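How little machinery such a home-grown system actually requires can be sketched with a toy RSA-style construction; the specific primes and message below are illustrative assumptions (real systems use primes hundreds of digits long), not details of the system described above:

```python
# Toy RSA-style public-key system built from two 8-digit primes,
# echoing the scale mentioned in the text. Keys this small are
# trivially breakable; the point is how few lines of integer
# arithmetic a working public-key scheme needs.

def make_keypair(p: int, q: int, e: int = 65537):
    """Derive a public/private key pair from two primes p and q."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)        # private exponent: e * d == 1 (mod phi)
    return (n, e), (n, d)      # public key, private key

def encrypt(pub, m: int) -> int:
    n, e = pub
    return pow(m, e, n)        # ciphertext = m^e mod n

def decrypt(priv, c: int) -> int:
    n, d = priv
    return pow(c, d, n)        # plaintext = c^d mod n

# 10000019 and 10000079 are the first two primes above 10^7.
pub, priv = make_keypair(10000019, 10000079)
c = encrypt(pub, 42424242)
assert decrypt(priv, c) == 42424242
```

Anyone with a compiler, a library routine for modular exponentiation and a source of large primes can field such a system, which is precisely what makes proliferation so hard to police.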
There may be various attempts to control strong encryption (it is banned in France, and legally a 'munition' in the United States), but such constraints are as likely to be effective in cyberspace as similar attempted blocks on the Internet against the Big Book of Mischief
or pornography. Indeed, one determined Scandinavian user
recently successfully acquired the full paper (rather
than the actual software) specifications of the
non-export version of the Pretty Good Privacy (PGP)
encryption software under the constitutional umbrella of
freedom of speech (Ward 1997c). Few things are easier to smuggle than information, and where export controls may be vigorously enforced, the development of proprietary strong encryption systems, like the early days of programming in general, has every possibility of developing into a fast-growing computer cottage industry in the late 1990s and the next century. What intelligence
and law enforcement policy-makers must accept is that the
proliferation of information security technology, like
that of nuclear, biological and chemical warfare
technologies, is a matter of when, not if (Scalingi
1995).
The upshot of all of this is that just as information
technology may indeed have provided a myriad of new
opportunities for SIGINT, as well as the emerging field
of HACKINT, it has also provided the stuff of
countermeasures against SIGINT and HACKINT in the form of
readily available and relatively inexpensive encryption.
The most easily accessible back-door to any secure
information network is not some quirk in the code or
buried programmer's secret entrance, but the people
operating it. Human agents can provide their own
passwords to their controllers, upload sniffer programmes
to provide their controllers with passwords used by
others, or simply access systems themselves, download the
data and then pass it on to those controllers. Human
sysops can be turned, suborned, influenced or bought
outright to provide the keys to their forbidden
cybernetic city. Coupled with technical operations such
as intercepting satellite streams, tapping the landlines
and 'hacking in' from outside with agent-acquired
legitimate protocols to guarantee entry, HUMINT in fact provides precisely the cutting edge that HACKINT and SIGINT are likely to need if they are to continue to be the bulk producers of raw intelligence in a world of 'cheap and cheery'
information security. It is precisely a fear of
widespread relatively strong cryptosystems for the
masses, criminal or otherwise, which has driven the
debate over key-escrow encryption, first in the United
States over the ill-fated Clipper Chip, and more recently
in Europe, with the UK only recently developing its own
programme under the auspices of both GCHQ and the
Department of Trade and Industry (Communications
Electronics Security Group 1997). The debate in America essentially devolved into two schools of thought, based on different articles of faith, meeting at loggerheads. On
the one hand, libertarians feared increasing encroachment
into private communications by government, while on the
other side, law enforcement agencies were facing the loss
of much of the intelligence and prosecutable evidence
that communications intercepts have always provided.4 By
defeating key-escrow legislation, its opponents have in
fact created a new and no less pernicious civil liberties problem. Denied access to encrypted communications through a law enforcement access field, or keys held by a trusted third party, law enforcement and intelligence agencies will be forced to recruit more and better-placed human sources: co-conspirators, accessories, friends, relatives, lovers and spouses. Indeed, as one Canadian national security
community official has recently observed, of all the
intelligence gathering measures available to domestic
intelligence services, the recruitment of human sources
is 'one of, if not the, most intrusive means available' (Whitaker 1996: 284–5). A number of Western democracies
have had cautionary lessons that they are not immune to
overzealous domestic surveillance. The United States, for
example, has the legacy of the Hoover years at the FBI and
COINTELPRO, while in 1921 the British Secret Service
Committee abruptly abolished Basil Thompson's civilian
Directorate of Intelligence after only two years of its
existence. Their fear was that Thompson's wholesale
informant-recruitment in all walks of life was moving
towards a 'Continental system of domestic espionage'
(Hinsley and Simkins 1991: 6). The recruitment of human
sources, moreover, typically does not require the
stringent legal controls of judicial or political
warrants which are required for communications intercepts
or the release of coding keys held in escrow. Thus, it is
far from clear whether in defeating Clipper in America
civil libertarians have, in the long run, struck a blow
for or against the civil liberties they strive
to protect.
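The principle behind the key-escrow schemes at issue here can be illustrated with a toy split-key sketch; the two-agency XOR split below is a hypothetical simplification of the idea, not the mechanism of Clipper or the UK programme, whose actual designs (such as the Clipper Chip's law enforcement access field) were considerably more involved:

```python
# Toy illustration of split key escrow: a session key is divided
# between two escrow agents by XOR secret-sharing, so that neither
# agent alone learns anything about the key, but lawful access with
# both shares recovers it exactly. A conceptual sketch only.
import secrets

def split_key(key: bytes):
    """Split a key into two shares; each share alone is pure noise."""
    share1 = secrets.token_bytes(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def recover_key(share1: bytes, share2: bytes) -> bytes:
    """XOR the two escrowed shares back together to recover the key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

session_key = secrets.token_bytes(16)
s1, s2 = split_key(session_key)      # lodged with two separate agencies
assert recover_key(s1, s2) == session_key
```

The design choice that mattered politically is visible even at this scale: the citizen's privacy depends entirely on the two escrow agents refusing to combine their shares without a warrant.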
HUMINT Vs TECHINT III: SPECIAL OPERATIONS AND
(DIS)INFORMATION WARFARE
Special operations, or covert action, cover an extremely
wide range of tasks. The major covert action functions
include: sabotage, in which facilities are destroyed or
incapacitated; disinformation, in which false or
misleading information is disseminated with an eye to
deceiving a target; influence operations, in which
propaganda may be disseminated (deceptive or not) or
clandestine support may be provided to individuals or
groups in a political arena with the intention of skewing
the results of a decision or an election in a particular
direction; disruptive action, in which 'you set people
very discreetly against one another'; and special
political actions on the scale of engineered coups in
which combinations of influence, deception and
intelligence are used to overthrow one government and
replace it with another. All of these tasks rely on good,
comprehensive intelligence from all possible sources,
overt as well as covert, to be executed effectively.
The potential of information warfare for special operations has probably been the single greatest concern in the literature on the subject. Much has been
written and said about the threats from logic bombs and
computer viruses which disrupt, corrupt or even
physically disable information systems (Arquilla 1998;
Arquilla and Ronfeld 1996; Devost, Houghton and Pollard
1997; Haeni and Hoffman 1996; Johnson 1997; Libicki 1995;
Molander, Riddile and Wilson 1996; Schwartau 1996).7 A
great deal has been made of the potential of so-called
'cyberwar', but there is very little evidence that
viruses can live up to the hype wrapped around them. To
be sure, there have been embarrassing incidents in which particularly destructive viruses have been distributed accidentally with brand-new 'shrink-wrapped' commercial software, as well as the expensive consequences of the so-called 'internet worm'. In Israel in 1988 a virus was discovered
designed to prompt infected programmes to erase all of
their files on May 13 of that year, that is, the fortieth
anniversary of the 'demise of Palestine' (Kupperman 1991:
92). As pointed out by Kupperman, however, this latter
device was little more than a 'weapon of political
protest', and the potential impact of individual viruses appears to amount to little more than the cybernetic equivalent of the Resistance blowing up trains.
The analogy between destructive information technologies
and wartime sabotage goes considerably further than just
individual actions behind enemy lines. During the Second
World War, the sabotage measures undertaken by the
European Resistance movements in conjunction with the
Allied secret services, in particular the British Special
Operations Executive (SOE) and the American Office of
Strategic Services (OSS) were closely coordinated to act
in support of overt, strategic actions. This was most
strikingly so in the case of French groups mobilized in
conjunction with the D-Day landings in June 1944,
attacking German lines of communication and logistical
support (see, for example, Stafford 1983: 128).
Likewise, it would seem most effective to combine
cyber-sabotage with other political action methods at a
theatre or campaign level. There is a general sentiment
that this has yet to happen, and much of the literature
deals in 'scenarios' and 'simulations'. However, there is
good reason to believe that such an information campaign
has already occurred in the context of a larger
disinformation offensive during the Cold War.
One fact which the literature on the USSR's
disinformation and disruptive actions has not taken into
account is that by the end of the Cold War the Soviet Bloc was the world's largest single source of
computer viruses. Of the viruses of known origin (371 types, not including variants; a further 344 were of unknown origin), 124 originated from the Soviet Bloc (USSR/Russia and
Eastern Europe). That is, of the viruses of known origin
in circulation, 33.4 per cent of them originated from the
Soviet Union and Eastern Europe. Within the Soviet Bloc,
the USSR was the largest single source, followed closely
by Bulgaria and then Poland.8 This is despite the fact
that the USSR had a considerably lower per capita
availability of computer facilities than Bulgaria, let
alone the Western states (Bontchev 1995). Although there
is no conclusive evidence tying the KGB or GRU to the
computer-virus explosion at the end of the Cold War,
these figures are none the less highly suggestive. Even
if the heavy percentage of Soviet Bloc-originated viruses
does indeed represent a coordinated effort rather than
being, as Vesselin Bontchev has suggested of the
Bulgarian viruses, simply the result of a small number of
mischievous or vindictive individual programmers
(Bontchev 1995), the fact remains that the presence and
wide distribution of viruses throughout the second half
of the 1980s amounted to little more than a background
noise of nuisance value. The disruptive possibilities of
cyber-sabotage may be a genuine cause for some concern,
but like the efforts of SOE and the OSS and the various
European Resistance movements in 1944, they are unlikely
to be an unambiguous war winner, and like the viruses and
bacterial munitions of biological warfare there is a very
real risk of blow-back. A further potential of modern
information technology for deceptive, disruptive and
political actions lies in the internet as a means of
disseminating information, or rather disinformation. As
noted above, much of the data and communications traffic
travelling behind the world's firewalls and passwords is
en clair, and politically useful information is not
always classified top secret under a codeword. That background hum of
electronic correspondence can all too easily reveal the
jealousies and coalitions that drive the organizational
politics of a target group or community and can be at
least as useful in acting against them as the details of
their latest and most secret operational plans. As former
SIS officer Baroness Daphne Park observed in 1993 about
disruptive action: 'Once you get really good inside
intelligence about any group you are able to learn where
the levers of power are, and what one man fears of
another . . . you set people discreetly against one
another . . . They destroy each other, we don't destroy
them.' (BBC 1993)
This is done, she suggests, by circulating deceptive
information about members or groups within the target
community in a way that exploits and exacerbates
pre-existing divisions. The same strategy was pursued by
the Soviet Union in its 'active measures' disinformation
and propaganda campaign during the Cold War by making US
and NATO strategy appear threatening to the lives and
welfare of Western citizens. This ranged from using
various shades of propaganda to suggest that policies
such as the Strategic Defence Initiative ('Star Wars')
were a direct threat to the survival of Western civilians
because they increased the risks of a nuclear conflict
(Heather 1987), to fraudulent claims and evidence that US
biowarfare research had led to the creation and release
of the Human Immunodeficiency Virus (Andrew and
Gordievsky 1991: 529-30). The rich stream of e-mail and newsgroup
traffic within organizations and communities is an ideal
medium from which to glean those divisions and
hostilities which disruptive action might exploit. The
next problem is to generate plausible deceptive materials
and then to disseminate them, and in both cases the
'information superhighway' provides a highly promising
medium.
There are two main problems to be overcome in
circulating disinformation, either as part of a specific
disruptive action or as part of a pervasive 'active
measures' style campaign: first, getting the
disinformation to the people you want to influence, and
second, ensuring that the information's credibility will
be sufficient to convince them. The internet has vast
potential for addressing the first of these problems, but
presents real difficulties in terms of the second. To be sure, it is a
communications system with global coverage, and the costs
of distributing information over the internet are
negligible relative to the breadth of resulting
distribution, especially when compared to alternative
means. On top of this, digital information production and
storage also open up previously unmatched opportunities
for the falsification of information. As has already been
noted in the RAND report: 'Political action groups and
other non-governmental organisations can utilize the
internet to galvanize political support . . . Further, the
possibility arises that the very "facts" can be
manipulated by multimedia techniques and widely
disseminated' (Schoben 1995). There have been recent and
telling examples of this kind of action, of which perhaps
the most striking were the Malaysian riots that never
happened. In this case, a handful of individuals
circulated false reports of riots by Indonesian guest
workers in the Chow Kit district of Kuala Lumpur by
e-mail, reports which circulated faster than they could
be countered (New Straits Times 1998), and led to a very
real heightening of tensions in a country already shaken
by the shooting of illegal Acehnese immigrants a few
months before.
The opportunities offered by the internet are, however,
also its weaknesses. The ease with which any group or
interest, however ill-informed or extremist, can
disseminate material, and the ease of multimedia
misrepresentation and digital forgery, can potentially
make information acquired on the internet a debased
currency. Even in the scientific community, the
reliability of scientific data available on the internet
has come into doubt, with one survey of internet-using
research professionals noting that such information was
often flawed because of the following:
Units are frequently omitted;
Transcription errors are often encountered;
This leads to a need to find redundant data;
Very few sources have quality assurance statements;
Few of the Web data sites give the source of the data;
and
If they do, data are likely to be copied from outdated
sources.
(Wiggins 1996)
Given the limited credibility of materials available on
the internet, disinformation distributed by the World
Wide Web or by e-mail would have to be supported by firm
non-internet sources to achieve and maintain the
kind of credibility that an effective deceptive action
requires. Thus the use of cyberspace as a medium of
disinformation would be most profitably exploited in
combination with other, more conventional human and
technical methods.
CONCLUSION: HUMINT, TECHINT AND THE FUTURE OF
INTELLIGENCE
It is, therefore, apparent that while technical methods,
in particular SIGINT and HACKINT, are likely to be the
large-scale providers of national intelligence 'take' for
the foreseeable future, this will have to be in the
context of a far closer integration with HUMINT. The twin
trends of proliferating, small-scale but, as Bruce
Hoffman has argued, disproportionately destructive
intelligence targets, e.g. terrorism, proliferation and
transnational crime (Hoffman 1996), and an exponentially
expanding global 'infosphere' (information superhighway,
cyberspace, call it what you will) mean that neither a
simple, straight-line shift from TECHINT back to HUMINT
nor a continuation of TECHINT's Cold War prevalence
offers a plausible picture of intelligence gathering in
the next century. As argued above, HUMINT's capacity to work
around the 'cheap and cheery' INFOSEC systems
increasingly available to terrorists, illegal arms
dealers and drug barons can provide precisely the window
of opportunity for communications intercepts and
cybernetic penetrations to continue to act as the main
bulk, raw intelligence producers. However, no matter how
much one falls back on HUMINT, analysts and
policy-makers have always preferred, and will always
prefer, information from technical sources (Laqueur 1985: 31).
Nevertheless, human sources are likely to prove a vital
point of entry past the 'cheap and cheery' information
security of the 1990s, a HUMINT vanguard finding a path
that the more powerful technical sources can follow. What
must be kept in mind is that HUMINT operations have very
real and often underestimated civil liberties
implications in the context of domestic criminal and
counter-intelligence operations. No one class of source
can be a panacea for intelligence needs in the coming
decades. As a result, the issue in allocating
intelligence resources in the 1990s and the 2000s is not
an either/or of technical versus human methods (indeed,
it has never really been so) but how to use them in
conjunction, with a greater degree of unity, common
targeting and coordination, than has previously been the
case.
Philip H.J. Davies
Department of Sociology
University of Reading
Faculty of Letters and Social Sciences
Whiteknights
Reading
RG6 2AA
UK
lwrdaphi@reading.ac.uk
NOTES
1 As a general rule in British literature,
the assorted '-ints' are considered
something of an American affectation, although their
convenience (but not, perhaps, their
clarity) has led to a more general usage in the field.
The term TECHINT derives from technical intelligence, and
refers to any source of information derived from
technical or mechanical means, which virtually amounts
to anything not actually constituting a living human
source. As noted above, TECHINT can very loosely be
divided into SIGINT, or signals intelligence, and IMINT
or imagery intelligence. Within IMINT lie PHOTINT, or
photographic intelligence and 'multi-spectral scanning'
(MSS) which employs infrared and
microwave imagery as well as visual wavelengths. SIGINT
in turn sub-divides into COMINT,
or communications intelligence (e.g. telecommunications
intercepts), and ELINT or electronic intelligence which
studies non-communicative emissions such as aircraft
radars or missile telemetry. In the following discussion
I have, for lack of a better alternative, employed the
term HACKINT for intelligence gathered from clandestine,
network-based computer access as compared with what is
sometimes called 'Network-Based Open-Systems'
intelligence or NOSINT, as popularized by figures like
Robert Steele of Open Source Solutions, which relies on
overt access to Internet information from open sources
(some limitations of which will be addressed below in the
discussion of disinformation operations). For good,
standard discussions of the various 'int' categories see,
for example, Michael Herman (1996), Avram Shulsky (1991),
Jeffrey T. Richelson (1989) and Walter Laqueur (1985).
2 A number of commentators have noted this, including
Clifford Stoll (1990) and journalists such as Walker (1996).
Acknowledgements also to James Norminton who made this
point regarding an earlier version of this paper.
3 Hardware emanations remain one of the least discussed
aspects of information
security, while at the same time representing one of the
most substantial single points of
vulnerability in the current information infrastructure.
Although the vulnerability of cathode
ray tube (CRT) emissions is reasonably well known (hence
the fact that one's password is not
echoed verbatim on a computer screen as one's user
name is), central
processors also generate very considerable electromagnetic
fields which can be intercepted and interpreted. NATO
standards for electromagnetic insulation of information
systems are referred to as TEMPESTing, but effective
TEMPESTing is expensive and most organizations do not
bother with this feature, even if they spend a great deal
on encryption and firewall services. A brief discussion
of this risk appears in Schwartau (1996: 221-31), and an
embarrassing account of ineffective TEMPESTing on
Wang desktop computers recommended to the
SIS by the CIA appears in Urban (1996: 256).
4 By comparison, debate in Britain over the TTP Key
Escrow proposals issued
jointly by GCHQ's Communications Electronics Security
Group and the Department of Trade
and Industry has tended to focus not on civil liberties
but on the feasibility and potential
inefficiencies of the plan. See, for example, the CESG
homepage (CESG 1997), and the DTI's
'Licensing of Trusted Third Parties for the Provision of
Encryption Services' (DTI 1997). A
survey of the issue has also appeared in Ward (1997a).
5 In Britain's 1985 Interception of Communications Act,
the 1989 Security
Service Act and the 1994 Intelligence Services Act,
communications intercepts all require
warrants 'signed by the Secretary of State', which in
British political parlance means
the Cabinet Minister whose jurisdiction covers the
location of the intercept, e.g. the
Home Secretary within 'the British Islands' and the
Foreign Secretary abroad. British
warranting procedures were subject to some criticism in
the 1995 Annual Report of the
Security Commissioner (1995), in which it was noted that
while the Intelligence and Security
Services are subject to the legal strictures of the 1985
IOCA, 1989 SSA and the 1994 ISA,
police communication intercepts are only covered by the
non-statutory 1984 Guidelines. Reg
Whitaker also notes that federal Canadian reviews of
intelligence have proven consistently
resistant to extending the practice of judicial warrants
from communications intercepts to
human source recruitment (1996: 286). Human source
selection in Canada is handled by an
interdepartmental Targeting Approval and Review
Committee (TARC), while all MI5
operations, human or otherwise, require clearance by the
Security Service Legal Adviser, and
SIS operations need clearance from the SIS Foreign Office
Adviser.
6 By way of complication, both sides of
the Atlantic tend to use
different jargon, as well as making fine distinctions
within the field. Most critically, there is a
clear distinction between what the SIS calls special
political actions and the CIA terms covert
action, both of which are strictly political, and special
operations (a term employed by both) to denote actions
which involve a paramilitary element. In the CIA, this
involved two distinct Divisions within the Directorate of
Operations, Covert Action and International Activities
(formerly Special Operations) (Richelson 1989: 16-17).
The equivalent sections of the SIS were defunct by the
mid-1970s, with such actions being handled at a
geographical rather than central level, and in liaison
with either the Foreign and Commonwealth Office or the
Ministry of Defence (private information).
7 There is a particular fascination throughout this
literature with the issue of
terrorism, or of smaller powers clandestinely disrupting
the US military machine and 'critical
national infrastructure'. Arquilla's recent piece in
Wired (1998) deals with the danger of what
are known in intelligence circles as 'false-flag'
operations by extremist groups, with Devost et
al. (1997) being explicitly and centrally concerned with
terrorism. By comparison, Schwartau's
(1996) often alarmist style depends heavily on the
idiosyncratic and vaguely melodramatic
notion of the 'information warrior', a non-specific
creature composed in various parts of
terrorist, vandal and opportunist. Although Schwartau
provides an informative blow-by-blow
description of exactly how information security can be
compromised, often based on
his own practical experience, for the most part much of
the literature depends heavily on
disturbing, often worst-case 'scenarios' and simulation
exercises of the RAND variety.
L. Scott Johnson's article is of considerable interest
since it appeared in the semi-annual,
unclassified version of the CIA's in-house intelligence
studies journal Studies in Intelligence.
However, Johnson's 1997 discussion is really yet another
threat-assessment of information
warfare, rather than any analysis of the role of
information warfare and ICTs in intelligence
policy and infrastructure. Much the same sentiment is
developed in DCI John Deutch's
briefing to the US Senate, excerpted in Schwartau (1996:
458-9).
8 These data are drawn from the F-Prot virus descriptions
database at
http://www.datafellows.fi/ in Iceland, consulted in August 1995. As noted, almost
half of the total number of viruses in circulation are of
unknown origin. It should also be noted that a small
number of the viruses in circulation originated after the
Cold War, such as 3APA3A, which was found on a university
system in Moscow in 1994, and that
the 1995 version of the F-Prot database distinguished
between post-Cold War Federal Russia and the USSR.
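The virus-origin figures cited in this note and in the main text are internally consistent, as a few lines of arithmetic confirm (a sketch only; the counts are those reported above from the August 1995 F-Prot database):

```python
# Virus-origin counts as reported in the text from the
# August 1995 F-Prot descriptions database.
known_origin = 371    # virus types of known origin (excluding variants)
unknown_origin = 344  # virus types of unknown origin
soviet_bloc = 124     # known-origin types from the USSR/Russia and Eastern Europe

total = known_origin + unknown_origin
bloc_share = soviet_bloc / known_origin
unknown_share = unknown_origin / total

print(f"Soviet Bloc share of known-origin viruses: {bloc_share:.1%}")  # 33.4%
print(f"Unknown-origin share of all viruses: {unknown_share:.1%}")     # 48.1%, 'almost half'
```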
9 This is not necessarily a universal risk. China, for
example, has concentrated on
developing its information infrastructure on the basis of
a highly isolated 'intranet'
programme, rather than the open access adopted by most of
the rest of the world, providing
it with a potential national bunker in cyberspace from
which to conduct information warfare on its neighbours
or an inconvenient West with near impunity (Davies 1998).
REFERENCES
Adams, J. (1994) The New Spies: Exploring the Frontiers
of Espionage, London: Hutchinson.
Andrew, C. and Gordievsky, O. (1991) KGB: The Inside
Story of Its Foreign Operations, London: Hodder &
Stoughton.
Arquilla, J. (1998) 'The Great Cyberwar of 2002', Wired
6.02 February.
Arquilla, J. and Ronfeldt, D. (1996) Cyberwar is Coming,
available http://stl.nps.navy.mil/~jmorale/cyberwar.html
(15 January 1996).
Bamford, J. (1983) The Puzzle Palace, London: Sidgwick
& Jackson.
BBC (1993) 'On Her Majesty's Secret Service', Panorama
BBC Television, 22 November.
Blake, G. (1990) No Other Choice, London: Jonathan Cape.
Bontchev, V. (1995) The Bulgarian Virus Factories.
Available:
http://www.einet.net/galaxy/Engineering-and-Technology/Computer-Technology/Security/david-hull/bulgfact.html (20
December 1995).
Boren, D.L. (1994) 'The Intelligence Community: How
Crucial?', Foreign Affairs.
Campbell, D. (1997) 'More Naked Gun than Top Gun', The
Guardian,
26 November.
Communications Electronics Security Group, homepage
available:
http://www.cesg.gov.uk
(5 August 1997).
Davies, P.H.J. (1995) 'Organisational Politics and the
Development of Britain's Intelligence Producer/Consumer
Interface', Intelligence and National Security 10(4)
(October).
Davies, P.H.J. (1998) 'Infowar, Infosec and Asian
Security', Asian Defence and Diplomacy 4(9) September.
Department of Trade and Industry (1997) 'Licensing of
Trusted Third Parties for the Provision of Encryption
Services'. Available http://dtiinfo1.dti.gov.uk/pubs/
(5 August).
Der Derian, J. (1994) 'Cyber-Deterrence', Wired
September.
Devost, M., Houghton, B. and Pollard, N. (1997)
'Information Terrorism: Political Violence in an
Information Age', Terrorism and Political Violence 9(1)
(Spring).
Gibson, W. (1987) Count Zero, New York: Ace.
Haeni, R.E. and Hoffman, L.J. (1996) An Introduction to
Information Warfare, available
http://www.seas.gwu.edu/student/reto/infowar/info-war.html
(15 January).
Heather, R.W. (1987) SDI and Soviet Active Measures:
Mackenzie Paper No 14, Toronto: Mackenzie Institute.
Hinsley F.H. and Simkins C.A.G. (1991) British
Intelligence in the Second World War Vol IV:
Counter-Espionage and Security, London: HMSO.
Hoffman, B. (1996) 'Intelligence and Terrorism: Emerging
Threats and New Security Challenges in the Post-Cold War
Era', Intelligence and National Security 11(2) (April).
Johnson, L.S. (1997) 'Toward a Functional Model of
Information Warfare', Studies in Intelligence no. 1 1997
(unclassified edition).
Kupperman, R. (1991) 'Emerging Techno-Terrorism', in J.
Marks and I. Belaiev (eds) Common Grounds on Terrorism,
New York: W.W. Norton.
Landau, H. (1934) All's Fair: The Story of British Secret
Service behind German Lines, New York: Putnam.
Laqueur, W. (1985) A World of Secrets: The Uses and
Limits of Intelligence, New York: Basic Books.
Levy, S. (1996) 'Wisecrackers', Wired 2.03 March.
Libicki, M. (1995) What Is Information Warfare?,
Washington, DC: US Government Printing Office, also
available http://www.ndu.edu/inss/actpubs/
(19 December).
Madsen, W. (1993) 'Intelligence Threats to Computer
Security', International Journal of Intelligence and
Counterintelligence 6(4) (Winter).
Molander, R., Riddile, A. and Wilson, P. (1996) Strategic
Information Warfare: A New Face of War, Santa Monica:
RAND.
New Straits Times (1998) 'Third Suspect Held for
Spreading Email Rumours', New Straits Times (14 August).
RAND Research Review Information Warfare and Cyberspace
Security (1995), available http://rand.org/RRR/RRR.fall95.cyber
(19 December).
Richelson, J.T. (1989) The US Intelligence Community, 2nd
edition, New York: Ballinger.
Scallingi, P. (1995) 'Proliferation and Arms Control',
Intelligence and National Security 10(4) (October).
Schoben, A. (ed.) (1995) 'Information Warfare: A
Two-Edged Sword', RAND Research Review: Information
Warfare and Cyberspace Security, available: http://rand.org/RRR/RRR.fall95.cyber
(19 December).
Schwartau, W. (1996) Information Warfare, 2nd edn, New
York: Thunder's Mouth.
Security Service Commissioner (1995) 1995 Annual Report
of the Security Commissioner, CMD 2827, London: HMSO.
Shulsky, A. (1991) Silent Warfare: Understanding the
World of Intelligence, London: Brassey's.
Stafford, D. (1983) Britain and European Resistance
1940-1945, Toronto: University of Toronto Press.
Stengers, J. (1984) 'The French, the British, the Poles
and Enigma', in C. Andrew and D. Dilks (eds) The Missing
Dimension: Governments and Intelligence Communities in
the Twentieth Century, Chicago: University of Chicago
Press.
Stoll, C. (1990) The Cuckoo's Egg, London: Pan.
Toffler, A. and Toffler, H. (1995) War and Antiwar, New
York: Warner.
Verrier, A. (1983) Through the Looking Glass, London:
Jonathan Cape.
Walker, M. (1996) 'Datastream Cowboy Fixes Pentagon in
his Sights', The Guardian, 24 May.
Ward, M. (1997a) 'Coded Message Plan "Too Complex"',
New Scientist, 26 April.
Ward, M. (1997b) 'Net Surfers Set Cracking Pace', New
Scientist, 28 June.
Ward, M. (1997c) 'The Secret's Out', New Scientist, 6
September.
Whitaker, R. (1996) 'The "Bristow Affair": A Crisis of
Accountability in Canadian Security Intelligence',
Intelligence and National Security 11(2) April.
Wiggins, G. (1996) 'Data Needs of Academic Research on
the Internet', available: http://www.indiana.edu/~cheminfo/gw/nist_csanewsl.html
(24 January 1997).