September 1999

Reliable and Trustworthy: The Challenge of Cyber-Infrastructure Protection at the Edge of the Millennium[1]

"The strongest lesson from recent history is how hard it is to get trustworthiness policy right."

Marjory S. Blumenthal


Marjory S. Blumenthal manages the 20-member Computer Science and Telecommunications Board of the National Research Council and its many expert project committees and staff. She designs, develops, directs, and oversees collaborative study projects, workshops, and symposia on technical, strategic, and policy issues in computing and telecommunications. With work addressing both the development and use of information technology, Internet policy and cybersecurity have been among her professional emphases since the late 1980s.

Critical infrastructure is a test of whether a comprehensive strategy for trustworthiness is possible. It pits the obvious virtues of coordination against the immaturity of technology, business, and understanding of the public interest that must be part of any solutions -- the pieces in the ever-turning kaleidoscope of the Internet. Premature judgment is easy, given the long history of private sector frustration with policy relating to at least some aspects of trustworthiness, and the arcane nature of the topic does not invite study. But broader public debate over critical infrastructure, if achieved, could strengthen the foundations for the growing body of Internet-related public policy. Moreover, it marks the coming of age of information infrastructure. Whether people can depend on networked information systems and whether some people will misuse them in ways that harm others have become issues driving research, businesses, and policy explorations. Effective policy requires, though, consensus on several thorny issues where many interests must be balanced in a coherent view of the public interest. Despite many directives and organizational steps, we find ourselves with relatively little substantive progress. And that should not be surprising.

Trustworthiness and Critical Infrastructure

"Trustworthiness" embraces security, reliability, safety, and privacy, each generating sets of concepts and controversies.[2] Making information (and communications) systems more trustworthy implies increasing the likelihood that they will do what they are supposed to do and not what they are not supposed to do. Simplifying somewhat, more technical progress has occurred in reliability (and safety) than in security and privacy. It is comparatively easy to assess problems in reliability -- computer system crashes and telephone system outages are detected easily, for example, and motivate complaints -- and to address the underlying system and environmental factors that contributed to the problem. But security and privacy problems can be hard to recognize, let alone prevent. Both accidents and intentional actions compromise systems, and intent -- a key factor behind security and privacy breaches -- is hard to predict and control. Hence the elements of trustworthiness are not absolutes: they are intangible, they vary by degree, and they are subjective objects of assessment.

Promoting trustworthiness involves technology, people, and policy. The difficulty of making progress can be seen in complaints by experts about the amount and kind of private action taken to increase privacy and, in particular, security by either vendors or users of information technology. Previously identified problems persist, only to be exploited again and again, and known fixes are not deployed. These circumstances recur in the context of the Internet and commercial computing systems, and they are multiplying, thanks to increasing use of -- and dependence on -- information infrastructure. We now know that the experts were wrong to warn that the global information infrastructure could not take off without adequate protections -- obviously, it has. But the lack of endorsement of their concern does not diminish the reality of the underlying problems: The trustworthiness of information systems -- from virus attacks and popular software security flaws to both recreational and politically motivated hacking of Web sites by domestic and foreign parties -- is the stuff of daily news. And what gets reported is a fraction of what is happening. Assuming that experts will continue to recommend more or less ideal solutions, and to favor prophylaxis over coping, it is important to understand what combinations of technology, behavior (people), and policy might work. The current efforts surrounding critical infrastructure, although not designed as such, provide a laboratory for exploring just that.

The critical infrastructure initiative of the late 1990s blends information security and national security/emergency preparedness themes, among others.[3] In some ways, it is a response to the initiatives that promote information infrastructure: it is a call to caution, balancing the benefits of global reach with domestic if not international protection. Critical infrastructure represents a new attempt to frame the trustworthiness problem and integrate national security and commercial and consumer concerns. Its name reflects a long search for semantics that facilitate communication between policy-makers and the public. "Information warfare," with its offense and defense modifiers, sounds less neutral, and even growing discussion of industrial espionage in the business press does not expand its appeal. "Assurance" has become the key term in critical infrastructure protection, both because it can mean many things and because it sounds as reassuring as "insurance."

Critical infrastructure was launched publicly in July 1996 via Executive Order 13010 establishing the President's Commission on Critical Infrastructure Protection (PCCIP) to assess and respond to vulnerabilities and threats across a set of more or less interdependent infrastructures: telecommunications and information services, electric power, oil and gas distribution, transportation, water, banking and finance, emergency services, and government services. Although intended to broaden the circle of decision-makers, the commission's structure drew heavily on information security, national security, and law enforcement players as members, staff, and participants in its steering committee of senior federal officials. It had difficulty engaging senior industry executives as members, but an industrial advisory committee and public outreach efforts elicited private sector inputs.

The commission produced an October 1997 report, Critical Foundations,[4] which found that while today there is "no smoking keyboard," the time is right for public and private action because of falling cost and increasing availability of "attack capability" (i.e., skills and tools). Critical Foundations emphasized the lack of good information on various aspects of trustworthiness and the private sector's role in collecting and sharing information about risks -- both public and private sectors depend on such information from the private sector about vulnerabilities and their exploitation. The report called for public-private partnership. It also pointed to the role of the legal system in providing a framework for action affecting both sectors. Reinforcement was provided by a contemporaneous White House document emphasizing reliability entitled Cybernation.[5]

The key outgrowth of PCCIP was the May 1998 Presidential Decision Directive (PDD) 63.[6] Setting goals that cut across public and private sectors and levels of government, it established an elaborate new bureaucracy, building on existing entities. The Department of Commerce, leveraging the policy responsibilities of its National Telecommunications and Information Administration (active in information infrastructure and electronic commerce initiatives), has the lead role for the information infrastructure generally, and the Department of Defense, via the National Communications System and the National Security Telecommunications Advisory Committee, for national security/emergency preparedness aspects. The National Security Agency was directed to continue to assess risks to federal systems; the Departments of Commerce and Defense and the General Services Administration to help federal agencies implement best practices; and the Office of Management and Budget to consider aspects relating to the Government Performance and Results Act. A novel feature is the parallel complex of private sector participants and private-sector-driven Information Sharing and Analysis Centers. The notion of designating single entities to represent large and/or diverse industry and public interest segments raises many questions, beginning with whether it is as feasible to establish "focal points" in the private sector as it may be within the federal government.

Several actions have followed PDD 63. A new Critical Infrastructure Assurance Office (CIAO) was established to spearhead planning.[7] In January 1999, the president announced initiatives fostering "critical infrastructure applied research" (in recognition of current technological inadequacy), broader implementation of intrusion detection systems among federal agencies, launch of information sharing and analysis centers (to promote computer security "best practices" and vulnerability-information sharing), and the establishment of a "cyber corps" (to stimulate supply of people with relevant expertise and available to the federal government).[8] In July 1999, another executive order established a National Infrastructure Assurance Council composed of private sector leaders of critical infrastructure sectors, supported by the National Security Council, and aimed at advising federal agencies. The expanding set of plans and programs appears to weave a growing web of technology, people, and policy.

In principle, comprehensive and powerful action is enabled by the recent executive orders. PCCIP and CIAO, in particular, link national security, law enforcement, government requirements, and economic security. In practice, all that is evident are plans and structures. And leadership of the critical infrastructure protection initiative, in fact, seems less settled than it may look. CIAO, at the hub of the new PDD 63 structure, responds to the PCCIP recommendation for a national focal point to provide public awareness and meet specific federal government needs, but maintains a low profile. The Federal Bureau of Investigation (FBI) has expanded its warning organization to a full-scale National Infrastructure Protection Center;[9] the Department of Defense has set up an organization under the Assistant Secretary for Command, Control, Communications, and Intelligence responsible for critical infrastructure protection;[10] the Department of Justice has been assessing a growing set of related legal issues; and National Security Council officials have promoted critical infrastructure protection vocally.

Together, the constellation of government organizations and their approach emphasize information security, albeit melded with reliability concerns, and they couple national security with law enforcement. That linkage concerns civil liberties advocates, who look beyond the critical infrastructure protection benefits of collecting certain information to compromises in privacy of personal information and access to government information arising from proposed conditions of that collection.[11]

Digging Through the Swamp

U.S. public policy for trustworthiness is colored -- some would say tainted -- by policy history relating to information security. There are two reasons: Security contributes to other elements of trustworthiness -- security mechanisms can protect privacy, safety, and reliability, for example -- and national security is such a dominant policy force. Controversy over cryptography policy is emblematic, because, put simply, that policy has treated government's security concerns, notably national security, as having higher priority than competing private sector interests relating to, say, privacy and competitiveness.[12] Trustworthiness also has roots in specialized telecommunications policy contexts -- notably National Security/Emergency Preparedness -- as well as in such requirements as emergency 911 location-tracking for mobile telephone users,[13] where the emphasis has been on reliability and safety, although here, too, compromises to privacy arouse concern. Newer efforts relate to the Internet and electronic commerce (as discussed below), and they are broader in terms of both trustworthiness coverage and scope of policy generally. (For example, intellectual property is an issue in e-commerce, raising issues that go well beyond the security and reliability aspects of related protections and into questions of protected speech, individual privacy, and ownership of personal information.)

Broadening use of information systems has stimulated law enforcement involvement in information security policy generally and cryptography policy specifically during the 1990s. The Clipper Chip initiative of 1993, the Communications Assistance for Law Enforcement Act (CALEA) of 1994,[14] which facilitates wiretapping, and legislative efforts in the late 1990s relating to government access to encryption keys have showcased law enforcement concerns for access to digitally stored and communicated information. They also have raised questions about common cause between law enforcement and national security interests. The prime exponent has been the FBI.[15] Against this backdrop, it is not surprising that the mid-1999 anticipation of a Federal Intrusion Detection Network -- building on longstanding security community attention to intrusion-detection technology and procedures -- was denounced by people worried about its excessive scope for invasion of privacy.[16] We can all agree that "intruders" are bad, but what does it take to detect them, without mistaking good guys for bad? Although the FIDnet saga continues to unfold as this is written in late summer 1999, it illustrates shortcomings in public communication about government plans, the capacity for different approaches to design and operation of systems by the government, and uncertainty about the boundary between acceptable and unacceptable government activity to promote trustworthiness. Intrusion detection, per se, is not the issue: It is inherent in widely-used anti-virus software and in commercial software to manage large networks in the private sector. It is a second-best approach: Assuming intrusions cannot be prevented, the next-best step is to detect them. But the FIDnet controversy underscores that regardless of the appeal of trustworthiness objectives in the abstract, how those objectives are pursued affects popular response and political viability.

Trustworthiness policy debates are hampered by a structural asymmetry in government: There has been no government entity with formal responsibility to protect civilian information infrastructure. Diverse entities focus on government infrastructure and national security-related infrastructure, but civilian infrastructure is, in general, owned by the private sector and at most subject to sector-specific regulation. The new Critical Infrastructure Assurance Office is a partial response to the perceived need to protect civilian infrastructure. The National Institute of Standards and Technology within the Department of Commerce is often presumed -- incorrectly -- to be responsible for civilian information infrastructure protection, but its role is much narrower.[17] Its principal contribution to trustworthiness policy has come through standards-setting activities,[18] while its hosting of the Computer System Security and Privacy Advisory Board has publicized and fostered debate on several contemporary issues.

Three themes emerge from contemporary policy-making in trustworthiness, which also inhere in critical infrastructure: the quests for private-public partnership, a better institutional approach, and the belief in the benefits of having more information. The PCCIP report invoked partnership as a mantra; it spawned new institutions, and new and old institutions are exploring how to develop more information about risks and responses. These themes reflect growing recognition by policy-makers that they have less control over information infrastructure than in the early and mid-20th century, and they signal an increasing tendency to frame policy as managing risk. Risk management implies judgments about the feasibility and desirability of preventing, insuring against, and/or self-insuring against potential problems. Common in many industries, it is tantamount to a concession that prevention is less attainable than some had hoped.[19]

The emphasis on collecting information -- developing sets of data, monitoring (surveillance?), and reporting -- is fundamental to public sector planning and private sector worries. Trustworthiness is tricky because it is hard, if not impossible, to measure the problem. There are more what-ifs than what-wases, and evidence ranges from anecdotes to opinion surveys -- something short of "data" in any rigorous sense. Lack of information leads to limited awareness and limited solutions in the market -- it prompts calls for more information.

Would more and better information make a difference? Would it induce a better market outcome? And would those results be worth the cost of getting it? These questions tend not to be asked in policy debates, where there seems to be a conviction that more information begets more action. Yet the repeated finding that even military systems -- manned by people obligated to be aware -- are inadequately protected against security problems raises questions about how much more awareness can help.[20] How people respond to the comparatively well-understood Year 2000 (Y2K) problem, where the "whole world is watching" for costs and consequences, will be an indicator.

Harder to measure is the response to publicity surrounding surveys about perceived on-line privacy: Some companies appear to be responding; a variety of watch-dog organizations have arisen to maintain visibility for the issue; governmental organizations are examining the situation, but whether people receive or are willing to pay for more protection remains to be seen. Of course, if awareness does not beget better behavior, so to speak, the advance of information technology suggests possibilities for depending less on human discretion: Automated information gathering can feed automated (and other) responses -- a possibility certain to feed civil liberties concerns, among others, as the FIDnet response suggests. The technology remains immature, buying time for further exploration of possible combinations of technology, people, and policy.

A Legal Gloss: From Cold Warriors to Hot Lawyers

Although technologies relating to trustworthiness are converging, differences in the nature, culture, and maturity of policy-making for different aspects of trustworthiness militate against an integrated approach. However logical, the argument that policies should be rationalized and made consistent is overly simplistic and premature. Cybernation's question, "How can society be certain that critical infrastructure information networks are reliable enough?" is important, but given the choices and tradeoffs, disagreements about how much is reliable enough are considerable. Moreover, an information infrastructure industrial base marked by firms that are either relatively young and/or lacking in history or motivation for cooperating with the government confounds development of public-private consensus. This is a time for inquiry and analysis, to ascertain whether and where movement toward policy holism can work. Although a "trustworthiness czar" may be premature, lack of coherence in relevant public policy has consequences that argue for rationalization over time.

What makes the late 1990s special is the entry of a new set of players that complement the national security and law enforcement crew that shaped security history. A decade following the Computer Security Act of 1987, which attempted to effect more balance between the Departments of Defense and Commerce in security, the Federal Trade Commission (FTC) has emerged as a new trustworthiness player. In mid-1998, the FTC proposed legislation (absent satisfactory self-regulation by industry) that would require on-line data collection to comply with "widely-accepted fair information practices."[21] In mid-1999, the FTC decided to defer pursuit of legislation based on evidence of industry improvement, although some privacy advocates would have liked more progress. The FTC's suggestion that "Web sites would be required to take reasonable steps to protect the security and integrity of that information"[22] illustrates how an agency not historically active in computer or network trustworthiness might advance information security faster than more traditional and focused programs. Similarly, the mid-1998 interpretive release of the Securities and Exchange Commission (SEC), another nontraditional player in trustworthiness, establishes a broad requirement for public companies to disclose their Year 2000 readiness, costs associated with inadequate readiness, risks, and contingency plans.[23] It illustrates the potential to build on conventional business mechanisms (others beside disclosure include auditing and insurance) to promote trustworthiness, and the facilitation of private action by reinforcing accountability.

The FTC phenomenon reflects the policy complications arising from the combination of information (content) and systems (the nature of the technology and services) in trustworthiness. Information security and national security/emergency preparedness policy have tended to focus on the nature of technology. By contrast, privacy policy emphasizes the information, per se, as well as how it is handled. It is not hard to see that a variety of issues relating to information policy (including intellectual property rights and freedom of speech as well as privacy) could bear on trustworthiness. They involve security mechanisms in implementation, and they affect popular perceptions of information infrastructure -- the willingness to trust technology. Accordingly, technology and information policy are melding not only at the FTC but also elsewhere, such as at the Federal Communications Commission, which moved in mid-1999 to promote software to filter offensive material on the Internet and has been pressured to move into regulation of such content (something it has not done).

Both information policy and trustworthiness policy feed on the growing value of information in an economy dominated by the Internet, the potential for technology to separate ownership from access or use of information, and the consequences of alternative patterns in the flow of information. These factors evoke numerous tradeoffs. They also contribute to the proliferation of law and legal action relating to trustworthiness. This trend is unfolding almost apart from the administrative action and programs that seem to dominate information security, national security/emergency preparedness, and critical infrastructure. It is consistent, however, with information policy, which is characterized by legal frameworks and dispute resolution.

PCCIP undertook a comprehensive review of laws and regulations relevant to critical infrastructure protection.[24] Its emphasis on concern about criminal conduct and the potential for deterrence through law is symptomatic of a trend toward criminalizing activity adverse to trustworthiness -- how those judgments are made is becoming a source of concern among civil liberties advocates and people seeking to promote benign manipulation of information and systems.

Criminalization concerns were amplified by the August 1999 Executive Order establishing a working group on "unlawful conduct on the Internet." Composed of a wide range of agency heads, the group is charged with assessing federal laws and technical tools relevant to investigating and prosecuting unlawful conduct on the Internet. Interestingly, the announcement drew more comment in Internet exchanges than seemed available from otherwise knowledgeable officials at the time. From the outside there are fears of a monolithic bureaucracy; on the inside, there is considerable diversity of knowledge and outlook.

A backdrop for the criminalization trend is the growing incidence of computer-related crime, evident in corporate espionage and hacking, and extending to organized crime and crimes against children. Computer crime experience over the decade has shown that information systems can be a target of crime, an instrumentality, or a repository or vehicle for evidence relating to crime. These systems can magnify the scale and impact of crime, and they pose challenges relating to anonymity, jurisdiction, and evidence. Such are the concerns that drive law enforcement interests as they participate in shaping trustworthiness policy. In the wake of the 1988 Internet Worm, a computer crime unit was established in the Department of Justice and soon expanded to include related intellectual property protection (another instance of the links between trustworthiness and information policy).[25]

Defining what constitutes criminal activity remains an ongoing process, often advancing as a result of specific case experience. Actions in several quarters are helping to define civil and criminal liability, which affects information systems businesses and the behavior of people as they design and use information systems. For example, resolution of liability issues is key to progress in public key infrastructure, an enabler of broader use of cryptography in electronic commerce, while the Year 2000 problem has propagated attention to cyberlaw issues across the economy and, in particular, within the legal community. Another broad-based development is the evolution of a potential piece of state-level legislation, the proposed Uniform Computer Information Transactions Act, which could alter the balance of consumer and vendor rights and responsibilities for software, the critical element in experienced trustworthiness.

While what is wrong is being defined by law, private rights are championed by advocacy groups and remain ambiguous. Debates continue over whether U.S. citizens have a right to privacy; there is a related debate regarding anonymity.[26] Policymaking with regard to trustworthiness reinforces these debates, because of new concerns about how law enforcement will be conducted and at what cost to such other trustworthiness concerns as privacy. Industry advocates and cyberlibertarians argue for a private right of choice for information technology, including private choices about technology for trustworthiness.

When Will We Be Ready?

Policy-making relating to critical infrastructure suggests attempts to be proactive, while U.S. law and policy generally favors a more reactive and private-sector-based (e.g., via contracts and/or tort litigation) approach. Much of this activity takes place at the state level, where physical points of presence (e.g., individual consumers and businesses) can be found; numerous questions and opinions relate to what can best be handled at state or federal levels. An emphasis on private-sector action is evident in privacy protection, where federal government efforts have supported self-regulation and the evolution of private-sector institutional policies to codify procedures and, ostensibly, best practices. Industry group responses are being offered for such trustworthiness challenges as filtering of offensive content[27] and privacy protection.[28] In some cases recent policy includes an aim of government leadership in establishing best practices: for example, in establishing CIAO, the White House intended that "[t]he Federal Government shall serve as a model to the private sector on how infrastructure assurance is best achieved and shall, to the extent feasible, distribute the results of its endeavors."[29] Although government (at all levels) has been criticized during the 1990s for lagging the private sector in understanding and use of modern information technology, cultivation of best practices within government should, at a minimum, contribute to policy-makers' insights into the challenges of making organizations and systems more trustworthy.

Nevertheless, private-sector skepticism will continue to color the policy-making context. Law enforcement participation in cryptography policy combined with broader concerns about privacy and free speech in electronic communications fed a 1990s surge in civil-liberties advocacy, including the establishment of vocal organizations that leverage Internet communications and staff legal skills. Consideration of any and all trustworthiness policy is and will continue to be contentious. Yet that contention promises greater balance over time.

Trustworthiness issues force questions about the merits of ex ante vs. ex post action, which differ when considering the individual components of privacy, security, reliability, and safety. Given the costs of regulation, the question within the trustworthiness domain is whether one can establish that the risk of substantial and irreparable harm justifies intervention, which has its own direct and indirect costs. The FTC decided to forbear on privacy after threatening intervention; its late 1990s actions regarding on-line consumer fraud (in connection with Year 2000 impacts on goods and services, Internet advertising, and use of the Internet for "a new generation of fraud that uses increasingly sophisticated technology") hint at more regulation or enforcement activity. Agreement on action has been easiest where the concern is about exploitation of children, a lightning rod in the development of Internet policy and a differentiator from the government and business -- adult -- concerns that dominate security and critical infrastructure.

Whereas Year 2000 problems are unfolding in a brief, known time-frame and involve relatively straightforward technology, the larger trustworthiness challenge is much less specific and more complex. As debate over legislative proposals for intellectual property protection and merchant responsibilities for software products illustrates, imperfect understanding of computer systems can yield language that would outlaw common, inoffensive conduct. That reality argues for continuing to have some lag between law and technology -- even in a world measured in "Internet time." Criticisms of new approaches to technology (e.g., the World Wide Web's Platform for Privacy Preferences, the Secure Electronic Transaction protocol for credit cards, or government-accessible keys proposed for encryption systems) have been linked to failings in the conceptual models -- including assumptions about how and by whom information and systems should be handled and assumptions about how much such behavior can be controlled -- and to attempts at one-size-fits-all solutions. The strongest lesson from recent history is how hard it is to get trustworthiness policy right. Cumulating experience, development of case law, and whatever progress toward public-private partnership actually happens will be important contributors to the elusive consensus required for trustworthiness policy holism.


[1] This paper reflects the personal views of the author and is not a statement of the National Research Council (NRC) or the National Academies, although it builds on the author's contributions to several NRC reports.

[2] Key concepts and technology attributes and issues are described in Computer Science and Telecommunications Board, National Academy of Sciences, Trust in Cyberspace (1999) [hereinafter Trust in Cyberspace]; Computer Science and Telecommunications Board, National Academy of Sciences, Computers at Risk: Safe Computing in the Information Age (1991) [hereinafter Computers at Risk]; aspects relating to cryptography are discussed in Computer Science and Telecommunications Board, National Academy of Sciences, Cryptography's Role in Securing the Information Society (1996) [hereinafter Cryptography's Role]. Those reports contain numerous references. See < >.

[3] For a fuller discussion of the policy history, see Marjory S. Blumenthal, The Politics and Policies of Enhancing Trustworthiness for Information Systems, Communication Law and Policy (4:4), Autumn 1999.

[4] President's Commission on Critical Infrastructure Protection, Critical Foundations, (1997) [hereinafter PCCIP Report].

[5] Like the PCCIP Report, Cybernation acknowledges at 2 that "[m]any of the recognized threats to the information networks supporting the domestic infrastructure have not actually been experienced."

[6] See Executive Office of the President, White Paper: The Clinton Administration's Policy on Critical Infrastructure Protection: Presidential Decision Directive 63 (1998) [hereinafter White Paper], available at <>, for a description of the directive, the goals, and the corresponding bureaucracy.

[7] See <>.

[8] Office of the Press Secretary, The White House, Fact Sheet: Keeping America Secure for the 21st Century: Computer Security and Critical Infrastructure (Jan. 22, 1999) <>.

[9] "This organization shall serve as a national critical infrastructure threat assessment, warning, vulnerability, and law enforcement investigation and response entity." Located in FBI headquarters, the center was launched in 1998 with a mission "to serve as the U.S. government's focal point for threat assessment, warning, investigation, and response for threats or attacks against our critical infrastructures." See A Message from Michael Vatis, Chief of the National Infrastructure Protection Center <>.

[10] See Daniel M. Verton, DOD preps office for cyberdefense, Federal Computer Week, July 13, 1998. Note that 1998 plans for reorganizing the Department of Defense included a separation of intelligence from command and control management.

[11] The Electronic Privacy Information Center and the Center for Democracy and Technology exemplify this concern; see <> and <>.

[12] See Cryptography's Role.

[13] See John Markoff, Finding Cellular Callers in an Emergency, New York Times, August 17, 1998, at D5.

[14] P.L. No. 103-414 (1994) (codified at 18 U.S.C. 2601 et seq.).

[15] The FBI runs a Washington Field Office Infrastructure Protection and Computer Intrusion Detection Squad. See <>. Note that the FCC has become involved in the call-monitoring supported by the FBI through CALEA, recently attracting attention by extending the monitoring potential for cellular phones. See Stephen Labaton, More Police Power Over Cell Phones, New York Times, August 18, 1999, at A1, A4.

[16] See John Markoff, U.S. Drafting Plan for Computer Monitoring System, New York Times, July 28, 1999.

[17] Under the Computer Security Act of 1987, NIST is responsible for sensitive but unclassified information systems within the federal government. As stated in the Act, "The Congress declares that improving the security and privacy of sensitive information in Federal computer systems is in the public interest, and hereby creates a means for establishing minimum acceptable security practices for such systems, without limiting the scope of security measures already planned or in use." Those practices include standards (FIPS) and plans and programs within federal agencies, both areas where NIST was given a lead. The Act also says that NIST is authorized "to assist the private sector, upon request, in using and applying the results of the programs and activities" under the Act.

[18] It produces Federal Information Processing Standards (FIPS) and it promotes private sector evaluation of compliance with standards (National Voluntary Laboratory Accreditation Program; see <>) in areas that relate to security issues; these efforts are sometimes adopted by the private sector (e.g., FIPS defining the Data Encryption Standard and its implementation). NIST has contributed to the Internet Engineering Task Force Internet Protocol Security (IPsec) effort (e.g., by developing a World Wide Web-based interoperability tester).

[19] Risk management is expressed, for example, in intrusion detection.

[20] See Computer Science and Telecommunications Bd., National Academy of Sciences, Realizing the Potential of C4I (1999), and U.S. General Accounting Office, DOD Information Security; Serious Weaknesses Continue to Place Defense Operations at Risk (1999).

[21] Such practices are typically framed as including notice and awareness of how personal information may be used, choice and consent by the subject of such information, access and participation in potential uses, and security and integrity in the handling of such information.

[22] See Prepared Statement of the Federal Trade Commission on Consumer Privacy on the World Wide Web before the Subcommittee on Telecommunications, Trade, and Consumer Protection of the House Comm. on Commerce (July 21, 1998), available at <>; Federal Trade Commission, Privacy Online: A Report to Congress n.160 (1998).

[23] The SEC has been issuing Year 2000 guidance since at least 1997. See Securities and Exchange Commission, Interpretation; Statement of the Commission Regarding Disclosure of Year 2000 Issues and Consequences by Public Companies, Investment Advisers, Investment Companies, and Municipal Securities Issuers (July 30, 1998) (Release Number 33-7558) <>; Gibson, Dunn & Crutcher LLP, SEC Issues Release Providing Additional Guidance on Year 2000 Disclosure Obligations (August 17, 1998). Anecdotal evidence indicates that the SEC directive has led most clearly to companies requiring their suppliers to sign documents attesting to their or their products' Y2K compliance.

[24] President's Commission on Critical Infrastructure Protection, Legal Foundations (1997), a series of 12 reports to the President's Commission on Critical Infrastructure Protection, available at <>.

[25] Other computer crime units reside in the FBI and Secret Service.

[26] See, e.g., A. Michael Froomkin, Flood Control on the Information Ocean: Living with Anonymity, Digital Cash, and Distributed Databases, 15 U. Pitt. J.L. & Com. 395 (1996).

[27] See <>.

[28] Activity in this area ranges from development of the Platform for Privacy Preferences through the World Wide Web Consortium to the newly announced International Security, Trust, and Privacy alliance (see "Major Companies Join Forces to Solve Security, Trust, and Privacy Issues in Electronic Business" at <>).

[29] See White Paper, supra note 6 (discussing PDD 63).

Released: September 22, 1999
iMP Magazine.

Editor's Note: At the request of the author, the CSTB URL was added to footnote 2 on September 24, 1999.

Copyright 1999. Marjory S. Blumenthal. All rights reserved.
