Center for High Assurance Computer Systems (CHACS) Publications

1996

Archer, Myla M. and Heitmeyer, Constance L. "TAME: A Specialized Specification and Verification System for Timed Automata," Presented in the Work in Progress session at RTSS '96, Washington, D.C., December 4-6, 1996. PostScript, PDF

Assuring the correctness of specifications of real-time systems can involve significant human effort. The use of a mechanical theorem prover to encode such specifications and to verify their properties could significantly reduce this effort. A barrier to routinely encoding and mechanically verifying specifications has been the need first to master the specification language and logic of a general theorem proving system. Our approach to overcoming this barrier is to provide mechanical support for producing specifications and verifying proofs, specialized for particular mathematical models and proof techniques. We are currently developing a mechanical verification system called TAME (Timed Automata Modeling Environment) that provides this specialized support using SRI's Prototype Verification System (PVS). Our system is intended to permit steps in reasoning similar to those in hand proofs that use model-specific techniques. TAME has recently been used to detect errors in a realistic example.

Archer, Myla M. and Heitmeyer, Constance L. "Mechanical Verification of Timed Automata: A Case Study," Proceedings of the 1996 Real-Time Technology and Applications Symposium, 1996. PostScript, PDF

This paper reports the results of a case study on the feasibility of developing and applying mechanical methods, based on the proof system PVS, to prove propositions about real-time systems specified in the Lynch-Vaandrager timed automata model. In using automated provers to prove propositions about systems described by a specific mathematical model, both the proofs and the proof process can be simplified by exploiting the special properties of the mathematical model. This paper presents the PVS specification of three theories that underlie the timed automata model, a template for specifying timed automata models in PVS and an example of its instantiation, and both hand proofs and the corresponding PVS proofs of two propositions. It concludes with a discussion of our experience in applying PVS to specify and reason about real-time systems modeled as timed automata.

Atkinson, Randall J., McDonald, Daniel L., Phan, Bao G., Metz, Craig W., and Kenneth C. Chin. "Implementation of IPv6 in 4.4 BSD," Proceedings of the 1996 USENIX Technical Conference, USENIX Association, San Diego, California, January 1996. pp. 113-125. PostScript, PDF

The widespread availability of the TCP/IP protocols in early versions of BSD UNIX fostered the current widespread use of those protocols in commercial products. Recently the Internet Engineering Task Force (IETF) has designed version 6 of the Internet Protocol (IPv6). IPv6 has some similarities with IPv4, but it also has many differences, most notably in address size. This paper describes our experience creating a freely distributable implementation of IPv6 inside 4.4 BSD, with focus on the areas that have changed between the IPv4 and IPv6 implementations.

Bharadwaj, Ramesh and Heitmeyer, Constance L. "Applying the SCR Requirements Specification Method to Practical Systems: A Case Study," Presented at the 21st Software Engineering Workshop, NASA GSFC, Greenbelt MD, USA, Dec 4-5, 1996. PostScript, PDF

Researchers at the Naval Research Laboratory (NRL) have been working on a formal method based on tables, known as the Software Cost Reduction (SCR) method, to specify the requirements of practical systems. In this paper, we present an SCR requirements specification of a simplified mode control panel for the Boeing 737 autopilot. We use the autopilot mode control panel as an example for comparing and contrasting the SCR approach to requirements specification and analysis with the approach of Ricky Butler, who uses the language of SRI's Prototype Verification System (PVS). We conclude with a discussion of general issues such as the appropriate level of abstraction for documenting requirements, the choice of notation, the kinds of analyses that can be performed, and the role of tool support.

Goldschlag, David M. "Several Secure Store and Forward Devices," Proceedings of the Third ACM Conference on Computer and Communications Security, New Delhi, India, March 1996. PostScript, PDF

DoD system high enclaves are often isolated from systems at other security levels because the usual connectors (guards) are expensive to procure, integrate, accredit, and operate, and usually require a human in the middle to review the data flow, independent of direction. This isolation reduces the effectiveness of information systems. The secure store and forward devices described in this paper can be used to solve an important (yet tractable) half of the problem: moving data from LOW to HIGH without a human in the middle. These devices were expressly designed to be easy to accredit. Security critical function is both minimized and separated from non-security critical function to reduce the need for trusted components. A prototype implementation of one of these store and forward devices is described as well.

Goldschlag, David M., Carl E. Landwehr, and Michael G. Reed. "Agent Safety and Security," (Chapter 17) in Bots and Other Internet Beasties, Joseph Williams (editor), Macmillan Computer Publishing, 1996. PostScript, PDF

Automobiles have proven to be a wonderful invention; people all over the world depend on them. However, cars used improperly can cause injuries - especially those that serve as hiding places for bombs. It is hard to imagine a software agent that could cause physical harm to anyone - it is only software, after all. What if that software controls an electrical appliance, such as a coffee maker? Could a control failure cause the coffee maker to overheat and start a fire?

Goldschlag, David M., Michael G. Reed, and Paul F. Syverson. "Hiding Routing Information," Information Hiding, R. Anderson (editor), Springer-Verlag LNCS 1174, 1996, pp. 137-150. PostScript, PDF

This paper describes an architecture, Onion Routing, that limits a network's vulnerability to traffic analysis. The architecture provides anonymous socket connections by means of proxy servers. It provides real-time, bi-directional, anonymous communication for any protocol that can be adapted to use a proxy service. Specifically, the architecture provides for bi-directional communication even though no one but the initiator's proxy server knows anything but previous and next hops in the communication chain. This implies that neither the respondent nor his proxy server nor any external observer need know the identity of the initiator or his proxy server. A prototype of Onion Routing has been implemented. This prototype works with HTTP (World Wide Web) proxies. In addition, an analogous proxy for TELNET has been implemented. Proxies for FTP and SMTP are under development.

Heitmeyer, Constance L. "Requirements Specifications for Hybrid Systems," Proceedings, Hybrid Systems Workshop III, Lecture Notes in Computer Science, Springer-Verlag, edited by R. Alur, T. Henzinger, and E. Sontag, 1996 (to appear). PostScript, PDF

This paper presents a formal framework for representing and reasoning about the requirements of hybrid computer systems. By a hybrid computer system, we mean a computer system whose environment includes both continuous and discrete entities. As background, the paper briefly reviews an abstract model for specifying system and software requirements, called the Four Variable Model, and a related requirements method, called SCR (Software Cost Reduction). The paper then introduces a special discrete version of the Four Variable Model, called the SCR requirements model, and suggests how the SCR model can be extended to specify and to reason about hybrid systems. Examples of how the SCR model can be used to specify a system's timing and accuracy requirements are given.

Heitmeyer, Constance L., Ralph D. Jeffords, and Bruce G. Labaw. "Automated Consistency Checking of Requirements Specifications," ACM Trans. on Software Eng. and Methodology 5, 3, July 1996, 231-261. PostScript, PDF

This paper describes a formal analysis technique, called consistency checking, for automatic detection of errors, such as type errors, nondeterminism, missing cases, and circular definitions, in requirements specifications. The technique is designed to analyze requirements specifications expressed in the SCR (Software Cost Reduction) tabular notation. As background, the SCR approach to specifying requirements is reviewed. To provide a formal semantics for the SCR notation and a foundation for consistency checking, a formal requirements model is introduced; the model represents a software system as a finite state automaton, which produces externally visible outputs in response to changes in monitored environmental quantities. Results are presented of two experiments which evaluated the utility and scalability of our technique for consistency checking in a real-world avionics application. The role of consistency checking during the requirements phase of software development is discussed.
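
The two central checks the abstract names, nondeterminism and missing cases, can be illustrated with a small sketch. Everything below is hypothetical (the variables, conditions, and outputs are invented, and brute-force state enumeration stands in for the SCR toolset's table analysis); it is meant only to show what disjointness and coverage checking of a condition table means:

```python
from itertools import product

# Toy illustration (not the SCR toolset): a condition table maps conditions
# over monitored variables to output values. Consistency requires the
# conditions to be pairwise disjoint (no nondeterminism) and to cover every
# state (no missing cases).

# Hypothetical monitored variables, each over a small finite domain.
DOMAINS = {"mode": ["cruise", "override"], "too_fast": [False, True]}

# Hypothetical condition table: each row is (condition predicate, output).
TABLE = [
    (lambda s: s["mode"] == "cruise" and s["too_fast"], "throttle_down"),
    (lambda s: s["mode"] == "cruise" and not s["too_fast"], "hold"),
    (lambda s: s["mode"] == "override", "manual"),
]

def check_table(table, domains):
    """Enumerate all states; report nondeterminism and missing cases."""
    names = list(domains)
    problems = []
    for values in product(*(domains[n] for n in names)):
        state = dict(zip(names, values))
        matches = [out for cond, out in table if cond(state)]
        if len(matches) > 1:
            problems.append(("nondeterminism", state, matches))
        elif not matches:
            problems.append(("missing case", state, []))
    return problems

print(check_table(TABLE, DOMAINS))  # an empty list means the table is consistent
```

Dropping the `override` row, for example, would make `check_table` report a missing case for every `override` state; the real tools perform such checks symbolically rather than by enumeration, which is what makes them scale.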

Kang, Myong H., Ira S. Moskowitz, Bruce Montrose and James Parsonese. "A Case Study of Two NRL Pump Prototypes," 12th Annual Computer Security Applications Conference, San Diego, 1996. PostScript, PDF

As computer systems become more open and interconnected, the need for reliable and secure communication also increases. The NRL Pump was introduced to balance the requirements of reliability, congestion control, fairness, and good performance against those of threats from covert channels and denial of service attacks.

In this paper, we describe two prototype efforts. One implements the Pump at the process (top) layer in terms of a 4-layer network reference model and the other implements the Pump at the transport layer. We then discuss lessons learned and how these lessons will be used in deciding upon the final hardware implementation of the Pump.

Kang, Myong H., Judith N. Froscher and Ira S. Moskowitz. "A Framework for MLS Interoperability," IEEE High Assurance Systems Engineering Workshop, Niagara-on-the-Lake, 1996. PostScript, PDF

Distributed object-oriented computing (DOC) is a new computing paradigm that promotes component-based development, location independence, scalability, software reuse, etc. Users of multilevel security (MLS) technology want to take advantage of these new technologies. However, the process of incorporating new technologies into MLS products is slower than the analogous process for non-secure commercial products because MLS products must go through rigorous evaluation/certification procedures.

We propose an architectural framework that speeds up the process of introducing new technologies to MLS users. We examine the drawbacks of traditional MLS approaches and take a fresh look at the requirements of MLS users. We then introduce security-critical components that can enable MLS solutions and an MLS architectural framework that can accommodate not only legacy systems but also new technologies, including DOC, without jeopardizing system security. Our framework separates security critical components/functions from the rest of the system because these components must go through rigorous evaluation/certification processes. This approach enables the secure use of new technologies for MLS users.

Kang, Myong H., Ira S. Moskowitz and Daniel C. Lee. "A Network Pump," IEEE Transactions on Software Engineering, Vol. 22, No. 5, May 1996. PostScript, PDF

A designer of reliable multi-level secure (MLS) networks must consider covert channels and denial of service attacks in addition to traditional network performance measures such as throughput, fairness, and reliability. In this paper we show how to extend the NRL data Pump to a certain MLS network architecture in order to balance the requirements of congestion control, fairness, good performance, and reliability against those of minimal threats from covert channels and denial of service attacks. We back up our claims with simulation results.

Klinker, James E. "Multicast Tree Construction in Directed Networks," Proceedings of IEEE MILCOM '96. PostScript, PDF, HTML

Significant interest exists within the military in moving towards an integrated services environment where traditional network services such as ftp, telnet, and e-mail can co-exist with real-time services such as voice, video, and satellite imagery. Multicast routing is an effective means of providing the efficient utilization of network resources required to realize such an environment.

Traditional multicast routing algorithms assume a symmetric network topology. Many military communication assets are either asymmetric in their load or asymmetric in capacity (a good example is Direct Broadcast Satellite). In addition, many military communication assets are bandwidth constrained, and routing symmetrically may further contribute to congestion. Therefore, multicast tree construction which tolerates network asymmetry is desirable for many military communication environments.

This paper proposes an algorithm for constructing shared multicast distribution trees in networks with asymmetric link capacities or loads. The algorithm tolerates asymmetry by building distinct, loop-free, sender and receiver paths onto a shared delivery tree. Additionally, the algorithm exhibits desirable security properties. Simulation results are presented that demonstrate the lower tree cost and better load balancing characteristics of the resultant trees over shortest path trees, with only a modest increase in path length.
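
A hedged sketch of why asymmetry forces distinct sender and receiver paths: in a directed network, the cheapest route from a member toward a core node need not be the reverse of the cheapest route from the core back to the member. The topology, node names, and link costs below are invented, and plain Dijkstra stands in for the paper's tree-construction algorithm; this shows only the underlying observation, not the shared-tree construction itself:

```python
import heapq

# Hypothetical asymmetric topology: GRAPH[u][v] is the cost of link u -> v.
# The direct core -> member link is cheap, but member -> core is expensive,
# so the two directions take different routes.
GRAPH = {
    "member": {"core": 10, "relay": 1},
    "relay":  {"core": 1},
    "core":   {"member": 1},
}

def shortest_path(graph, src, dst):
    """Plain Dijkstra over directed links; returns the node sequence."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None

print(shortest_path(GRAPH, "member", "core"))  # sender path goes via the relay
print(shortest_path(GRAPH, "core", "member"))  # receiver path uses the direct link
```

Because the two directions differ, an algorithm that grafted only one of them onto a shared tree would either waste capacity or create loops, which is why the paper builds distinct, loop-free sender and receiver paths.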

Macker, Joseph P. "Controlled Link Sharing and Quality of Service Data Transfer for Military Internetworking," Proceedings of IEEE MILCOM '96. PostScript, PDF

This paper discusses system design issues related to enhancing present internetworking architectures to achieve controlled link sharing and high assurance data interchange guarantees. The military services are implementing both wired and wireless Internet Protocol (IP) based data networks to provide interoperable, heterogeneous network connectivity. At present, internetwork routing products forward network data traffic with limited concern for the link sharing policies or the specific quality requirements of the traffic flow. An enhanced Integrated Services IP architecture is emerging which provides solutions for a rich set of resource sharing requirements. We present an overview of this architecture and discuss performance issues for candidate system components in a military context. The strong conclusion is that, based upon recent research and emerging technologies, a dynamic mixture of guaranteed services and controlled link sharing is achievable over operational packet networks. We recommend future work to validate candidate servicing models and to understand military application, security, and policy management requirements within this enhanced architecture.

Macker, Joseph P., James E. Klinker and M. Scott Corson. "Reliable Multicast Data Delivery for Military Networking," Proceedings of IEEE MILCOM '96. PostScript, PDF, HTML

Multicast networking support is becoming an increasingly important technology area for both commercial and military distributed or group-based applications. The underlying delivery mechanism for IP multicast is presently the User Datagram Protocol (UDP) or raw IP packets. At present, these mechanisms provide a "best effort" delivery service. Best effort implies that IP packets are treated with essentially equal weight, and while IP makes an effort to deliver all packets to their destination, packets may occasionally be delayed, lost, duplicated, or delivered out of order. In the past such delivery mechanisms have worked fine for supporting traffic insensitive to occasionally lost or missing data (e.g., voice, video). An increasing variety of distributed multimedia applications are being developed in which a consistent and/or reliable data delivery of all or a subset of data packets is a critical performance factor. In future military tactical internetworks, situational awareness data will play a major role as a critical multicast application. Reliable group file transfer (e.g., image dissemination) and interactive mission planning applications are also likely applications for military mobile units.

This paper presents a taxonomy of presently reliable multicasting solutions. The protocols are classified in terms of performance issues and scalability. Using this taxonomy, reliable multicast solutions are considered for various military applications such as mission planning, Distributed Interactive Simulation (DIS), and situational awareness dissemination in a shared WAN environment.

McDermott, John P. and David Goldschlag. "Towards a Model of Storage Jamming," Proceedings of the Ninth Computer Security Foundations Workshop, Kenmare, Ireland, June 1996, pp. 176-185. PostScript, PDF

Storage jamming can degrade real-world activities that share stored data. Storage jamming is not prevented by access controls or cryptographic techniques. Verification to rule out storage jamming logic is impractical for shrink-wrapped software or low-cost custom applications. Detection mechanisms do offer more promise. In this paper, we model storage jamming and a detection mechanism, using Unity logic. We find that Unity logic, in conjunction with some high-level operators, models storage jamming in a natural way and allows us to reason about susceptibility, rate of jamming, and impact on persistent values.

McDonald, Daniel L., Phan, Bao G., and Randall J. Atkinson. "A Socket-based Key Management API for BSD UNIX," Proceedings of INET'96 Conference, Internet Society, Reston, VA, June 1996. PostScript

This paper presents an Application Programming Interface (API) which, in combination with interfaces presented to a networking protocol implementation, provides a set of abstractions allowing different session key management schemes to be securely built outside the operating system kernel. This permits a more modular key management implementation, which in turn facilitates a high assurance implementation of a key management protocol and permits system administrators to change or add key management modules more easily.

Meadows, Catherine A. "Language Generation and Verification in the NRL Protocol Analyzer," Proceedings of the 9th Computer Security Foundations Workshop, IEEE Computer Society Press, 1996. PostScript, PDF

The NRL Protocol Analyzer is a tool for proving security properties of cryptographic protocols, and for finding flaws if they exist. It is used by having the user first prove a number of lemmas stating that infinite classes of states are unreachable, and then perform an exhaustive search on the remaining state space. One main source of difficulty in using the tool is in generating the lemmas that are to be proved. In this paper we show how we have made the task easier by automating the generation of lemmas involving the use of formal languages.

Meadows, Catherine A. "Analyzing the Needham-Schroeder Public Key Protocol: A Comparison of Two Approaches," Proceedings of ESORICS, Springer Verlag. PostScript, PDF

In this paper we contrast the use of the NRL Protocol Analyzer and Gavin Lowe's use of the model checker to analyze the Needham-Schroeder public key protocol. This is used as a basis to compare and contrast the two systems and to point out possible future directions for research.

Montrose, Bruce and Kang, Myong H. "An Implementation of the Pump: The Event Driven Pump," NRL Memorandum Report 5540-96-7850. PostScript, PDF

As computer systems become more open and interconnected, the need for reliable and secure communication also increases. In this report, we discuss a communication device, the NRL Pump, and introduce an implementation of the Pump: the Event Driven Pump that balances the requirements of reliability and security. The Pump provides acknowledgments (Acks) to the message source to ensure reliability. These Acks are also used for flow control to prevent the Pump's buffer from becoming or staying full. This is desirable because once the buffer is filled there exists a huge covert communication channel.

We have prepared this report for system designers and programmers who want to understand the basic structure of the event-driven Pump. We also hope this report is helpful to the people who will maintain the Pump code. In this report, we assume that the reader is familiar with the material presented in previous Pump papers.
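
The Pump's central idea, as described in the Pump papers, is that acknowledgments to the LOW sender are timed from a moving average of the HIGH receiver's recent ack times plus random noise: average throughput tracks the receiver, but the timing of any single ack is decoupled from HIGH's behavior, throttling the covert timing channel. The sketch below illustrates only that timing rule; the class, window size, and noise distribution are illustrative choices, not the report's implementation:

```python
import random
from collections import deque

WINDOW = 5  # number of recent HIGH ack times to average (hypothetical size)

class PumpSketch:
    """Illustration of the Pump's ack-timing rule, not the Event Driven Pump."""

    def __init__(self):
        self.high_ack_times = deque(maxlen=WINDOW)

    def record_high_ack(self, elapsed):
        """Record how long HIGH took to acknowledge a delivered message."""
        self.high_ack_times.append(elapsed)

    def low_ack_delay(self):
        """Delay before acking LOW: random noise around the moving average."""
        avg = sum(self.high_ack_times) / len(self.high_ack_times)
        return random.expovariate(1.0 / avg)  # mean equals the moving average

pump = PumpSketch()
for t in [0.8, 1.2, 1.0, 0.9, 1.1]:  # simulated HIGH ack times (seconds)
    pump.record_high_ack(t)
delays = [pump.low_ack_delay() for _ in range(10_000)]
print(round(sum(delays) / len(delays), 1))  # hovers near the 1.0 s moving average
```

If HIGH slows down to signal covertly, the moving average (and hence LOW's ack rate) drifts with it only gradually, and the added randomness obscures any fine-grained modulation.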

Moore, Andrew P. and Charles N. Payne Jr. "The RS-232 Character Repeater Refinement and Assurance Argument," NRL Memorandum Report 5540-96-7872, July 25, 1996. HTML

Past experience in system security certification indicates the need for developers of high assurance systems to coherently integrate the evidence that their system satisfies its critical requirements. This document describes a method based on literate programming techniques to help developers present the evidence they gather in a manner that facilitates the certification effort. We demonstrate this method through the implementation and verification of a small but non-trivial, security-relevant example, an RS-232 character repeater. By addressing many of the important issues in system design, we expect that this example will provide a model for developing assurance arguments for full-scale composite systems with corresponding gains in the expediency of the system certification process.

Moore, Andrew P. and Charles N. Payne Jr. "Increasing Assurance with Literate Programming Techniques," Proceedings of the 11th Annual Conference on Computer Assurance, June 17-21, 1996. PostScript, PDF

The assurance argument that a trusted system satisfies its information security requirements must be convincing, because the argument supports the accreditation decision to allow the computer to process classified information in an operational environment. Assurance is achieved through understanding, but some evidence that supports the assurance argument can be difficult to understand. This paper describes a novel application of a technique, called literate programming, that significantly improves the readability of the assurance argument while maintaining its consistency with formal specifications that are input to specification and verification systems. We describe an application of this technique to a simple example and discuss the lessons learned from this effort.

Moskowitz, Ira S., Greenwald, Steven J., and Myong H. Kang. "An Analysis of the Timed Z-channel," Proceedings of the IEEE Symposium on Security and Privacy, Oakland, CA, May 6-8, 1996. PostScript, PDF

Our timed Z-channel (a general case of the Z-channel) appears as the basis for a large class of covert channels. Golomb analyzed the Z-channel, a memoryless channel with two input symbols and two output symbols, where one of the input symbols is transmitted with noise while the other is transmitted without noise, and the output symbol transmission times are equal. We introduce the timed Z-channel, where the output symbol transmission times are different. Specifically, we show how the timed Z-channel applies to two examples of covert timing channel scenarios: a CPU scheduler, and a token ring network. We then give a detailed analysis of our timed Z-channel. We report a new result expressing the capacity of the timed Z-channel as the log of the root of a trinomial equation. This changes the capacity calculation from an optimization problem into a simpler algebraic problem and illustrates the relationship between the noise and time factors. Further, it generalizes Shannon's work on noiseless channels for this special case. We also report a new result bounding the timed Z-channel's capacity from below. Finally, we show how an interesting observation that Golomb reported for the Z-channel also holds for the timed Z-channel.
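
The flavor of a "capacity as the log of a root" result can be seen in the noiseless special case the abstract attributes to Shannon: for a noiseless channel with two symbols of durations t1 and t2, the capacity in bits per unit time is log2(x) for the unique root x > 1 of x**(-t1) + x**(-t2) = 1. The sketch below solves that equation numerically; it illustrates only this classical special case, not the paper's noisy trinomial analysis:

```python
import math

def noiseless_timing_capacity(t1, t2):
    """Bisect for the root x > 1 of x**-t1 + x**-t2 = 1; return log2(x)."""
    f = lambda x: x ** -t1 + x ** -t2 - 1.0  # strictly decreasing for x > 1
    lo, hi = 1.0 + 1e-12, 4.0
    while f(hi) > 0:           # widen the bracket until f(hi) < 0
        hi *= 2
    for _ in range(200):       # bisection to machine precision
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.log2((lo + hi) / 2)

# Equal durations t1 = t2 = 1 reduce to an ordinary binary alphabet:
print(round(noiseless_timing_capacity(1, 1), 6))  # 1.0 bit per unit time
# Unequal durations give the golden-ratio root x = (1 + sqrt(5)) / 2:
print(round(noiseless_timing_capacity(1, 2), 6))  # log2 of the golden ratio
```

Adding noise to one symbol, as in the timed Z-channel, turns the optimization over input distributions into a root of a trinomial of the same general shape, which is the algebraic simplification the paper reports.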

Reed, Michael G., Paul F. Syverson and David M. Goldschlag. "Proxies for Anonymous Routing," Proceedings of the 12th Annual Computer Security Applications Conference, San Diego, CA, December 1996. PostScript, PDF

Using traffic analysis, it is possible to infer who is talking to whom over a public network. This paper describes a flexible communications infrastructure, Onion Routing, which is resistant to traffic analysis. Onion routing lives just beneath the application layer, and is designed to interface with a wide variety of unmodified Internet services by means of proxies. Onion routing has been implemented on Sun Solaris 2.4; in addition proxies for World Wide Web browsing (HTTP), remote logins (RLOGIN), e-mail (SMTP), and file transfers (FTP) have been implemented.

Onion routing provides application-independent, real-time, and bi-directional anonymous connections that are resistant to both eavesdropping and traffic analysis. Applications making use of onion routing's anonymous connections may (and usually should) identify their users over the anonymous connection. User anonymity may be layered on top of the anonymous connections by removing identifying information from the data stream. Our goal here is anonymous connections, not anonymous communication. The use of a packet switched public network should not automatically reveal who is talking to whom. This is the traffic analysis that onion routing complicates.
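
The layered structure that gives onion routing its name can be sketched in miniature. Everything here is a toy: the stand-in XOR cipher replaces the real system's public-key cryptography, and the router names and key scheme are invented. The point is only that the initiator wraps the payload once per hop, so each router, removing its own layer, learns the next hop and nothing more:

```python
import base64
import hashlib
import json

def _keystream_xor(key, data):
    """Stand-in symmetric cipher: XOR with a SHA-256-derived keystream."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def wrap(payload, route, keys):
    """Initiator wraps the payload in one layer per hop, innermost first."""
    blob, next_hop = payload.encode(), "exit"
    for hop in reversed(route):
        layer = json.dumps({"next": next_hop,
                            "data": base64.b64encode(blob).decode()})
        blob = _keystream_xor(keys[hop], layer.encode())
        next_hop = hop
    return blob

def peel(blob, hop, keys):
    """One router removes its layer, learning only the next hop."""
    layer = json.loads(_keystream_xor(keys[hop], blob))
    return layer["next"], base64.b64decode(layer["data"])

keys = {h: h.encode() * 4 for h in ("R1", "R2", "R3")}  # toy per-router keys
hop, onion = "R1", wrap("hello", ["R1", "R2", "R3"], keys)
while hop != "exit":            # in reality each router holds only its own key
    hop, onion = peel(onion, hop, keys)
print(onion.decode())  # the payload emerges only after every layer is removed
```

No single router sees both the initiator and the responder, which is the property that frustrates traffic analysis.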

Syverson, Paul F. "Limitations on Design Principles for Public Key Protocols," Proceedings of the 1996 IEEE Symposium on Security and Privacy, Oakland, CA, 1996, IEEE CS Press, pp. 62-73. PostScript, PDF

Recent papers have taken a new look at cryptographic protocols from the perspective of proposing design principles. For years the main approach to cryptographic protocols has been logical, and a number of papers have examined the limitations of those logics. This paper takes a similar cautionary look at the design principle approach. Limitations and exceptions are offered on some of the previously given basic design principles. The focus is primarily on public key protocols, especially on the order of signature and encryption. But, other principles are discussed as well. Apparently secure protocols that fail to meet principles are presented. Also presented are new attacks on protocols as well as previously claimed attacks which are not.

Syverson, Paul and Meadows, Catherine, "A Formal Language for Cryptographic Protocol Requirements," Designs, Codes, and Cryptography, vol. 7, no. 1/2, pp. 27-59, 1996. PostScript, PDF

In this paper we present a formal language for specifying and reasoning about cryptographic protocol requirements. We give sets of requirements for key distribution protocols and for key agreement protocols in that language. We look at a key agreement protocol due to Aziz and Diffie that might meet those requirements and show how to specify it in the language of the NRL Protocol Analyzer. We also show how to map our formal requirements to the language of the NRL Protocol Analyzer and use the Analyzer to show that the protocol meets those requirements. In other words, we use the Analyzer to assess the validity of the formulae that make up the requirements in models of the protocol. Our analysis reveals an implicit assumption about implementations of the protocol and reveals subtleties in the kinds of requirements one might specify for similar protocols.

