ATTACKS: Testing


by Kenya L Rolland


Testing is a vital part of computer technology. It provides assurance that a system is operating as expected and that there are no signs of malfunction in the network or the software installed on the system. The human factor in testing recognizes that people make mistakes, and attackers anticipate those mistakes and exploit them when given the opportunity. An attack can be launched against a system that was not shut down properly after testing, or through a user ID that was never logged off appropriately. This paper explores some of the ways that testing can create system vulnerabilities that are ideal for launching attacks on an information system, as well as security measures that can be implemented to reduce the risk of attacks associated with testing.

Testing defined

Many computer technicians stress the importance of having a personal computer tested regularly to ensure that the system provides the consumer with maximum efficiency. With routine testing it is naturally assumed that any sign of potential disruption will be detected and repaired immediately. However, during the testing process the system can be left vulnerable to an attack that did not originate in the system. Testing provides no absolute assurance that a system will never be infiltrated, because some of the methods used to conduct the test themselves leave the system vulnerable. Fred Cohen states that "testing can stress systems, inducing a period of time when systems operate differently than normal and may result in temporary or permanent inappropriate or unsafe configurations. Current analysis of protection testing is based on naive assumptions; testing issues are quite complex and some known problems are exponential in time and space."[1]


Computers commonly undergo extensive testing prior to purchase, assuring that the system has adequate memory and is responsive to commands. The primary goal of quality-control personnel is to ensure that the computer has ample time and storage space to provide the functions the consumer desires. People want computers that generate rapid results and hold enormous amounts of information; however, the price the consumer pays for this demand is the sacrifice of quality for quantity. Fred Cohen states that "When programs are written and tested, programmers commonly use debugging options and bounds checking on arrays. The debugging option allows program elements to be traced as the program is executed so that the programmer can find errors and determine their origin. Bounds-checking on arrays performs tests every time memory is accessed to make certain that only the proper portion of memory is being used. If the program is perfect, debugging and bounds-checking perform no useful function, but programs of substantial size are almost never perfect, even after substantial testing. The problem comes when the program is ready to be delivered to the customer. In order to make the program as small and fast as possible, software manufacturers commonly turn off debugging and bounds-checking, recompile the programs, and ship them out."[2]
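The pattern Cohen describes can be sketched in Python, which happens to have a direct analogue: `assert` statements act as debug-build bounds checks and are stripped when the interpreter runs with the `-O` flag, just as vendors recompile with checking turned off before shipping. The buffer and function names here are illustrative, not from the source.

```python
BUFFER = bytearray(8)  # fixed-size buffer, standing in for a C array of length 8

def checked_write(index, value):
    """Write to the buffer behind an explicit bounds check.

    The assert mirrors a debug-build bounds check: running Python with -O
    removes assert statements entirely, analogous to shipping a program
    recompiled with debugging and bounds-checking turned off.
    """
    assert 0 <= index < len(BUFFER), f"index {index} out of bounds"
    BUFFER[index] = value
```

In a debug run, `checked_write(9, 1)` fails loudly at the guard; under `python -O` the guard vanishes, which is exactly the window Cohen warns about (in Python the runtime still refuses the bad write, but in C nothing would).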


The major problem with testing computer systems is the inability to secure the system against interception by others with unauthorized access and malicious intentions. Testing causes an information system to operate differently than normal while the network is already open to authorized users, which makes attacks by unauthorized users difficult to detect. Testing is usually unsuccessful on large, globally networked systems because there is no mechanism in place for detecting intruders who are intentionally attacking the system. Intruders believe that the three easiest ways to penetrate a system are to:
1. impersonate an authorized employee or vendor agent to obtain the disclosure of sensitive access information, or to gain physical access to a facility housing critical systems;
2. take advantage of the defaults shipped with the system and its software;
3. fraudulently influence system hot-line support personnel to give out information and/or effect system changes, e.g. the reset of a user's password.[3]
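The second avenue, vendor defaults, is the one a test harness can screen for mechanically. The sketch below assumes an inventory of (username, password) pairs and checks it against a small, assumed list of widely published factory defaults; both the list and the function name are illustrative, not an authoritative catalogue.

```python
# Illustrative only: a few widely published factory defaults (assumed
# examples for the sketch, not an authoritative list).
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", ""),
    ("guest", "guest"),
}

def find_default_accounts(accounts):
    """Return the accounts still using vendor-shipped credentials.

    `accounts` is an iterable of (username, password) pairs as they might
    appear in a hypothetical test-harness inventory.
    """
    return [(user, pw) for (user, pw) in accounts if (user, pw) in KNOWN_DEFAULTS]
```

Any account this flags is exactly the kind of door an intruder tries first, so a testing cycle should end with this list empty.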

Testing differences

Testing of information systems can be accomplished in various forms. Many information systems are now built with self-test mechanisms designed to provide testing capability during the system's normal operation without causing any damaging effects. A test designed to replace current systems is much easier to control than a test that creates new conditions. To prevent the exposure of vulnerabilities during the testing process, a new environment should be created that corresponds to the new conditions. During the testing process it is recommended that a vulnerability assessment be conducted on the information system to ensure that the security plan has been adequately designed. Regular assessments are needed not only because new vulnerabilities are discovered daily, but because the day-to-day operations of any organization may result in the accidental reintroduction of a previously avoided vulnerability.[4] After the vulnerability assessment is completed, a penetration test is recommended. The penetration test is a proactive approach, conducted so that an authorized user can expose any vulnerabilities before an attacker has the opportunity to find the glitch in the system. The best way to describe the vulnerability assessment and penetration test is to compare them to checking that all your doors and windows are locked, and then coming back later to shake the doors and rattle the windows to make sure.[5]
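The door-rattling analogy maps naturally onto the simplest form of penetration check, a TCP connect scan: each open port is a window someone forgot to latch. A minimal sketch, using only the standard `socket` module; the function names are my own, and a real penetration test would of course go far beyond this.

```python
import socket

def port_is_open(host, port, timeout=0.5):
    """'Rattle one window': attempt a TCP connection and report whether
    the port accepted it (connect_ex returns 0 on success)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def rattle(host, ports):
    """Check each port in turn, returning the ones left 'unlocked'."""
    return [p for p in ports if port_is_open(host, p)]
```

Run against a host you are authorized to test, `rattle` returns the list of listening ports, each of which must then be justified or closed before the system is considered operational.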

Separation of duties

To ensure that computer systems are not easily infiltrated during the vulnerable state that testing creates, strategies must be implemented that secure the system with the human factor in mind. The security measures used to protect the system's network must include a chain of command that never gives one person the dual power of performing a task and reviewing its functioning. "The application programmer who codes, tests, and debugs programs in a new or modified system is not to be the person who tests the completed programs in the systems tests. The reason is twofold: a) the programmer, being proud of his creation, consciously or unconsciously does not want to find anything wrong with his programs, and so he will overlook "inconsequential bugs". But inconsequential bugs have a way of becoming big trouble and cause systems to malfunction; b) the systems analysts, auditors, and the users can verify via the systems tests that the programs respond to their requirements."[6]
By separating the duties of individuals who have access to vital information stored in the computer system, a checks-and-balances approach can detect any anomalies that went unnoticed by the individual who originally performed the task.
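This rule is simple enough to enforce in code. A minimal sketch of the check a change-approval system might apply, with wholly hypothetical field and person names; the point is only that the comparison of author to approver happens in the system, not on someone's honor.

```python
def approve_test_results(change, author, approver):
    """Enforce separation of duties: the programmer who coded and debugged
    a change may not also sign off on its system test.

    All names and fields here are illustrative, not from the source.
    """
    if approver == author:
        raise PermissionError(
            f"{approver} wrote change {change!r} and cannot approve it")
    return {"change": change, "author": author, "approved_by": approver}
```

With this in place, a programmer submitting their own work for approval gets an error rather than a rubber stamp, which is the checks-and-balances behavior the passage describes.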

Summary, Conclusions, and Further Work

It is evident that testing can provide unauthorized access to a computer's network and render it vulnerable to malicious attacks by intruders. However, testing is a vital part of maintaining information technology and cannot simply be eliminated. Therefore, to ensure that the system can be relied upon before it is considered operational, testing controls should be implemented. A few examples of testing controls include:
"a. Stringent administrative and physical controls setup to protect live/valid data during the testing cycle. b. Internal controls to protect the system against abuse or serious error during the testing cycle. c. Limited test files of raw (unprocessed) data, and limited test files of processed data for module and system testing setup. d. Recording/documenting of all activities (input, processing, errors, output) during the testing cycle."[7]
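Control (d), recording all activities during the testing cycle, is the easiest of the four to sketch. A minimal in-memory audit trail, assuming nothing beyond the standard library; a real implementation would write to append-only, access-controlled storage rather than a Python list.

```python
import json
import time

AUDIT_LOG = []  # stand-in for append-only storage in a real system

def record(activity, detail):
    """Control (d): log one timestamped entry for every input, processing
    step, error, and output that occurs during the testing cycle."""
    entry = {"when": time.time(), "activity": activity, "detail": detail}
    AUDIT_LOG.append(entry)
    return json.dumps(entry)  # a serialized copy for external archival
```

Called consistently, this gives the reviewers required by separation of duties a complete record to scrutinize after the test cycle ends.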
The best method for protecting a computer's hardware, software, and the data being processed is not testing alone, but the implementation of a security procedure that diminishes the probability of an attack being easily launched against the system. The separation-of-duties method subjects every task performed to strict scrutiny by an individual who was not involved in the process. After all, in the business world you will never see the individual responsible for preparing checks also be the individual responsible for signing them.


[1] Cohen, Fred. All.Net Security Database. Wiley and Sons, 1995. Obtained online.
[2] Cohen, Fred. Criminal Justice 625 Course Compact Disc. The University of New Haven, 1999. Obtained online.
[3] Kluepfel, Hank. "Countering Non-lethal Information Warfare: Lessons Learned on Foiling the Information Superhighway of the North American Public Switched Telephone Network." Obtained online.
[4] Franklin, Diane. "Keeping Ahead of Hackers: A Thief Only Has to Find One Open Window." Obtained online.
[5] Franklin, Diane. "Keeping Ahead of Hackers: A Thief Only Has to Find One Open Window." Obtained online.
[6] Van Duyn, J. The Human Factor in Computer Crime. Petrocelli Books, Princeton, New Jersey, 1985. p. 103.
[7] Van Duyn, J. The Human Factor in Computer Crime. Petrocelli Books, Princeton, New Jersey, 1985. p. 77.