Fri Apr 8 06:51:40 PDT 2016

Content control: How is harmful and useless content controlled in my computing environments?


Option 1: Filtering technologies.
Option 2: Syntax checking.
Option 3: Redundancy for verification.
Option 4: Transformation technologies with attribution.
Option 5: Change control processes.
Option 6: Structural mechanisms.
Option 7: Microzoning and virtualization
Option 8: Counterintelligence methods.
Option 9: Verification and testing processes.
Option A: Use an outside vendor as a pass-through.
Option B: Place defenses in the logical perimeters.
Option C: Place defenses in the network.
Option D: Place defenses on the endpoints.
Option E: Use defenses proactively in separate systems.


"Yes" indicates that the technique should be applied, "?" indicates that it is optional, and no entry implies it should not be chosen over other methods. A, B, C, D, and E indicate the placement of those defenses.
Option                                 Low Risk       Med Risk       High Risk
Filtering                              Yes [(A/B)D]   ? [(A/B)D]     [ABCD]
Syntax checking in context at inputs   Yes [B]        Yes [BD]       Yes [ABCD]
Redundancy for verification            ? [D]          Yes [BD]       Yes [BCD]
Transformations with attribution       ? [D]          Yes [AD]       Yes [BCD]
Change control processes               [AD]           Yes [ABCDE]    Yes [ABCDE]
Verification and testing processes     [E]            Yes [ABCDE]    Yes [ABCDE]
Structural mechanisms and zoning       [AB]           Yes [BC]       Yes [BCD]
Microzoning and virtualization         ? [D]          Yes [D]        ? [D]
Counterintelligence methods            Yes [B]        Yes [BCD]      Yes [ABCD]
Controls against harmful and useless content
A: Use an outside vendor as a pass-through.
B: Place defenses in the logical perimeters.
C: Place defenses in the network.
D: Place defenses on the endpoints.
E: Use defenses proactively in separate systems.


Filtering technologies, such as scanning for viruses and spam in email (inbound and outbound) and scanning for known confidential data (a.k.a. data leakage prevention (DLP), outbound), can provide low-quality protection in low risk situations, where many false positives and false negatives are acceptable if these technologies reduce cost and user inconvenience. They are particularly suitable for low surety environments in which the consequences of a protection failure are not severe, even in the aggregate. Filtering technologies can also be used in a tagged architecture for high surety content flow controls, because the tagging is always present and readable and is associated with a body of content based on where it is and has been.
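As a minimal sketch of the outbound (DLP-style) filtering described above, the check below scans a message for plaintext U.S. Social Security numbers before allowing it to pass. The single pattern is illustrative only; real DLP products use far richer rule sets and contextual checks.

```python
import re

# Illustrative pattern for plaintext U.S. Social Security numbers.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def filter_outbound(message: str) -> bool:
    """Return True if the message may pass, False if it should be blocked."""
    return SSN_PATTERN.search(message) is None
```

Note that a filter of this kind will produce both false positives (other 3-2-4 digit groupings) and false negatives (reformatted or encoded numbers), which is why the text reserves it for low surety use.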

Syntax checking, including length, symbol sequence, bounds for the specific input in each situation, and program state, should be applied at every input to every program and should be explicitly mandated at every input involving another computer, program, or human being, including within any program that interacts with a network. Syntax checking in the context of the program state is particularly effective at assuring that only valid inputs appear in each situation within a software process. This eliminates the methods commonly used to break out of the normal operation of software. Syntax checking is sometimes also used in outbound controls (e.g., to detect plaintext social security numbers as part of DLP), but is of limited utility there. As a fundamental notion, to meet this condition, input checking as a function of state should be performed at each point where input could cause harm, and only known valid inputs should be allowed to pass. At a minimum, such checks should include minimum and maximum input length and allowed symbols.
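The state-dependent checking described above can be sketched as a table of per-state input rules, failing closed for any state with no rule. The state names and patterns here are hypothetical; a real program would derive them from its own protocol or dialog.

```python
import re

# Hypothetical per-state rules: each program state admits only one syntax,
# bounded in length and restricted to an allowed symbol set.
RULES = {
    "await_username": re.compile(r"^[a-z][a-z0-9_]{2,15}$"),
    "await_pin":      re.compile(r"^\d{4,6}$"),
}

def valid_input(state: str, data: str) -> bool:
    """Accept only inputs known to be valid for the current program state."""
    rule = RULES.get(state)
    if rule is None:  # unknown state: fail closed
        return False
    return rule.fullmatch(data) is not None
```

Because validity is a function of state, an input that is acceptable in one situation (a numeric PIN) is rejected in another (a username prompt), which is the property the text identifies as blocking break-out attempts.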

Redundancy for verification should be used to verify inputs, with increasing amounts of redundancy as the consequences of wrong inputs increase. For all entries associated with addresses or similar locations, postal codes should be verified against states and addresses, and address checks against names should also be applied where feasible. Redundant verification should also be created by deploying more sensors and communication paths in higher consequence situations.
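The postal-code-against-state check above can be sketched as a cross-check between two redundant fields of the same entry. The three-entry prefix table is a deliberately tiny placeholder; a real system would use a complete postal database.

```python
# Hypothetical (and heavily abbreviated) table mapping leading ZIP digits
# to U.S. states; a production check would consult a full postal dataset.
ZIP_PREFIXES = {"94": "CA", "10": "NY", "60": "IL"}

def zip_matches_state(zip_code: str, state: str) -> bool:
    """Redundantly verify that a ZIP code is consistent with the state field."""
    expected = ZIP_PREFIXES.get(zip_code[:2])
    return expected == state
```

Neither field is trusted alone; an entry passes only when the redundant fields agree, and disagreement flags the record for review.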

Transformation technologies with attribution, such as cryptographic checksums and certificates, should be used to verify software and patches from commodity sources, such as software packages, disks, CDs, and patches from vendors. As surety levels increase, added verification processes should be used, and in medium and high surety environments, well-defined processes for acceptance of external software and hardware should be required. Transforms should also be used as part of the change control process to verify that alteration does not take place between inception and execution. This includes integrity shells, white listing, and other similar methods. Note that when transforms encrypt, they are problematic for other content controls such as filtering and syntax checking.
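A minimal sketch of checksum-based verification of vendor software follows: the file's SHA-256 digest is compared against the value published by the vendor. This shows only the checksum half of the technique; certificate-based attribution of the published digest itself is assumed to happen out of band.

```python
import hashlib

def verify_package(path: str, expected_sha256: str) -> bool:
    """Compare a file's SHA-256 digest against the vendor-published value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large packages do not need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

A package whose digest does not match the attributed value is rejected before installation, catching alteration anywhere between the vendor and the point of use.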

Change control processes are necessary in any medium or high consequence situation because they provide increased assurance that only authorized and properly tested changes take place. When used in combination with transforms to verify against unauthorized changes in operational systems, they form a testable basis for belief that the system operates as intended. Sound change control typically requires substantially more effort than simplistic approaches, and it is therefore reserved for situations in which the risk warrants the costs. It should also be used for any software provided to others, and certainly for any widely distributed hardware or software. Change control also includes assuring that changes do not occur in production except through the change control process. This includes the use of mandatory or discretionary access controls, integrity shells, white listing, and other similar methods.
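The combination of change control with transforms described above can be sketched as an audit against an approved manifest: the change control process records a digest for each file it authorizes, and a later audit reports any file whose current digest differs, i.e. a change that bypassed the process. The manifest format here is an assumption for illustration.

```python
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def audit_against_manifest(manifest: dict[str, str]) -> list[str]:
    """Return paths whose current digest differs from the approved one,
    indicating changes made outside the change control process."""
    return [path for path, digest in manifest.items()
            if file_digest(path) != digest]
```

Run periodically (or at load time, as an integrity shell would), an empty audit result is the testable basis for belief that only authorized changes are in place.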

Verification and testing processes are necessary in any medium or high consequence situation because they provide increased assurance that only authorized and properly tested original mechanisms are in place. They should normally be used in conjunction with sound change control and/or transformation technologies with attribution to verify against unauthorized changes in operational systems. Hardware and software should be put through systematic verification and testing processes to assure that all identifiable input sequences or classes of input sequences result in proper states and outputs. While it is infeasible in practice to generate complete tests for many such systems, measures of coverage should be attained and applied to understand the extent to which surety has been achieved.
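Testing classes of input sequences rather than every input, with a coverage measure, can be sketched as follows. The function under test and its input classes are toy placeholders; the point is that each identified class gets a representative test and the result is a coverage fraction.

```python
def classify(n: int) -> str:
    """Toy function under test: classify an integer by sign."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Each identified input class is represented by one test input and its
# expected output; real systems would derive classes from a specification.
INPUT_CLASSES = {"negative": -5, "zero": 0, "positive": 7}

def class_coverage() -> float:
    """Fraction of input classes whose representative test passed."""
    passed = sum(1 for label, rep in INPUT_CLASSES.items()
                 if classify(rep) == label)
    return passed / len(INPUT_CLASSES)
```

A coverage fraction below 1.0 quantifies exactly how far the system falls short of the surety goal, which is the measure the text calls for when complete testing is infeasible.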

Structural mechanisms such as network separation, digital diodes, one-way UDP channels, and network zoning approaches limit or eliminate the flow of information between different areas and thus limit the ability of unauthorized content to enter areas. This is a fundamental approach that should be applied with increasing surety as risks increase. For enterprise production environments, at least zoning and subzoning mechanisms should be used to limit inbound content and to limit interactions between business functions and their infrastructures.
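A one-way UDP channel can be sketched in software as a send-only socket: the low side has a transmit path but no receive path, so nothing can flow back. This is only a software analogue; a true digital diode enforces one-way flow in hardware.

```python
import socket

def send_one_way(payload: bytes, host: str, port: int) -> None:
    """Software analogue of a data diode: a send-only UDP datagram.
    The sender never reads from the socket, so no return path exists
    at the application level."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.sendto(payload, (host, port))
    finally:
        s.close()
```

Because UDP requires no handshake or acknowledgment, the receiving zone can consume the data without any channel existing for content to travel in the reverse direction.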

Microzoning and virtualization mechanisms limit the flow and retention of undesired and useless content by keeping it within the microzones and, with non-state-retaining virtual machines (VMs), by destroying all content other than that explicitly retained at the end of the period of use. This is a good approach for limiting untrusted content, applications, and access over periods of use, at the cost of limited overhead.
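The non-state-retaining pattern can be sketched with a throwaway workspace: untrusted work runs inside it, one explicitly named result is copied out, and everything else is destroyed at the end of the period of use. A temporary directory stands in here for the VM; the isolation is of course far weaker than real virtualization.

```python
import os
import shutil
import tempfile

def run_in_microzone(task, keep: str, dest: str) -> None:
    """Run a task inside a throwaway workspace; at exit, copy out only the
    explicitly retained file and destroy all other content, analogous to
    a non-state-retaining VM."""
    with tempfile.TemporaryDirectory() as zone:
        task(zone)  # untrusted work confined to the zone
        src = os.path.join(zone, keep)
        if os.path.exists(src):
            shutil.copy(src, dest)
    # Leaving the 'with' block deletes the zone and every residual file.
```

Only the named artifact survives; scratch files, caches, and any content the untrusted task dragged in are discarded with the zone.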

Counterintelligence methods include removing email addresses from external Web sites, reducing the profile of the enterprise and of important servers and services, not identifying potential targets, using anonymizing mechanisms for postings to external forums so that responses are limited and internal addresses and structure are not revealed, and so forth.

Use an outside vendor as a pass-through.
For things like spam filtering, antivirus, antispyware, and other similar content controls, outside vendors can often be a cost-effective alternative to spending internal time, effort, and resources. In these situations, access for pulled content and paths for pushed content go through outside vendors who provide independent protection. This also creates external dependencies.

Place defenses in the logical perimeters.
This approach places defenses at the perimeter, typically in the DMZ for a layered architecture. It has the advantage of being relatively centralized and manageable while retaining control by the enterprise, but it also means that a lot of traffic must be handled, and many decisions made, at the firewall that could otherwise be avoided. It typically involves creating proxy gateway mechanisms for most services allowed to pass in and out of the enterprise, with filtering embedded in those mechanisms.

Place defenses in the network.
This approach places defenses throughout the network and turns the network into an enforcement mechanism at many or all levels. This increases the need for resources and adds technical management challenges but provides greater defense-in-depth. It typically means the use of intrusion and anomaly detection, internal firewalls, and other similar mechanisms.

Place defenses on the endpoints.
Mobile endpoints presumably have to protect themselves, and other endpoints may reasonably be expected to protect themselves regardless of what the rest of the environment does or does not do. This typically involves antivirus and antispyware on end devices that are susceptible, anti-spam in email clients, and other similar controls as identified herein depending on the specific requirements.

Copyright(c) Fred Cohen, 1988-2015 - All Rights Reserved