Keeping IT Pure

By E. Eugene Schultz

Every aspect of an information system--from the network infrastructure down to bit sequences in data files--requires protection from unauthorized changes. Providing it is a huge task.

Most organizations in commercial, government, academic and other arenas are deeply concerned about the continued onslaught of security violations in networks and computing systems. These incidents are damaging not only because of data loss, embarrassment and legal repercussions but also because of unauthorized changes to data and system integrity. The effort needed to detect and analyze these changes, to restore compromised systems to their normal mission status, and to verify that information and data values have not changed can cost more than all other losses associated with an incident.

Information security is commonly viewed in terms of the need for confidentiality, integrity and availability of systems and data. Other needs, such as protection against unauthorized observation and possession of data, should be considered as well. Yet not all needs are equally important; integrity is in many respects the most fundamental security consideration in today's computing environments. In a significant compromise of system integrity, for example, continued availability of corrupted services and data (as in life-critical systems) may be more detrimental than total unavailability, especially when manual completion of tasks normally performed by computing systems is an option. Similarly, the motivation for protecting corrupted data from unauthorized observation and possession is at best questionable.

A system or network has integrity if all its components are free of unauthorized modification. Unauthorized modification of system and network software and data files is a constant security threat. In fact, the vast majority of today's information security-related incidents involve some kind of modification of system software to allow perpetrators to gain privileged access, create back doors into systems and protect unauthorized activity from discovery. Data integrity is similar in that it requires that no unauthorized changes to data files and packets have occurred.

It is worth noting that the need to maintain integrity goes beyond security. System hardware failures, media corruption, system bugs, errors in algorithms, human error and other problems can result in unintended changes in integrity. These sources of integrity compromise, especially human error, can lead to considerable loss and disruption, but they involve issues other than unauthorized human activity and malicious programs.

Integrity may superficially appear to be a global property that systems either have or do not have. Although this view is correct to the degree that one change in a system is likely to affect the integrity of the entire system, integrity manifests itself in different ways, and it is necessary to pay attention to the challenges that each type of integrity presents.

From Top to Bottom

Computing environments are subject to attacks on their integrity at all levels. Let's begin with systems and networks, which consist of a multitude of hardware components. Although many unauthorized modifications would result in system and/or network failure, some could produce changes in functionality and data. To cite only a few examples, a perpetrator could replace a hard drive with another that contains different executables and data, replace a motherboard with one that provides unintended functionality or make EPROM changes that cause a system to boot differently. "Vampire taps" can also be attached to network cabling, allowing unauthorized capture of network traffic. Because hardware integrity can be compromised in many ways, verifying it can be difficult and time-consuming.

At the operating system level, intruders frequently attempt to modify system binaries, because modified binaries can yield useful information, execute commands that escalate an attack and/or place the intruder in an environment in which it is easier to read data or change other system components. Binaries that are part of authentication processes are particularly attractive targets because they frequently require users to enter passwords and other authentication-related information. For example, perpetrators often modify the Unix login or in.telnetd programs, both of which prompt the user for a login name and password. Such modifications enable the intruder to steal passwords and other access information.

Other binaries in Unix systems are also frequent targets of modification attempts. The possibilities are seemingly endless, and the fact that so many system administrators do not regularly check the integrity of critical system binaries makes these files an easy target during an attack.

System configuration files are another frequent target of intruders. In Unix systems, /etc/passwd is a popular target because a few small changes to this file can quickly yield access to a root shell. The /etc/group file is also frequently targeted because group ownerships can allow read/write access to critical configuration files.
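The kind of change an intruder typically makes to /etc/passwd--adding or altering an entry so that it carries user ID 0--is easy to flag with a short script. The following Python sketch illustrates the idea; the whitelist of expected UID-0 accounts is an assumption to be adjusted per site:

    # Minimal sketch: flag /etc/passwd entries with UID 0, a common sign of
    # unauthorized modification. EXPECTED_UID0 is a site-specific assumption.
    EXPECTED_UID0 = {"root"}

    with open("/etc/passwd") as f:
        for line in f:
            fields = line.strip().split(":")
            if len(fields) >= 3 and fields[2] == "0":
                if fields[0] not in EXPECTED_UID0:
                    print(f"WARNING: unexpected UID-0 account: {fields[0]}")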

Intruders are increasingly targeting platforms other than Unix with unauthorized integrity changes to configuration and other files; the results can be catastrophic. The Windows NT Registry, for example, is becoming a more frequent target of attack because the registry holds critical information used for authentication and assignment of rights and abilities. In particular, the files that back the HKEY_LOCAL_MACHINE hive (which holds much of the system's critical configuration data) are more at risk than other Windows NT files. Changes to CONTROL.EXE can disrupt communications baud rates, and changes to SECEVENT.EVT and SECURITY.LOG can result in unauthorized modification of security audit data.
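Because registry keys record a last-write timestamp, an administrator can at least spot unexplained recent changes to sensitive keys. A minimal sketch using Python's standard winreg module (the key list is illustrative, not a vetted inventory of sensitive keys):

    # Sketch (Windows only): report when security-sensitive HKLM keys last
    # changed. The key list below is illustrative only.
    import winreg
    from datetime import datetime, timedelta

    KEYS = [r"SYSTEM\CurrentControlSet\Control\Lsa",
            r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"]

    for subkey in KEYS:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            _, _, last_write = winreg.QueryInfoKey(key)
            # last_write counts 100-ns intervals since January 1, 1601 (UTC)
            when = datetime(1601, 1, 1) + timedelta(microseconds=last_write / 10)
            print(f"HKLM\\{subkey} last modified {when:%Y-%m-%d %H:%M} UTC")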

From Broad to Narrow

Unauthorized changes to systems are by no means the only integrity-related security threats. Attacking systems one by one is inefficient; attackers are increasingly focusing their efforts on network infrastructures. Once they control a network infrastructure, the individual systems within it and the traffic passing through it are easy prey.

Network infrastructure attacks often focus on key network components, such as routers, firewalls, bridges and Domain Name System (DNS) servers. Unauthorized changes in routing rules, for example, can allow attackers to misdirect traffic to disrupt ongoing operations or to capture information contained in packets. Attackers target firewalls to change access control lists or to modify application proxies, eliminating access restrictions to hosts within the security perimeter the firewall enforces.

In the most elementary sense, data integrity refers to preservation of the exact sequence of bits in a data file. In the case of ASCII and binary flat files, integrity is simple to conceptualize, and changes are generally easy to detect. A large portion of data in today's computing systems, however, is not stored in flat files but in relational databases and bit maps with sophisticated formats. Verifying data integrity is, therefore, in many instances a complex and demanding endeavor.
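For a flat file, the check can be as direct as a byte-for-byte comparison against a trusted reference copy, such as one kept on read-only media. A minimal Python sketch (both paths are hypothetical):

    # Minimal sketch: verify that a file's exact bit sequence matches a
    # trusted reference copy. Paths are hypothetical.
    import filecmp

    if filecmp.cmp("/etc/inetd.conf", "/cdrom/baseline/inetd.conf", shallow=False):
        print("File matches the trusted copy bit for bit.")
    else:
        print("WARNING: file differs from the trusted copy.")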

Application integrity ensures that applications are free of unauthorized changes. Although the majority of computer crime cases reported over the years have not involved changes to specific applications, some cases have involved changes to financial applications that resulted in major financial fraud or disruption. For example, perpetrators have made very small changes to routines that compute and assign interest to customer accounts. The result, over a period of several years, was the diversion of large amounts of money to other accounts from which the perpetrators then made withdrawals.
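The arithmetic shows why such "salami" modifications pay off even though each individual diversion is too small to notice. A back-of-the-envelope Python sketch, with all figures hypothetical:

    # Hypothetical illustration of interest-diversion ("salami") arithmetic:
    # truncate each interest credit to the cent and divert the remainder.
    accounts = 1_000_000        # interest-bearing accounts
    postings_per_year = 12      # monthly interest postings
    avg_remainder = 0.005       # average diverted sub-cent remainder, in dollars

    diverted = accounts * postings_per_year * avg_remainder
    print(f"Diverted per year: ${diverted:,.2f}")   # $60,000.00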

Malicious code can also alter application integrity. The Microsoft Word Winword (or Concept) macro virus, for example, is normally sent from one user to another in an attached Microsoft Word file. When a user opens the file, this virus activates and (among its many actions) modifies Word's Save As routine, thereby corrupting this application.

Application integrity often receives the least emphasis of all types of integrity, yet in many respects it is potentially the most disruptive, because applications often span many different platforms. Worse yet, most currently available applications do not have built-in integrity checking capability.

Interface Integrity

A broad picture of the problem of integrity in information security also dictates examining how information about the origin of communications, processes and other elements is preserved without unauthorized change. Interface integrity refers to this problem of origin integrity in systems and networks. One example is user interface integrity: is the user who is trying to log in to a system or use network services really the user he or she claims to be? Host identity is another interface integrity problem. In IP spoofing, for example, a perpetrator fabricates packets that appear to originate from one host but actually originate from an entirely different host.

In many respects interface integrity is the most fundamental type of information systems integrity. Consider, for example, the importance of user authentication as a security control for access to systems. Users must establish their identity before being allowed access to a system. Authentication controls are thus in many respects a special type of integrity control, and audit capabilities can also be considered a type of integrity checking tool in that they enable system and security administrators to verify whether each user login is legitimate.

Establishing and maintaining interface integrity, especially in network environments, is often difficult. Changing host identity information so that a host assumes the network address of another host is trivial on systems such as PCs and Macintoshes. Other platforms, such as Unix, usually are configured to allow general access to critical host identity and service definition files (ifconfig, inetd, services and others) that for the most part should be modifiable only by root.
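Auditing the permissions on such files is easy to script. The following Python sketch flags critical files that are writable by group or other; the file list is illustrative, and actual paths vary by Unix flavor:

    # Minimal sketch: warn about critical files writable by non-owners.
    import os, stat

    CRITICAL = ["/etc/inetd.conf", "/etc/services", "/etc/hosts"]  # illustrative

    for path in CRITICAL:
        try:
            mode = os.stat(path).st_mode
        except FileNotFoundError:
            continue
        if mode & (stat.S_IWGRP | stat.S_IWOTH):
            print(f"WARNING: {path} is group- or world-writable "
                  f"(mode {stat.filemode(mode)})")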

A major problem in establishing and maintaining interface integrity in networks is the Internet Protocol (IP) itself. IP services generally require little more than host identity for authentication, yet masquerading as another host is relatively easy. The new generation of IP--IPv6--provides the capability to establish the integrity of host identity and other information contained in packets (thereby making attacks such as IP spoofing extremely difficult to accomplish); its adoption promises to improve interface integrity considerably.

Corporate Concerns

Awareness of integrity-related threats and the priorities assigned to dealing with integrity problems vary greatly among corporations. Most of industry appears to be more concerned with the threat of denial of service than with loss of integrity, but more concerned with loss of integrity than with loss of confidentiality. The strong concern about denial-of-service attacks may stem from downtime incidents in which major financial loss occurred because critical systems, such as billing systems, were unavailable, or from sensationalized media accounts of alleged extortion plots against banks based on the threat of denial of service.

Denial-of-service attacks can be devastating, but industry should focus more attention on the threat of loss of integrity than it currently does. Too often managers and business contingency planning experts overlook the cost of operating corrupted systems, running applications that have been modified without authorization and processing bad data. Also not to be overlooked when considering the threat of loss of integrity is the cost of restoring applications and data to the last known "state of goodness," as well as the potential for lawsuits, violation of law, loss of customers, jeopardy to human life and damage to reputation.

One particular sector within the commercial computing world--banking and securities--pays more attention to integrity-related threats, and controls them better, than most of the rest of industry, for several reasons. As stated previously, small integrity changes in financial transaction systems can result in major financial loss in addition to other catastrophic consequences. Laws and regulatory agencies have also motivated the banking and securities sector to adopt and observe better security practices. As a whole, this sector attempts to control integrity-related threats (and others) by establishing strong interface integrity, especially with respect to user and host identity; for this purpose, Kerberos and the Distributed Computing Environment (DCE) have been integrated into many banking and securities computing environments. This segment of industry also typically establishes a strong audit and oversight function over systems, applications and user access patterns. Other measures, such as mandatory system integrity checks by system administrators and user policies that lessen the likelihood of integrity compromise by users, often are part of a complete approach to the problem of maintaining integrity.

Some corporations in other arenas approach the problem of maintaining integrity as well as the banking and securities sector does. More common, however, is an approach in which integrity in certain applications and systems is tightly controlled but in most others is neglected. On the surface, this approach appears sound; the principle of business justification dictates that in business environments the cost of controls should not exceed the value of the assets to be protected. This principle is still appropriate for stand-alone environments, but it is questionable in most of today's networked environments, in which one weak link is likely to lead to the compromise of an entire network and all hosts therein. For example, a single sniffer installed on one host on a network segment in which integrity needs are ignored is likely to compromise other hosts within that segment. All applications, systems and network components need, at a minimum, a baseline of integrity controls that makes compromise of any one element difficult, so that no element becomes the weak link in an expanding series of successful break-ins and unauthorized uses of services.

Maintaining Integrity

The major problem with controlling the threat of loss of integrity is that integrity violations are potentially much more transitory than violations of most other security needs (such as unauthorized observation or possession of data). Any file on any system can, for example, be changed one moment and restored the next. Most integrity tools are not capable of detecting this type of change, and most users and system administrators are not likely to notice it, either. Worse yet, most reasonably effective and affordable tools examine system integrity only at a particular slice in time, then another, then another, to determine whether any changes have occurred between one point and the next. Perpetrators can therefore time their attacks around the integrity checking cycle.

Consider, for example, how Tripwire, one of the best tools for checking system integrity (available from the Computer Operations, Audit and Security Technology [COAST] Laboratory at Purdue University), can be used. Suppose that a system administrator runs Tripwire (which compares files with a known, previous state) on a system every Thursday afternoon. Although running Tripwire every week is a sound security practice, a perpetrator intent on compromising system integrity would be smart to change critical files on Thursday night, gain unauthorized access and capture critical information at will for six and one-half days, then restore the files at noon the following Thursday. The report generated by Tripwire that Thursday afternoon would, in this instance, indicate that all is in order, even though the perpetrator may have captured hundreds of passwords by installing a Trojan horse version of the login program and may also have temporarily added SUID-to-root scripts to allow backdoor access.
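The mechanics of snapshot-based checking--and its blind spot between snapshots--can be shown in a few lines. The following Python sketch is a simplified illustration in the style of Tripwire, not Tripwire itself; the baseline location and watched-file list are hypothetical:

    # Simplified snapshot-style integrity checker (illustration only).
    # Any change made and reversed *between* runs goes undetected.
    import hashlib, json, os

    BASELINE = "/var/lib/integrity/baseline.json"   # hypothetical; dir must exist
    WATCHED = ["/bin/login", "/usr/sbin/inetd"]     # illustrative file list

    def fingerprint(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    current = {p: fingerprint(p) for p in WATCHED if os.path.exists(p)}

    if os.path.exists(BASELINE):
        with open(BASELINE) as f:
            baseline = json.load(f)
        for path, digest in current.items():
            if baseline.get(path) != digest:
                print(f"WARNING: {path} has changed since the last check")
    else:
        with open(BASELINE, "w") as f:
            json.dump(current, f)
        print("Baseline recorded.")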

Tools such as Tripwire are nevertheless quite useful, but tools that could detect transitory changes between fixed points in time would be more effective at detecting unauthorized changes in integrity. Because such functionality is not yet available, integrity checking is often cumbersome to perform and manage.

In reality, many system administrators do too little to check the integrity of their systems, applications and data, and network administrators often neglect the integrity of key network components such as routers and firewalls. In most cases, companies that seek help investigating a security-related incident have few if any procedures and requirements for integrity checking in place. Regular integrity checking activities and the ability to repel or at least quickly detect and eradicate security-related incidents are closely related. Of course, system and network administrators often face the overwhelming job of having too much to administer in too little time. Still, commitment to perform some manageable level of basic, systematic integrity checking is an integral part of sound system and network administration practices.

Steps Toward Solutions

What, then, is a good approach to the problem of maintaining integrity? The first and most basic step is to create (or amend) an information security policy that prescribes regular integrity checking and delineates the appropriate responsibilities. Creating technology-specific security practices that provide detailed requirements and procedures for maintaining integrity on specific platforms is the next logical move.

Maintaining integrity requires appropriate technical knowledge and tools. One of the simplest ways to check for integrity is to visually inspect a system for obvious signs of change, such as an unexplained modification time on a file or the presence of a new, unfamiliar program in a temporary directory. In many incidents, casual observation of a small change to a file has been the first step in detecting a massive set of unauthorized integrity changes. Visual inspection is, however, too superficial to serve as the only approach to integrity maintenance, and it tends to be tedious and excessively time-consuming. Nevertheless, spending some time displaying files and obtaining listings to look for unexplained changes is worthwhile.

A better approach is to use the diff command in Unix systems to compare the current version of a program with the original installation version, or to run a checksum program such as the Unix sum program and compare the current checksums with those recorded the last time it was run. One problem with simple checksums, however, is that a clever perpetrator can make changes to a file that produce the same checksum as before. Although not perfect, cryptographic checksum ("crypto checksum") programs are superior in that they can detect subtle changes to files that simple checksum programs miss.
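A toy Python example makes the weakness concrete: a simple additive checksum cannot tell that two bytes have been transposed, whereas a cryptographic hash (SHA-256 here, purely for illustration) catches the change immediately:

    # Transposing two bytes fools an additive checksum but not a crypto hash.
    import hashlib

    original = b"pay $100 to account 1234"
    tampered = b"pay $100 to account 2134"   # two digits swapped

    print(sum(original) == sum(tampered))    # True: byte-sum checksum misses it
    print(hashlib.sha256(original).hexdigest() ==
          hashlib.sha256(tampered).hexdigest())   # False: SHA-256 catches it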

Public domain integrity checking tools that are reasonably effective are unfortunately scarce outside of the Unix arena; commercial data integrity checking tools (including tools that detect changes to firewalls and other specialized host machines) are in this case the only practical choice. Although intrusion detection tools are not normally viewed as integrity checking tools, viewing interface integrity as a legitimate type of integrity places these tools in this category. Furthermore, many of these tools examine suspicious changes in systems as possible indicators of an intrusion, which enhances their value to the integrity checking effort.

Integrity is more complex and diverse than might superficially be envisioned. As noted, it encompasses at least seven categories, ranging from hardware integrity to interface integrity. It is also in many respects the most fundamental of all security needs, in that loss of integrity often renders efforts to meet other needs moot.

It should go without saying that other security needs are important, too. An effort to establish and maintain integrity alone, at the expense of protecting against unauthorized possession or denial of service, is likely to lead to catastrophe in today's computing environments. The right approach balances integrity controls with controls designed to address other security needs. Finding that balance depends on business needs; some environments, such as financial computing environments, require a high degree of integrity, whereas others require less.

Establishing an integrity baseline throughout a corporate network is a key principle. Establishing an appropriate policy and a set of technical practices, as well as obtaining a suitable set of integrity-checking tools, are also essential. Finally, application integrity is too often neglected. To achieve more acceptable levels of application integrity, companies need to build integrity controls (including self-checks) directly into applications. Only through a rigorously planned and maintained program can an enterprise reasonably assume that it is doing enough to protect the integrity of its IT assets.

E. Eugene Schultz, Ph.D., is program manager for information security at SRI Consulting in Menlo Park, CA. He can be reached at gene_schultz@qm.sri.com.