Critical Control 15: Controlled Access Based on the Need to Know

How do attackers exploit the absence of this control?

Some organizations do not carefully identify and separate their most sensitive data from less sensitive, publicly available information on their internal networks. In many environments, internal users have access to all or most of the information on the network. Once attackers have penetrated such a network, they can easily find and exfiltrate important information with little resistance. In several high-profile breaches over the past two years, attackers were able to gain access to sensitive data because it was stored on the same servers, with the same level of access control, as far less important data.

How to Implement, Automate, and Measure the Effectiveness of this Control

  1. Quick wins: Organizations should establish a multi-level data identification/classification scheme (e.g., a three- or four-tiered scheme with data separated into categories based on the impact of their exposure).
  2. Quick wins: Organizations should ensure that file shares have defined controls (such as Windows share access control lists) that specify at least that only "authenticated users" can access the share (a simple audit sketch appears after this list).
  3. Visibility/Attribution: Organizations should enforce detailed audit logging for access to nonpublic data and special authentication for sensitive data.
  4. Configuration/Hygiene: The network should be segmented based on the trust levels of the information stored on the servers. Whenever information flows over a network with a lower trust level, it should be encrypted.
  5. Configuration/Hygiene: The use of portable USB drives should be limited, or data should be encrypted automatically before it is written to a portable drive.
  6. Advanced: Host-based data loss prevention (DLP) should be used to enforce ACLs even when data is copied off a server. In most organizations, access to data is controlled by ACLs implemented on the server; once the data have been copied to a desktop system, those ACLs are no longer enforced, and users can send the data to whomever they want.
  7. Advanced: Deploy honeytokens on key servers to identify users who might be trying to access information that they should not access.
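
To make quick win 2 concrete, the following is a minimal audit sketch in Python, assuming a Windows host where the built-in net share and icacls utilities are available. Note that icacls reports the NTFS permissions on the shared folder rather than the share-level permissions, and the output parsing below is simplified and not tested against localized Windows builds; treat it as a starting point, not a complete auditor.

    # audit_shares.py -- flag local Windows shares whose folder ACLs grant
    # access to "Everyone" or anonymous users instead of authenticated users.
    import re
    import subprocess

    OVERLY_BROAD = ("Everyone", "ANONYMOUS LOGON", "Guests")

    def list_share_paths():
        """Parse `net share` output into (share_name, path) pairs."""
        out = subprocess.run(["net", "share"],
                             capture_output=True, text=True).stdout
        shares = []
        for line in out.splitlines():
            # Match lines like "HR  C:\Shares\HR" (simplified parsing).
            m = re.match(r"(\S+)\s+([A-Za-z]:\\\S*)", line)
            if m:
                shares.append((m.group(1), m.group(2)))
        return shares

    def flag_broad_acl_entries(path):
        """Return icacls entries on `path` naming an overly broad principal."""
        out = subprocess.run(["icacls", path],
                             capture_output=True, text=True).stdout
        return [line.strip() for line in out.splitlines()
                if any(p in line for p in OVERLY_BROAD)]

    if __name__ == "__main__":
        for name, path in list_share_paths():
            for entry in flag_broad_acl_entries(path):
                print(f"WARNING: share {name} ({path}): {entry}")
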
Associated NIST Special Publication 800-53, Revision 3, Priority 1 Controls
AC-1, AC-2 (b, c), AC-3 (4), AC-4, AC-6, MP-3, RA-2 (a)
Associated NSA Manageable Network Plan Milestones and Network Security Tasks
Milestone 3: Network Architecture
Milestone 5: User Access

Procedures and Tools to Implement and Automate this Control

It is important that an organization understand what its sensitive information is, where it resides, and who needs access to it. To derive sensitivity levels, organizations need to put together a list of the key types of data they hold and the overall importance of each to the organization. This analysis is then used to create an overall data classification scheme for the organization. At a base level, a data classification scheme is broken down into two levels: public (unclassified) and private (classified). Once the private information has been identified, it can be further subdivided based on the impact its compromise would have on the organization.
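
As an illustration only, a four-tier scheme of the kind described above might be encoded as follows; the tier names and handling rules are hypothetical and should be replaced with the organization's own categories.

    # Sketch of a four-tier data classification scheme. Tier names and
    # handling requirements are illustrative, not prescriptive.
    from enum import IntEnum

    class Classification(IntEnum):
        PUBLIC = 0        # no impact if exposed
        INTERNAL = 1      # minor impact (e.g., org charts, internal memos)
        CONFIDENTIAL = 2  # serious impact (e.g., customer records)
        RESTRICTED = 3    # severe impact (e.g., trade secrets, credentials)

    # Handling requirements keyed by tier; higher tiers get stricter rules.
    HANDLING = {
        Classification.PUBLIC:       {"encrypt_in_transit": False, "audit_access": False},
        Classification.INTERNAL:     {"encrypt_in_transit": False, "audit_access": True},
        Classification.CONFIDENTIAL: {"encrypt_in_transit": True,  "audit_access": True},
        Classification.RESTRICTED:   {"encrypt_in_transit": True,  "audit_access": True},
    }
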
Once the sensitivity of the data has been identified, it needs to be traced back to business applications and the physical servers that house those applications. The network then needs to be segmented so that systems of the same sensitivity level are on the same network and segmented from systems of different trust levels. If possible, firewalls need to control access to each segment. If data are flowing over a network of a lower trust level, encryption should be used.
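
A minimal sketch of the encryption requirement, using only Python's standard library: whenever data must cross a lower-trust segment, the connection is wrapped in TLS. The host name, port, and CA bundle path are placeholders for environment-specific values.

    # Encrypt data in transit when it must cross a lower-trust segment.
    import socket
    import ssl

    def send_over_lower_trust_network(host: str, port: int, payload: bytes) -> None:
        # Validate the server certificate against the organization's internal CA.
        context = ssl.create_default_context(cafile="internal-ca.pem")  # placeholder path
        with socket.create_connection((host, port)) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall(payload)

    # Example (hypothetical host and port):
    # send_over_lower_trust_network("fileserver.internal.example", 8443, b"...")
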
Job requirements should be created for each user group to determine what information the group needs access to in order to perform its duties. Based on these requirements, access should be given only to the segments or servers that are needed for each job function. Detailed logging should be turned on for all servers so that access can be tracked and cases of someone accessing data they should not be accessing can be investigated.
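
The following sketch shows one way to encode such job requirements as a role-to-share mapping with an audit trail; the roles, share paths, and log destination are all hypothetical.

    # Need-to-know check: each role is granted only the shares its job
    # function requires, and every decision is logged for later review.
    import logging

    logging.basicConfig(filename="access-review.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    # Hypothetical role-to-share mapping derived from the job requirements.
    ROLE_SHARES = {
        "hr":       {r"\\fs1\hr"},
        "finance":  {r"\\fs1\finance"},
        "engineer": {r"\\fs2\source", r"\\fs2\builds"},
    }

    def check_and_log(user: str, role: str, share: str) -> bool:
        """Permit access only if the role's job function requires the share."""
        allowed = share in ROLE_SHARES.get(role, set())
        logging.info("user=%s role=%s share=%s allowed=%s",
                     user, role, share, allowed)
        return allowed
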

Control 15 Metric:

The system must be capable of detecting all attempts by users to access files on local systems or network-accessible file shares without the appropriate privileges, and it must generate an alert or e-mail for administrative personnel within 24 hours. While the 24-hour timeframe represents the current metric to help organizations improve their state of security, in the future organizations should strive for even more rapid alerting, with notification about unauthorized access attempts sent within two minutes.
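
One possible shape for the alerting half of this metric is sketched below: scan an access log for denial events and notify administrators. The log path, match pattern, and mail settings are placeholders for whatever log source and relay the environment actually uses.

    # Scan an access log for denial events and e-mail administrators.
    import re
    import smtplib
    from email.message import EmailMessage

    LOG_PATH = "/var/log/file-access.log"                    # placeholder
    PATTERN = re.compile(r"access denied|EACCES", re.IGNORECASE)

    def collect_denials():
        with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
            return [line.rstrip() for line in f if PATTERN.search(line)]

    def alert_admins(denials):
        if not denials:
            return
        msg = EmailMessage()
        msg["Subject"] = f"{len(denials)} unauthorized access attempt(s) detected"
        msg["From"] = "alerts@example.org"                   # placeholder
        msg["To"] = "security-team@example.org"              # placeholder
        msg.set_content("\n".join(denials))
        with smtplib.SMTP("mail.example.org") as smtp:       # placeholder relay
            smtp.send_message(msg)

    if __name__ == "__main__":
        alert_admins(collect_denials())
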

Control 15 Test:

To evaluate the implementation of Control 15 on a periodic basis, the evaluation team must create two test accounts each on 10 representative systems in the enterprise: five server machines and five client systems. For each system evaluated, one account must have limited privileges, while the other must have privileges necessary to create files on the systems. The evaluation team must then verify that the nonprivileged account is unable to access the files created for the other account on the system. The team must also verify that an alert or e-mail is generated within 24 hours of the attempted, unsuccessful access. At the completion of the test, these accounts must be removed.
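
The core access check of this test can be scripted. The sketch below, run as the non-privileged test account, treats a PermissionError as the expected (passing) result; the target path is a placeholder.

    # Run as the non-privileged test account: confirm the privileged
    # account's test file cannot be read.
    def verify_access_denied(path: str) -> bool:
        """True if reading `path` is refused (the expected, passing result)."""
        try:
            with open(path, "rb"):
                return False   # read succeeded: the control failed
        except PermissionError:
            return True        # access denied, as expected

    if __name__ == "__main__":
        target = r"\\testserver\share\privileged-test-file.txt"  # placeholder
        print("PASS" if verify_access_denied(target)
              else "FAIL: file was readable")
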
Control 15 Sensors, Measurement, and Scoring
Sensor: Access control lists
Measurement: Verify that ACLs are properly configured on all file shares. Additionally, verify that the ACLs restrict access to groups with the appropriate need to know. Tools such as ShareEnum or SoftPerfect's Network Scanner can be used to identify shares and extract ACLs automatically.
Score: 100 points. An automated tool scans for file shares and reports ACLs that are inappropriately configured; anonymous or public shares within a controlled environment result in a score of zero.
Sensor: Data loss prevention
Measurement: Tools such as McAfee Host DLP are deployed on all systems containing or having access to controlled data.
Score: Automated tool verifies that DLP software is installed and functioning.
Sensor: Honeytokens
Measurement: Honeytokens are powerful for the early detection of intentional and accidental data leakage. Honeytokens should be deployed on all sensitive data stores. The identity of the honeytokens must be kept closely guarded.
Score: Automated tool periodically verifies the existence of honeytokens on all file shares.
Sensor: Intrusion Detection Systems/Data loss prevention with honeytokens
Measurement: For honeytokens to be effective, the network must be monitored for their transmission.
Score: Automated tool verifies that IDS and DLP configurations are capable of detecting transmission or access to honeytokens.
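
To illustrate how the honeytoken sensors above might fit together, the following sketch plants a decoy file carrying a unique marker and emits a matching Snort-style IDS rule that fires if the marker is ever seen on the wire. The decoy filename, share path, and rule SID are all illustrative.

    # Plant a honeytoken on a sensitive share and generate an IDS rule
    # that alerts on the token's unique marker crossing the network.
    import uuid
    from pathlib import Path

    def plant_honeytoken(share_dir: str) -> str:
        marker = f"HTOK-{uuid.uuid4().hex}"
        # An enticing decoy name increases the chance a snooper opens it.
        decoy = Path(share_dir) / "Q3-salary-review.txt"
        decoy.write_text(f"Internal draft {marker}\n")
        return marker

    def snort_rule(marker: str, sid: int = 1000015) -> str:
        return (f'alert tcp any any -> any any '
                f'(msg:"Honeytoken {marker} observed on the wire"; '
                f'content:"{marker}"; sid:{sid}; rev:1;)')

    if __name__ == "__main__":
        m = plant_honeytoken(r"\\fs1\hr")   # placeholder share path
        print(snort_rule(m))                # load into the IDS ruleset
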