Critical Control 6: Application Software Security

How do attackers exploit the absence of this control?

Attacks against vulnerabilities in web-based and other application software have been a top priority for criminal organizations in recent years. Application software that does not properly check the size of user input, fails to sanitize user input by filtering out potentially malicious character sequences, or does not initialize and clear variables properly can be vulnerable to remote compromise. Attackers exploit such flaws through buffer overflows, SQL injection, cross-site scripting, cross-site request forgery, and clickjacking attacks to gain control over vulnerable machines. In one SQL injection campaign, more than 1 million web servers were compromised and turned into infection engines for their visitors: trusted websites belonging to state governments and other organizations were used to infect hundreds of thousands of browsers that accessed them. Many more web and non-web application vulnerabilities are discovered on a regular basis.
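
SQL injection, the flaw behind the attack described above, is straightforward to demonstrate. The following Python sketch (using the standard sqlite3 module and a hypothetical users table) contrasts a query built by string concatenation with a parameterized query that treats input strictly as data:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "' OR '1'='1"  # attacker-controlled value

    # VULNERABLE: the input is spliced directly into the SQL statement,
    # so the quote characters above rewrite the query's logic.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
    print(rows)  # returns every row, not just the matching user

    # SAFE: a parameterized query treats the input strictly as data.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)  # returns nothing; no user is literally named "' OR '1'='1"
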
To avoid such attacks, both internally developed and third-party application software must be carefully tested to find security flaws. For third-party application software, enterprises should verify that vendors have conducted detailed security testing of their products. For in-house developed applications, enterprises must conduct such testing themselves or engage an outside firm to conduct it.

How to Implement, Automate, and Measure the Effectiveness of this Control

  1. Quick wins: Organizations should protect web applications by deploying web application firewalls (WAFs) that inspect all traffic flowing to the web application for common web application attacks, including but not limited to cross-site scripting, SQL injection, command injection, and directory traversal; a sample rule sketch appears after this list. For applications that are not web-based, application-specific firewalls should be deployed if such tools are available for the given application type. If the traffic is encrypted, the device should either sit behind the encryption or be capable of decrypting the traffic prior to analysis. If neither option is appropriate, a host-based web application firewall should be deployed.
  2. Visibility/Attribution: At a minimum, explicit error checking should be performed on all input. Whenever a variable is created in source code, its size and type should be determined. Whenever input is provided by the user, it should be verified that the input does not exceed the size, and matches the data type, of the memory location in which it is stored or to which it is later moved (see the validation sketch after this list).
  3. Configuration/Hygiene: Organizations should test in-house-developed and third-party-procured web and other application software for coding errors and malware insertion, including backdoors, prior to deployment, using automated static code analysis software (an example deployment gate appears after this list). If source code is not available, these organizations should test compiled code using static binary analysis tools. In particular, input validation and output encoding routines of application software should be carefully reviewed and tested.
  4. Configuration/Hygiene: Organizations should test in-house-developed and third-party-procured web applications for common security weaknesses using automated remote web application scanners prior to deployment, whenever updates are made to the application, and on a regular recurring basis.
  5. Configuration/Hygiene: For applications that rely on a database, organizations should conduct a configuration review of both the operating system housing the database and the database software itself, checking settings to ensure that the database system has been hardened using standard hardening templates.
  6. Configuration/Hygiene: Organizations should verify that security considerations are taken into account throughout the requirement, design, implementation, testing, and other phases of the software development life cycle of all applications.
  7. Configuration/Hygiene: Organizations should ensure that all software development personnel receive training in writing secure code for their specific development environment.
  8. Configuration/Hygiene: Organizations should require that all in-house-developed software include white-list filtering capabilities for all data input and output associated with the system. These white lists should be configured to allow in or out only the types of data needed by the system, blocking all other forms of data (a minimal filtering sketch appears after this list).
  9. Configuration/Hygiene: Sample scripts, libraries, components, compilers, or any other unnecessary code that is not being used by an application should be uninstalled or removed from the system.
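
For item 1, the following is a minimal sketch of WAF rules in the native rule language of ModSecurity (one of the products named later in this control). It assumes ModSecurity 2.7 or later, whose libinjection-based @detectSQLi and @detectXSS operators are documented; the rule IDs are arbitrary, and a production policy would be far more extensive:

    SecRuleEngine On
    # Inspect all request arguments for SQL injection patterns (libinjection).
    SecRule ARGS "@detectSQLi" \
        "id:100001,phase:2,t:urlDecodeUni,deny,status:403,log,msg:'SQL injection attempt'"
    # Inspect all request arguments for cross-site scripting patterns.
    SecRule ARGS "@detectXSS" \
        "id:100002,phase:2,t:urlDecodeUni,deny,status:403,log,msg:'XSS attempt'"
    # Block obvious directory traversal sequences in the request URI.
    SecRule REQUEST_URI "@contains ../" \
        "id:100003,phase:1,t:urlDecodeUni,deny,status:403,log,msg:'Directory traversal attempt'"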
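
For item 2, a minimal Python sketch of explicit input checking; the field names and the size limit are hypothetical and would be matched to the actual storage locations:

    MAX_USERNAME_LEN = 32  # hypothetical limit matching the storage field

    def read_username(raw: str) -> str:
        # Verify type and size before the value is accepted anywhere else.
        if not isinstance(raw, str):
            raise TypeError("username must be a string")
        if len(raw) > MAX_USERNAME_LEN:
            raise ValueError(f"username exceeds {MAX_USERNAME_LEN} characters")
        return raw

    def read_port(raw: str) -> int:
        try:
            port = int(raw)  # reject input that is not an integer at all
        except ValueError:
            raise ValueError("port must be an integer") from None
        if not 1 <= port <= 65535:  # reject values outside the valid range
            raise ValueError("port must be between 1 and 65535")
        return port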
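
For item 3, one way to automate static analysis is to gate deployment on the scanner's findings. The sketch below uses Bandit, an open-source static analyzer for Python code bases, as one example tool; the source path and the fail-on-high-severity policy are assumptions, not requirements of this control:

    import json
    import subprocess
    import sys

    # Run Bandit recursively over the source tree and capture JSON output;
    # "bandit -r <path> -f json" is Bandit's documented invocation.
    result = subprocess.run(
        ["bandit", "-r", "src/", "-f", "json"],
        capture_output=True, text=True)

    findings = json.loads(result.stdout)["results"]
    high = [f for f in findings if f["issue_severity"] == "HIGH"]

    for f in high:
        print(f'{f["filename"]}:{f["line_number"]}: {f["issue_text"]}')

    # Fail the build if any high-severity weakness was reported.
    sys.exit(1 if high else 0)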
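
For item 8, white-list filtering can be as simple as dropping or rejecting anything that does not match an explicitly allowed pattern. A minimal Python sketch with hypothetical field patterns:

    import re

    # Allow-lists: only fields named and shaped like this may pass in or out.
    # These patterns are hypothetical and must be tailored per system.
    ALLOWED_FIELDS = {
        "invoice_id": re.compile(r"INV-\d{6}"),
        "country":    re.compile(r"[A-Z]{2}"),   # ISO 3166 alpha-2 style
        "quantity":   re.compile(r"\d{1,4}"),
    }

    def filter_record(record: dict) -> dict:
        """Return only the fields the allow-list permits."""
        clean = {}
        for name, value in record.items():
            pattern = ALLOWED_FIELDS.get(name)
            if pattern is None:
                continue  # field not on the allow-list: drop it
            if not pattern.fullmatch(str(value)):
                raise ValueError(f"{name!r} does not match its allowed pattern")
            clean[name] = value
        return clean

    # The unlisted "debug" field is silently dropped.
    print(filter_record({"invoice_id": "INV-004217", "country": "US", "debug": "1"}))
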
Associated NIST Special Publication 800-53, Revision 3, Priority 1 Controls
CM-7, RA-5 (a, 1), SA-3, SA-4 (3), SA-8, SI-3, SI-10
Associated NSA Manageable Network Plan Milestones and Network Security Tasks
Milestone 3: Network Architecture
Milestone 7: Baseline Management
Security Gateways, Proxies, and Firewalls

Procedures and Tools to Implement and Automate this Control

Source code testing tools, web application security scanning tools, and object code testing tools have proven useful in securing application software, along with manual application security penetration testing by testers who have extensive programming knowledge and application penetration testing expertise. The Common Weakness Enumeration (CWE) initiative is used by many such tools to identify the weaknesses that they find. Organizations can also use CWE to determine which types of weaknesses they are most interested in addressing and removing. The "Top 25 Most Dangerous Programming Errors," a broad community effort published free online by Mitre and the SANS Institute, is also available as a minimum set of important issues to investigate and address during the application development process. When evaluating the effectiveness of testing for these weaknesses, Mitre's Common Attack Pattern Enumeration and Classification (CAPEC) can be used to organize and record the breadth of the testing for the CWEs and to enable testers to think like attackers in their development of test cases.

Control 6 Metric:

The system must be capable of detecting and blocking an application-level software attack attempt, and must generate an alert or send e-mail to enterprise administrative personnel within 24 hours of detection and blocking.
All Internet-accessible web applications must be scanned on a weekly or daily basis, alerting or sending e-mail to administrative personnel within 24 hours of completing a scan. If a scan cannot be completed successfully, the system must alert or send e-mail to administrative personnel within one hour indicating that the scan has not completed successfully. Every 24 hours after that point, the system must alert or send e-mail about the status of uncompleted scans, until normal scanning resumes.
Additionally, all high-risk vulnerabilities in Internet-accessible web applications identified by web application vulnerability scanners, static analysis tools, and automated database configuration review tools must be mitigated (by either fixing the flaw or implementing a compensating control) within 15 days of discovery of the flaw.
While the 24-hour and one-hour timeframes represent the current metric to help organizations improve their state of security, in the future organizations should strive for even more rapid alerting, with notification about an application attack attempt sent within two minutes.
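
As one illustration of how the alerting requirements above might be automated, the following Python sketch e-mails administrative personnel when a scan completes or fails to complete. The SMTP host, the addresses, and the scheduler hook that calls it are all hypothetical:

    import smtplib
    from email.message import EmailMessage

    # Hypothetical endpoints; substitute the organization's real values.
    SMTP_HOST = "mail.example.org"
    ADMIN_ADDR = "security-admins@example.org"

    def notify_admins(subject: str, body: str) -> None:
        msg = EmailMessage()
        msg["From"] = "scanner-alerts@example.org"
        msg["To"] = ADMIN_ADDR
        msg["Subject"] = subject
        msg.set_content(body)
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)

    # Assumed hook: called by the scan scheduler when a scan ends or times out.
    def on_scan_result(target: str, completed: bool) -> None:
        if completed:
            notify_admins(f"Scan completed: {target}",
                          "The scheduled web application scan finished normally.")
        else:
            # The metric requires this alert within one hour of the failure.
            notify_admins(f"Scan FAILED to complete: {target}",
                          "The scheduled scan did not finish; investigate and rerun.")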

Control 6 Test:

To evaluate the implementation of Control 6 on a monthly basis, an evaluation team must use a web application vulnerability scanner to test for each type of flaw identified in the regularly updated "Top 25 Most Dangerous Programming Errors" list from Mitre and the SANS Institute. The scanner must be configured to assess all of the organization's Internet-accessible web applications to identify such errors. The evaluation team must verify that the scan is detected within 24 hours and that an alert is generated.
In addition to the web application vulnerability scanner, the evaluation team must also run static code analysis tools and database configuration review tools against Internet-accessible applications to identify security flaws on a monthly basis.
The evaluation team must verify that all high-risk vulnerabilities identified by the automated vulnerability scanning tools or static code analysis tools have been remediated or addressed through a compensating control (such as a web application firewall) within 15 days of discovery.
The evaluation team must verify that application vulnerability scanning tools have successfully completed their regular scans for the previous 30 cycles of scanning by reviewing archived alerts and reports (a verification sketch follows). For any scan that did not complete successfully in that period, the evaluation team must verify that the system generated an alert or e-mail to enterprise administrative personnel indicating that the scan did not finish.
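
A hedged Python sketch of the 30-cycle verification: it assumes each scan writes a JSON report containing a status field into an archive directory, which is an assumption about the organization's reporting format rather than a standard:

    import json
    from pathlib import Path

    ARCHIVE = Path("/var/log/appscans")  # hypothetical archive location

    # Examine the 30 most recent archived reports (assumes lexicographically
    # sortable timestamps in the file names) and flag incomplete scans.
    reports = sorted(ARCHIVE.glob("scan-*.json"))[-30:]
    incomplete = []
    for report in reports:
        data = json.loads(report.read_text())
        if data.get("status") != "completed":  # assumed report field
            incomplete.append(report.name)

    if incomplete:
        print("Scans that did not complete:", ", ".join(incomplete))
    else:
        print("All of the last 30 scan cycles completed successfully.")
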
Control 6 Sensors, Measurement, and Scoring
Sensor: Web Application Firewall (WAF)
Measurement: Verify that a WAF is installed between applications and users. Products such as F5 Application Security Manager, ModSecurity, Art of Defence Hyperguard, and Trustwave WebDefend are recommended.
Score: An automated tool/process verifies that the WAF is installed and functioning (50 points), that its configuration covers the OWASP Top 10 (20 points), and that it defends against the Top 25 programming errors (30 points).
Sensor: Web application firewall
Measurement: The central logging tool shows evidence that logs are being collected from the WAF.
Score: An automated tool/process periodically verifies that the WAF is generating logs into the security event manager or a similar system: 100 points. Failure to identify log entries scores 0 points.
Sensor: Vulnerability/Configuration testing tools are running and reporting automatically. (Critical Control 10)
Measurement: Verify that the configuration and target lists of the vulnerability management tools used to satisfy Critical Control 10 are appropriately set to monitor configuration issues in both applications and the operating systems hosting them.
Score: An automated tool verifies that the configuration and target list for Critical Control 10 includes application servers: 100 points. Failure to include application servers scores 0 points.
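
The scoring rules for the WAF sensor lend themselves to simple automation. In the Python sketch below, the results of the three checks are passed in as booleans (how each check is probed is organization-specific), and the 20- and 30-point items are awarded only once the WAF is confirmed installed, which is one reasonable reading of the scoring table:

    def score_waf_sensor(installed_and_functioning: bool,
                         covers_owasp_top10: bool,
                         defends_top25_errors: bool) -> int:
        """Translate the WAF checks into the point values listed above."""
        if not installed_and_functioning:
            return 0
        score = 50  # WAF is installed and functioning
        if covers_owasp_top10:
            score += 20  # configuration covers the OWASP Top 10
        if defends_top25_errors:
            score += 30  # defends against the Top 25 programming errors
        return score

    # Example: installed and covering the OWASP Top 10, but not the Top 25.
    print(score_waf_sensor(True, True, False))  # 70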