
Secure Programming Training Notes

April 12, 2013

 

Introduction

Managing Project Risk

We manage these risks by focusing on some key areas during development:

  • Providing the wrong functionality and/or performance levels (Requirements)
  • Providing high levels of defects (Quality)
  • Being late delivering on our commitments (Schedule)
  • Exceeding our budgets (Cost)

Traditionally, if we deliver on the Requirements, Quality, Schedule, and Cost, the project is generally considered a technical success.

Managing the New Risks

We manage these security-related risks by focusing on some key things during development:

  • Information will be stolen (Confidentiality)
  • Information will be tampered with –destroyed or altered (Integrity)
  • Services will be denied or unavailable to authorized users (Availability)
  • Services will be provided to unauthorized users (Authentication)
  • System history will not accurately reflect what has happened (Non-repudiation)
  • Products will not meet legislative or regional requirements (Compliance)

Security areas of focus are often referred to with the acronym CIA or CIA++

AAA Model

Authentication (includes Integrity),

Authorization (Access Control),

Auditing (Accountability)

 

Vulnerabilities are flaws or weaknesses in the system/software.

Common Vulnerabilities

  • Unvalidated Input
  • Broken Access Control
  • Buffer Overflows
  • Improper Error Handling

Potential Countermeasures

  • Validate Input and Output
  • Fail Securely (Closed)
  • Keep it Simple
  • Use and Reuse Trusted Components
  • Defense in Depth
  • Least Privilege: provide only the privileges absolutely required
  • Compartmentalization (Separation of Privileges)
  • No homegrown encryption algorithms
  • Encryption of all communication must be possible
  • No transmission of passwords in plain text
  • Secure default configuration
  • Secure delivery
  • No back doors
  • Security By Obscurity Won’t Work

The application of countermeasures is driven by a variety of factors including risk levels, architectural decisions, engineering tradeoffs, etc.

Source: Open Web Application Security Project; www.owasp.org

Avoiding Buffer Overflows

Attacks are not limited to user input – any input may be exploited: multimedia files, network-level packets, RPC calls, etc.

Stack Overflows: the stack holds function parameters, local variables, and the return address that tells the program where to resume after a function completes its execution.

Typical Steps in Buffer Overflow Attacks

  1. Find a vulnerable piece of code

This is a time-consuming process that involves feeding any and all input parameters with a variety of malicious or garbage payloads.

With the source code available, vulnerabilities are easier to find.

  2. Find the number of bytes needed to overwrite or corrupt the return pointer

Incrementally add characters until a crash occurs, then analyze the crash characteristics for tell-tale signs of return address corruption.

  3. Create or obtain code to execute.

  4. Inject the code into the application (either into the same buffer, or another buffer in the application) and determine the address needed to trigger execution.

  5. Use that address to overwrite the return pointer via the buffer overflow vulnerability.
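The kind of sink the steps above exploit is typically an unbounded copy into a fixed-size stack buffer. A minimal sketch (function names are hypothetical, not from the original notes) of the vulnerable pattern and a bounded replacement that checks the length explicitly:

```c
#include <string.h>

/* VULNERABLE: no length check, so an oversized src overwrites
   whatever follows dst on the stack (including the return pointer). */
void vulnerable_copy(char *dst, const char *src) {
    strcpy(dst, src);
}

/* Bounded replacement: verify the input fits before copying. */
int bounded_copy(char *dst, size_t dstsz, const char *src) {
    if (strlen(src) >= dstsz)
        return -1;        /* input would not fit: refuse */
    strcpy(dst, src);     /* safe: length verified above */
    return 0;
}
```

The bounded version fails closed: an oversized input is rejected outright rather than truncated silently.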

A Buffer Overflow Attack


 

Malloc: if a size calculation overflows the integer type (e.g., wraps past 32 bits), malloc() may be passed 0 as the size. Note: malloc() may or may not fail when passed a size of 0, depending on the implementation.
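A sketch of guarding an allocation against this wraparound (the function name is hypothetical): if `count * elem_size` overflows `size_t`, the product wraps and malloc() receives a tiny or zero size, so the check must happen before the multiplication.

```c
#include <stdint.h>
#include <stdlib.h>

/* Allocate count elements of elem_size bytes, refusing any
   request whose total size would wrap around size_t. */
void *safe_array_alloc(size_t count, size_t elem_size) {
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return NULL;                  /* would overflow: refuse */
    return malloc(count * elem_size); /* product cannot wrap here */
}
```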

 


Format String Vulnerabilities

Two common Format String Vulnerabilities:

  1. The format string is omitted in the function call (i.e., printf(foo) rather than printf("%s", foo))

This may inadvertently grant attackers the ability to inject format strings of their own construction, potentially allowing arbitrary values to be read from and/or written to memory.

  2. There is a mismatch between the number of parameters identified in the format string and the number of parameters provided in the remainder of the argument list

Without the expected parameters, the format specifiers may cause the function to roam the stack in unexpected ways.

A Sample Format String Attack

Send a long series of %s specifiers (read non-existing arguments from the stack) until a protected memory address is accessed, potentially crashing the software.

Other Format String Attacks

The %n specifier can be used to store, into a non-existent argument, the number of bytes written to that point.

If the number of bytes is “stretched” (i.e. using %##u) using a specifier that allows a length to be defined, almost any custom value can be written into memory.

By walking the stack with %08x’s or %s, the target address can be reached.

Viewing values on the stack can be accomplished with %d (integers), %x (hexadecimals), and %u (unsigned integers)

Maintain Control (of the Format Strings)

Never letting users provide format strings is a critical factor in the mitigation of this class of vulnerability.

This includes any sort of formatted function: Screen Output, Logging Facilities, File Records, Formatted Input (i.e. via scanf()), etc.

Also recommended: Constant format strings.
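A minimal sketch of the constant-format-string rule (the helper name is hypothetical): user data always travels through a `%s` slot, never as the format string itself.

```c
#include <stdio.h>
#include <string.h>

/* Format a log line safely. Passing user_msg directly as the
   format string (e.g., snprintf(out, outsz, user_msg)) would let
   %s/%n specifiers inside it drive the formatter; pinning the
   format to a constant makes user_msg pure data. */
void log_message(char *out, size_t outsz, const char *user_msg) {
    snprintf(out, outsz, "LOG: %s", user_msg);
}
```

Even a message full of specifiers comes out verbatim, because the formatter never interprets it.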

Countermeasure

  • Check the lengths explicitly (source_size < destination_size)
  • Use a safer language. Alternatively, consider “safer” compiler variants like Safe-C, Vault, CCured or Cyclone
  • Avoid Unsafe Functions

General C/C++ functions: strcpy() and strcat(), sprintf() and vsprintf(), gets(), scanf(), fscanf(), sscanf(), vscanf(), vsscanf(), vfscanf()

Older snprintf() implementations; also: streadd(), strecpy(), strtrns()

Microsoft library functions: wcscpy() and wcscat(), _tcscpy() and _tcscat(), _mbscpy() and _mbscat(), CopyMemory()

  • Use Safer Functions

strncpy() and strncat(): WARNING: you are still responsible for the 'length' parameter check AND the terminating null character, and you must be cautious about off-by-one errors.

strlcpy() and strlcat() in OpenBSD: these functions are "heavier" than their simpler cousins.
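The strncpy() warning above deserves an example: when the source fills the buffer, strncpy() does not null-terminate, so the caller must do it. A sketch of the common wrapper (the name is hypothetical):

```c
#include <string.h>

/* Copy src into dst, truncating if needed, and ALWAYS
   null-terminate. strncpy() alone leaves dst unterminated
   whenever src has dstsz or more characters. */
void safe_copy(char *dst, size_t dstsz, const char *src) {
    strncpy(dst, src, dstsz - 1);  /* leave room for the '\0' */
    dst[dstsz - 1] = '\0';         /* terminate explicitly */
}
```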

  • Use Safer Libraries

C:

Safe String Library: http://www.zork.org/safestr/

Libmib Allocated String Functions: http://www.mibsofware.com/libmib/astring/

Libsafe: http://www.research.avayalabs.com/project/libsafe/

C++:

The std::string class (Built-in)

STL (Standard Template Library)

Rogue Wave’s Standard C++ Library

  • Use Canaries

The StackGuard system introduced "canary" values ahead of return pointers. If the canary is changed, StackGuard detects the change and stops the program, rather than executing potentially malicious code. (The Microsoft compiler offers canaries via the /GS flag.)

  • Use Static Analysis Tools

Static analysis tools such as Flawfinder, RATS, and ITS4, as well as larger commercial tools from Fortify Software, Secure Code, and Klocwork, can automatically flag many of these unsafe patterns.

  • An Ineffective Countermeasure: No-Exec Stacks

Keeping Sensitive Data from Prying Eyes

The very act of “burying” or attempting to hide data can tip off attackers, making that information a prime target for investigation.

 

Defender:

1 Erase Values from Memory and Disk (be aware of immutable data types: immutability makes it much more difficult to eliminate stored values, so it is better to work with mutable types from the start)

2 Use Memory Locks

3 Avoid Core Dumps

Invoking setrlimit() with minimum and maximum values set to 0 will prevent memory dumps on a crash. (Alternative: ulimit)
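A sketch of that setrlimit() call on a POSIX system (the wrapper name is hypothetical): setting both the soft and hard `RLIMIT_CORE` limits to 0 keeps a crash from spilling sensitive memory into a core file.

```c
#include <sys/resource.h>

/* Disable core dumps for this process so a crash cannot
   write process memory (keys, passwords, ...) to disk. */
int disable_core_dumps(void) {
    struct rlimit rl;
    rl.rlim_cur = 0;   /* soft limit: current cap */
    rl.rlim_max = 0;   /* hard limit: cannot be raised back */
    return setrlimit(RLIMIT_CORE, &rl);
}
```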

4 Avoid Serialization

// Stub to disable serialization in Java
private final void writeObject(ObjectOutputStream out) throws java.io.IOException {
    throw new java.io.IOException("Object serialization denied. (Not supported)");
}

5 Lock Down the Binary to Protect Memory

The key techniques are reducing scope, disallowing unintended extensions, and being cautious with static/global variables.

6 Objects—Compartmentalize

Every class, method, and variable that is not private provides a potential entry point for an attacker. [Java]

Finalize: attackers can subclass a class and override finalize() to gain access to an object's state during cleanup.

Clones: cloning can duplicate an object without invoking its constructors, bypassing any checks they perform.

7 Avoid Package Scope

Attackers can access package-private fields simply by adding their own class to the Jar file.

8 Avoid Inner Classes

When translated into bytecode, inner classes become accessible to any class in the package, and the enclosing class's private fields become non-private to permit full access from the inner class.

9 Expose the Bare Minimum

10 Filter the Output

11 Logging Output

Logging too much detail creates another administrative concern.

Summary

  • Be conscious of how your application might be exposed if memory is dumped to disk (intentionally or accidentally) or to a flash drive (caching issues).
  • Be aware of exposures via cloning and serialization; clearly document your reasons for supporting these features.
  • When it comes to variable or function scope, being paranoid is the better way to go.
  • Avoid logging absolutely everything; logs may be great for forensic and diagnostic purposes, but confidential material stored within may be leaked if the log is ever discovered or mishandled.

 

Failing Securely

Introduction to Error Handling Issues

Vulnerabilities: Denial-of-Service (DoS) attacks, corrupted runtime state, divulging sensitive information

Failure Open/Closed

The fall-back mode for unrecognized extensions is to treat the file like any other HTML file: failing open.

Vulnerabilities: Canonicalization Errors, Double Encoding, “Homographic” Spoofing.

The support for encoded characters allowed attackers to bypass the authorization checks and access restricted resources.

Instead of limiting the conditions for success, the developers were attempting to counter negative cases as incidents arose, leaving their systems vulnerable in the interim.

Clean Up the Runtime Environment

If an exception is to propagate back up the call stack, any file handles or database connections that were opened, buffers that were allocated, security contexts that were changed, etc. should be tidied up before throwing the exception any further.

Leaving handles open or access levels changed can lead to resource starvation, or escalated privileges for the remainder of the session.
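In C, which has no exceptions, the same cleanup discipline is usually implemented with a single cleanup label that every exit path funnels through. A sketch (the function is illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

/* Every error path jumps to one cleanup block, so no handle or
   buffer leaks regardless of where the failure occurs. */
int process_file(const char *path) {
    FILE *f = NULL;
    char *buf = NULL;
    int rc = -1;                 /* assume failure until proven otherwise */

    f = fopen(path, "r");
    if (!f) goto cleanup;
    buf = malloc(4096);
    if (!buf) goto cleanup;
    /* ... work with f and buf ... */
    rc = 0;                      /* success */
cleanup:
    free(buf);                   /* free(NULL) is a harmless no-op */
    if (f) fclose(f);
    return rc;
}
```

This also gives the function a single point of exit, as recommended later in these notes.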

Handling Error Messages

Even a simple “Access Denied to <Resource>” message reveals to an attacker that a particular resource is present (or recognized), which both identifies the resources and potentially fingerprints the environment.

Error Message Mapping: known error conditions can use error codes that map to two tables of error messages (e.g., a detailed table for administrators and logs, and a generic table for end users).

http://www.eventid.net
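A sketch of the two-table mapping (codes and wording are hypothetical): the same error code resolves to a detailed message for the log and a deliberately uninformative one for the user.

```c
#include <string.h>

/* Hypothetical internal error codes. */
enum { ERR_BAD_PASSWORD = 1001, ERR_NO_RESOURCE = 1002 };

/* Detailed table: for administrators and log files only. */
const char *detail_msg(int code) {
    switch (code) {
    case ERR_BAD_PASSWORD: return "auth: password hash mismatch";
    case ERR_NO_RESOURCE:  return "fs: requested resource not found";
    default:               return "unknown internal error";
    }
}

/* Generic table: end users learn nothing about what failed. */
const char *user_msg(int code) {
    (void)code;  /* intentionally ignored: no fingerprinting hints */
    return "Request could not be completed. Please try again.";
}
```

Because every failure looks identical to the user, an attacker cannot use the error text to enumerate resources or fingerprint the environment.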

 

Do not try to guess what the user meant with their input; if it is not a sane value, deny the request and prompt them to try again.

Suggested Improvements

When success is determined by non-zero values, the difference between bit-values should be significant in order to reduce the possibility of “bit flipping” attacks.
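A sketch of that idea (the sentinel values are illustrative, not from the original notes): choose success and failure codes that differ in many bit positions, so that a single flipped bit cannot turn a failure into a success.

```c
#include <stdint.h>

/* Sentinels chosen so every bit differs: a one-bit glitch in
   memory or a register cannot convert one into the other. */
#define AUTH_FAIL 0x55555555u
#define AUTH_PASS 0xAAAAAAAAu

/* Hamming distance between two status codes (GCC/Clang builtin). */
static inline int bit_distance(uint32_t a, uint32_t b) {
    return __builtin_popcount(a ^ b);
}
```

By contrast, the common 0/1 convention has a bit distance of exactly one.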

If your code is built to handle each case, and includes a catch-all/default, be cautious with the use of hard-stops for unrecognized states.

Single Point of Exit?

In C (which has no exceptions), always check a function's return code!

Alternatively: Libraries such as XXL(http://www.zork.org/xxl/xxl.html) provide robust exception handling functions.

For devices with processing limitations, excessively throwing exceptions might degrade performance to the point of service denial for users.

A boolean return value (or integer state code) would be more appropriate and less resource intensive (account retrieval and hashing already take up a number of cycles).

Caution: Assertion implementations can differ – some are hard stops, while others simply raise exceptions. Java assertions can be caught (AssertionErrors) while C and C++ asserts are generally hard-stops.

Summary

  • Failing open helps attackers, while failing closed hurts trusted users. This may have been left as a coding decision in the past, but in a security context it should be a design decision.
  • Managing and recovering from errors is a critical issue in software development. Equally critical is the need to avoid introducing vulnerabilities by failing to clean up during error handling.
  • Only developers and administrators need detailed information in their error messages. Leave the users in the dark (but be polite about it).
  • Use hard-stops sparingly unless the design calls for immediate termination.
  • Remember to tighten up your code using techniques such as a single point of exit, not misusing exceptions, checking return codes, …

Establishing Trust Boundaries

Trust Boundaries are an imaginary or physical border over which data requests and responses travel, and at which the enforcement of policy occurs.

When to Authenticate?

Each time the flow of control (path of execution) or data crosses trust boundaries.

 

In general, use (and re-use) components that have been deemed “trustworthy” by your organization. Do not build Authentication modules on your own.

Summary

  • Design the system to be paranoid by default, and clearly define which elements and resources need to be secured.
  • The general rule-of-thumb is: Do not trust information from external systems; the systems could be compromised and used maliciously against you.
  • Know your Trust Boundaries.
  • Implementing Trust Boundaries involves the use of Authentication, Authorization, Auditing, and Data Validation mechanisms.
  • Use trusted components.
  • Always run code with the Least Privileges possible.
