Sunday, October 04, 2009

Independent Study: Static detection of security vulnerabilities in scripting languages

(This is one of a series of posts about papers I'm reading for an independent study with Prof. Evan Chang at the University of Colorado, Boulder. The format is similar to that of a review of a paper submitted to a computer science conference. These are already-published papers, however, so I'll be writing with the benefit of hindsight, especially when the paper was published several years ago.)

Submission: Static detection of security vulnerabilities in scripting languages [PDF]

Please give a brief, 2-3 sentence summary of the main ideas of this paper:

SQL injection and other string-based exploits to which web applications are vulnerable can be detected by performing static analysis on web applications written in dynamic languages. The static analysis is supplemented with information gleaned from the symbolic execution of the source code.
What is the strength of this paper (1-3 sentences):

Techniques for automatically detecting SQL injection attacks in web applications written in dynamic languages are sorely needed.

What is the weakness of this paper (1-3 sentences):

I am skeptical of the usefulness of the interactive mode of the checker -- which is triggered when regular expressions are used to validate unsafe data -- for the average PHP programmer. Also, while the authors refer to Perl's taint mode (man perlsec) as an alternative way of sanitizing data, it would be useful to compare the effectiveness of their approach to Perl's built-in approach.

Symbolic execution has been used before in the DART paper, but there the purpose was to determine what input values would cause a statically typed program to take certain paths during automated testing; here the purpose is to determine whether any memory locations hold untrusted data.

The authors describe a checker that is effective at detecting SQL injection vulnerabilities.

Worth Solving

This problem is worth solving. It is all too easy for programmers to fail to untaint input received from the user of a web application, so a reliable, automated way of detecting such vulnerabilities is necessary.

Reasonably confident
Detailed Comments
Analysis starts with block-level symbolic execution, which generates a block summary. Intraprocedural analysis takes block summaries (a six-tuple, described below) as input and generates a four-tuple function summary, which is consumed by the interprocedural analysis phase.

The use of symbolic execution here reminds me of the DART paper. Here, symbolic execution is used to understand the behavior of a program written in a dynamic language; in the DART paper, it was used to force a program written in a statically typed language to take different paths during automated testing.

Block Analysis

At the block level, the code is executed symbolically, and the resulting summary is used to perform analysis at intra- and interprocedural levels. Using a summary at the higher levels expedites the analysis.

The authors define a language to model what they believe is an appropriate subset of PHP for detecting SQL injection attacks with their simulator (i.e. the component of their checker which symbolically executes blocks of PHP code).

They devote particular attention to how they model strings, because strings are such essential types in dynamic languages: "Strings are typically constructed through concatenation. For example, user inputs (via HTTP get and post methods) are often concatenated with a pre-constructed skeleton to form an SQL query.... String values are represented as an unordered concatenation of string segments, which can be one of the following: a string constant, the initial value of a memory location on entry to the current block (l_0), or a string that contains initial values of zero or more elements from a set of memory locations (contains(sigma))." The latter part of the definition of strings in this model allows the checker to track the flow of untrusted data through a web application.
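The segment-based representation can be sketched in Python. The names Const, InitVal, and Contains are my own, chosen to mirror the paper's three segment kinds; this is an illustration of the model, not the authors' code:

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Const:
    """A string constant, e.g. part of a pre-constructed SQL skeleton."""
    text: str

@dataclass(frozen=True)
class InitVal:
    """The initial value of a memory location on entry to the block (l_0)."""
    location: str

@dataclass(frozen=True)
class Contains:
    """A string containing initial values of zero or more locations in sigma."""
    sigma: FrozenSet[str]

# The PHP expression "SELECT ... name = '" . $_GET['name'] . "'" becomes an
# unordered set of segments:
query = {
    Const("SELECT * FROM users WHERE name = '"),
    InitVal("$_GET['name']"),
    Const("'"),
}

def tainted_locations(value):
    """Collect every memory location whose entry value flows into `value`."""
    locs = set()
    for seg in value:
        if isinstance(seg, InitVal):
            locs.add(seg.location)
        elif isinstance(seg, Contains):
            locs |= seg.sigma
    return locs
```

Because segments record which memory locations flow into a string, the checker can tell which inputs must be sanitized before the string reaches a database query.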

The motivation for and definition of untaint (as related to the definition of the Boolean type) in the modelling language is unclear to me.

The untainting of strings "occur[s] via function calls, casting to safe types (e.g. int, etc), regular expression matching (!), and other types."

The result of the block-level analysis is a six-tuple consisting of:

- an error set: "the set of input variables that must be sanitized before entering the current block"
- definitions: "the set of memory locations defined in the current block"
- value flow: "the set of pairs of [memory] locations (l_1, l_2) where the string value of l_1 on entry becomes a substring of l_2 on exit"
- a termination predicate: whether the current block causes the program to exit
- a return value: undefined if and only if the termination predicate is true
- an untaint set: the set of [memory] locations that are sanitized by the current block, for each of the block's successors
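As a rough sketch, the six-tuple could be rendered as a Python record (the field names are mine, not the paper's):

```python
from dataclasses import dataclass
from typing import Dict, Optional, Set, Tuple

@dataclass
class BlockSummary:
    error_set: Set[str]                # inputs that must be sanitized before entry
    definitions: Set[str]              # memory locations defined in the block
    value_flow: Set[Tuple[str, str]]   # (l1, l2): l1's entry value is a substring of l2 on exit
    terminates: bool                   # does the block cause the program to exit?
    return_value: Optional[str]        # undefined (None) iff terminates is True
    untaint_sets: Dict[str, Set[str]]  # per successor block: locations sanitized here
```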

Intraprocedural Analysis

This phase of the analysis uses the six-tuple block summaries generated by the previous phase to generate a four-tuple consisting of:

- an error set: "the set of memory locations ... whose value may flow into a database query, and therefore must be sanitized before invoking the current function"
- a return set: "the set of parameters or global values that may be a substring of the return value" of the function
- sanitized values: "the set of parameters or global variables that are sanitized on function exit"
- program exit: "whether the current function terminates program execution on all paths"
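In the same spirit as the block summary above, the function-level four-tuple might look like this (again, field names are my own):

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class FunctionSummary:
    error_set: Set[str]    # params/globals whose values may flow into a query
    return_set: Set[str]   # params/globals that may be substrings of the return value
    sanitized: Set[str]    # params/globals sanitized on function exit
    exits_program: bool    # does the function end execution on all paths?
```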

Interprocedural Analysis

This phase uses the previously generated function-level tuples to substitute actual for formal parameters in the error set, and marks memory locations as safe when they are unconditionally untainted. It also involves the use of the Boolean-related notion of untaint that I still don't understand.
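The substitution step might look roughly like this in Python. All names here are mine, and this is only an illustration of the idea, not the authors' implementation:

```python
def apply_summary(error_set, sanitized, formals, actuals, caller_safe):
    """Apply a callee's summary at a call site: substitute actuals for
    formals in the error set, then drop locations the callee sanitizes."""
    binding = dict(zip(formals, actuals))
    # Formals in the callee's error set become the caller's actual arguments.
    propagated = {binding.get(loc, loc) for loc in error_set}
    # Locations the callee unconditionally untaints are now safe in the caller.
    safe = caller_safe | {binding.get(loc, loc) for loc in sanitized}
    return propagated - safe

# The callee requires its $name parameter to be sanitized; at this call site
# the actual argument is a raw request parameter, so the obligation propagates.
errors = apply_summary(
    error_set={"$name"},
    sanitized=set(),
    formals=["$name"],
    actuals=["$_GET['user']"],
    caller_safe=set(),
)
```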

In what order are functions analyzed? "Our algorithm analyzes the source codebase in topological order based on the static function call graph." Recursive calls are not resolved by computing a fixed point; instead, the system inserts a no-op summary when it encounters recursion.
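A sketch of that traversal (my own names; the real checker operates on PHP source, and its summaries are the tuples described above):

```python
NOOP_SUMMARY = None  # stands in for an empty summary at a recursive back edge

def analyze_program(call_graph, analyze_fn):
    """Analyze functions callee-first over the static call graph,
    substituting a no-op summary when recursion is detected."""
    summaries = {}
    in_progress = set()

    def visit(fn):
        if fn in summaries:
            return summaries[fn]
        if fn in in_progress:       # back edge: recursion detected
            return NOOP_SUMMARY     # no fixed-point iteration; use a no-op
        in_progress.add(fn)
        callee_summaries = {c: visit(c) for c in call_graph.get(fn, [])}
        summaries[fn] = analyze_fn(fn, callee_summaries)
        in_progress.discard(fn)
        return summaries[fn]

    for fn in call_graph:
        visit(fn)
    return summaries
```

Topological order guarantees that each function's summary is available before any of its callers is analyzed, which is what makes the single-pass, summary-based analysis fast.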

Since regular expressions are self-contained automata, little computational devices, I was surprised when the authors remarked that strings can be marked as untainted when they are checked by regular expressions. It sounded almost magical. It's not quite that. "Some regular expressions match well-formed input while others detect malformed input; assuming one way or the other results in either false positives or false negatives.... To make it easy for the user to specify the sanitization effects of regular expressions, the checker has an interactive mode where the user is prompted when the analysis encounters a previously unseen regular expression and the user's answers are recorded for future reference."
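The interactive mode could be sketched as a simple answer cache. The names are hypothetical, and the stand-in ask function takes the place of the real prompt shown to the user:

```python
class RegexSanitizationCache:
    """Ask the user once per unseen regex whether a match sanitizes the
    string, and record the answer for future reference."""

    def __init__(self, ask):
        self.ask = ask      # callable: pattern -> bool (prompts the user)
        self.answers = {}   # pattern -> does a match sanitize the value?

    def sanitizes(self, pattern):
        if pattern not in self.answers:
            self.answers[pattern] = self.ask(pattern)
        return self.answers[pattern]
```

This mirrors the design choice in the quote: rather than guess whether a regex matches well-formed or malformed input, the checker defers to the user and caches the decision.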

The authors mention the built-in Perl taint mode (man perlsec). This suggests that the proper way of implementing a checker like the one described here is to integrate it into the language runtime.
