CSAIL Publications and Digital Archive
Research Abstracts - 2006
Checking Structural Properties of Programs

Mana Taghdiri, Greg Dennis & Felix Chang


Robust components are essential to the construction of dependable software systems. Much progress has been made in the field of program verification, especially in the use of theorem proving, static analysis, and model checking. A major obstacle to verifying components against their specifications still remains, however. It seems that no existing analysis can automatically check the kinds of rich data structure properties that appear in the specifications of object-oriented procedures.

We have devised a practical technique for finding bugs in object-oriented procedures. It can check complex user-defined structural properties -- that is, properties of the configuration of the objects on the heap -- and it generates counterexample traces with no false alarms. It is modular, fully automatic, and requires no annotations beyond the specification to be checked. Furthermore, the analysis is exhaustive with respect to some finite bounds: it is guaranteed to find any counterexample trace that exists within the checked bounds, although a failure to find counterexamples does not constitute a proof. In practice, however, many counterexamples are found within small bounds.
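Bounded exhaustive checking of this kind can be illustrated with a toy Python sketch (this is not the actual tool; the buggy routine, the specification, the element domain, and the bound are all hypothetical):

```python
from itertools import product

def buggy_remove(xs, x):
    """Hypothetical seeded bug: removes x only when it is the first element."""
    if xs and xs[0] == x:
        return xs[1:]
    return list(xs)

def spec_holds(xs, x, out):
    """Post-condition: one occurrence of x is removed when x is present,
    and the list is unchanged otherwise."""
    if x in xs:
        return out.count(x) == xs.count(x) - 1
    return out == list(xs)

def find_counterexample(bound, domain=(0, 1)):
    """Exhaustively try every input within the finite bounds."""
    for n in range(bound + 1):
        for xs in product(domain, repeat=n):
            for x in domain:
                out = buggy_remove(list(xs), x)
                if not spec_holds(list(xs), x, out):
                    return list(xs), x   # concrete counterexample
    return None  # no counterexample within the bounds -- not a proof

print(find_counterexample(3))  # → ([0, 1], 1)
```

Because every input within the bound is tried, any violation of the specification that fits within the bound is guaranteed to be reported, exactly as the analysis above guarantees modulo its finite bounds.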


The method relies principally on a two-step translation: from code and specification to a formula in the first-order relational logic of Alloy [1], and then from Alloy to a boolean satisfiability (SAT) problem. An off-the-shelf SAT solver is then used to find a solution that constitutes a counterexample.
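The pipeline can be caricatured in a few lines of Python: heap configurations within a small bound become boolean variables, the well-formedness constraints and the negated property become a formula over them, and any satisfying assignment is a counterexample. This sketch replaces Alloy, Kodkod, and the off-the-shelf SAT solver with a naive brute-force search; the variable naming and the two-node bound are illustrative assumptions:

```python
from itertools import product

N = 2  # finite bound on heap size
# Boolean variable (i, j) means "the 'next' field of node i points to node j".
vars_ = [(i, j) for i in range(N) for j in range(N)]

def well_formed(a):
    """'next' is a partial function: each node has at most one successor."""
    return all(sum(a[(i, j)] for j in range(N)) <= 1 for i in range(N))

def violates_property(a):
    """Negation of the checked property 'no node points to itself'."""
    return any(a[(i, i)] for i in range(N))

def solve():
    """Naive stand-in for a SAT solver: a satisfying assignment
    is a heap configuration that refutes the property."""
    for bits in product([0, 1], repeat=len(vars_)):
        a = dict(zip(vars_, bits))
        if well_formed(a) and violates_property(a):
            return {k: v for k, v in a.items() if v}
    return None

print(solve())  # → {(1, 1): 1}, i.e. a heap in which node 1 points to itself
```

A real SAT solver explores this space symbolically rather than by enumeration, which is what makes the approach scale beyond toy bounds.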

An earlier attempt to pursue this approach was embodied in the Jalloy [2] tool. While Jalloy succeeded in checking structural properties of Java code, it could not scale to programs large enough to make it useful in practice. Our current work on scalability is divided into two distinct but complementary efforts: (1) we are implementing a more compact encoding of object-oriented procedures in Alloy, embodied in a new tool called Forge [3]; and (2) we are inferring specifications of called procedures to avoid translating the entire call graph of the procedure under analysis, an idea implemented in a tool called Karun [4].

Unlike Jalloy, Forge reduces the generated relational formula to SAT not with the Alloy Analyzer, which was designed as a desktop CAD tool rather than as a backend for other analysis tools, but with the new Kodkod relational engine, which was designed expressly for this kind of application. Compared to the Alloy Analyzer, Kodkod allows clients to supply partial solutions to Alloy formulas and offers superior detection of shared sub-expressions and sub-formulas. Forge's encoding of variable and field assignment exploits the improved sharing detection, and its handling of integer arithmetic makes extensive use of partial solutions.
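Sharing detection of this kind is commonly implemented by hash-consing: structurally identical formulas are represented by a single object, so the SAT encoding introduces clauses for each shared sub-formula only once. A minimal generic sketch in Python (not Kodkod's actual implementation):

```python
class Node:
    """Formula node with hash-consing: constructing a formula that is
    structurally equal to an existing one returns the same object."""
    _pool = {}

    def __new__(cls, op, *args):
        key = (op, args)
        if key not in cls._pool:
            obj = super().__new__(cls)
            obj.op, obj.args = op, args
            cls._pool[key] = obj
        return cls._pool[key]

x = Node("var", "x")
# Build (x or y) and (x or y); the two disjuncts are the same object.
f = Node("and",
         Node("or", x, Node("var", "y")),
         Node("or", x, Node("var", "y")))
print(f.args[0] is f.args[1])  # → True
```

Because shared sub-formulas are physically identical, a translator can detect sharing with a pointer comparison instead of a structural walk, and emit each shared piece of the boolean encoding exactly once.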

Karun infers the specifications of called procedures automatically as needed. It is based on a technique known as "counterexample-guided abstraction refinement" [5]. It starts with an abstraction of the program in which each procedure call is replaced with a rough initial specification that over-approximates its effect. The specifications are then iteratively refined in response to spurious counterexamples. Therefore, only as much information about a procedure is used as is necessary to analyze its caller. Karun translates both the abstract program and the given property to a boolean formula using Forge, and thereby indirectly using Kodkod and a SAT Solver. New specifications are then inferred from a proof of unsatisfiability generated by the underlying SAT solver.
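The refinement loop can be sketched as follows. In this hypothetical Python toy, a callee `concrete_abs` is initially abstracted by a specification that permits any result; a spurious counterexample triggers a refinement, which is hard-coded here, whereas Karun derives it from the SAT solver's proof of unsatisfiability:

```python
def concrete_abs(x):
    """The callee's actual implementation."""
    return x if x >= 0 else -x

def check_caller(spec, domain):
    """Look for (input, callee-result allowed by spec) that violates
    the caller's property: the result must be non-negative."""
    for x in domain:
        for r in domain:
            if spec(x, r) and not r >= 0:
                return (x, r)  # abstract counterexample
    return None

def cegar(domain=range(-2, 3)):
    spec = lambda x, r: True  # initial over-approximation: any result allowed
    while True:
        cex = check_caller(spec, domain)
        if cex is None:
            return "no counterexample within bounds"
        x, r = cex
        if concrete_abs(x) == r:
            return ("real counterexample", cex)
        # Spurious: strengthen the spec with a fact about the callee.
        # (Karun infers such facts from an unsatisfiability proof.)
        spec = lambda x, r: r >= 0

print(cegar())  # → 'no counterexample within bounds'
```

The loop uses only as much of the callee's behavior as the caller's property requires: one refinement step suffices here, and the callee's body is never translated.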


We have applied Forge to a variety of implementations of a list datatype. The starting point of the study was a pre-existing JML specification of the Java List class. Ten separate implementations of this specification were considered; five of these were variants into which defects had been seeded for an exercise on test coverage in an undergraduate software engineering class. The remaining five were the standard implementation from the Java Collections Framework, two implementations from the GNU Trove library, and two implementations from the Apache Jakarta Commons-Collections library. We implemented a mechanical translation from JML to Alloy and checked each of the ten implementations for conformance to the JML specification. The analysis scaled adequately for the task at hand, in all cases terminating within a few minutes. All of the seeded defects were revealed. In the latest version of the GNU Trove library, the analysis discovered a previously unknown bug as well as a deliberate specification violation, and it detected several bugs in an earlier version of the library.

In other experiments, we used Karun to check partial specifications of procedures containing several nested procedure calls. We compared our results with those of a technique that uses the same backend and the same translation method but inlines all procedure calls. Our experiments show that the procedure abstraction technique considerably improves the analysis time by (1) reducing the translation time and (2) generating smaller boolean formulas that can be checked faster. A detailed description of this work can be found elsewhere [4].

Research Support

This research is supported by the Design Conformant Software grant from the ITR program of the National Science Foundation.


[1] I. Shlyakhter. Declarative Symbolic Pure-Logic Model Checking. PhD thesis, MIT, Cambridge, MA, February 2005.

[2] D. Jackson and M. Vaziri. Finding Bugs with a Constraint Solver. In Proc. of the International Symposium on Software Testing and Analysis (ISSTA), Portland, OR, 2000.

[3] G. Dennis, F. Chang, and D. Jackson. A Case Study in Modular Verification. Submitted for publication, 2006.

[4] M. Taghdiri. Inferring Specifications to Detect Errors in Code. In Proc. of the International Conference on Automated Software Engineering (ASE '04), pp. 144--153, Austria, September 2004.

[5] E. Clarke, O. Grumberg, S. Jha, Y. Lu, and H. Veith. Counterexample-guided Abstraction Refinement. In Proc. of Computer Aided Verification (CAV), pp. 154--169, 2000.


MIT logo Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu