CSAIL Research Abstracts - 2005

Test Factoring: Focusing Test Suites for the Task at Hand

David Saff, Shay Artzi, Jeff H. Perkins & Michael D. Ernst

Problem: Slow, Unfocused Tests

Frequent execution of a test suite during software maintenance can catch regression errors early and bolster the developer's confidence that steady progress is being made. However, if the test suite takes a long time to produce feedback, the developer is slowed down, and the benefit of frequent testing is reduced, whether the testing is manual (as in agile methodologies such as Extreme Programming) or automated (as in continuous testing).

In the ideal case, all of the time used in testing a changed code base would be devoted to exercising the changed code and its direct interactions with the rest of the system. Any time spent testing previously tested, unchanged parts of the code is wasted.

As an example, consider a developer enhancing an accounting application that performs financial calculations based on records retrieved from a third-party database server. A natural way to test this application automatically would be to insert test records into the database, run the application, and verify that the correct result is returned. However, if the database operations are computationally expensive, and only the financial algorithms are being updated, not the database interaction, then the majority of the time spent running such a test suite is wasted on communication with the database server, which has not changed and which the developer trusts to be deterministic.
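To make the scenario concrete, here is a minimal sketch of such an end-to-end test, written in the JUnit style of the period. Every class and method name (AccountingApp, RecordsDatabase, Record, monthlyInterest) is hypothetical, invented only to illustrate the scenario; the expected value is likewise for illustration only.

    import junit.framework.TestCase;

    // End-to-end test: exercises both the financial calculation under
    // development and the unchanged third-party database layer.
    public class InterestCalculationTest extends TestCase {
        public void testMonthlyInterest() throws Exception {
            // Slow: network round-trips to the database server.
            RecordsDatabase db = RecordsDatabase.connect("dbserver.example.com");
            db.insert(new Record("acct-1", 1000.00));

            AccountingApp app = new AccountingApp(db);

            // Only this calculation is being changed; the setup above
            // re-tests trusted, deterministic database code on every run.
            assertEquals(4.17, app.monthlyInterest("acct-1"), 0.01);
        }
    }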

How can automated tools be brought to bear to reduce this wasted testing time?

Solution: Test Factoring

We propose test factoring [1], an automatic method for generating focused, quick unit tests from general, slow tests. Each new unit test runs more quickly than the original while testing less functionality, perhaps exercising only a single component of the code. Test factoring occurs ahead of time, not at test time. It can use structural properties inferred from static analysis of the code base and tests, and dynamic information obtained by running an instrumented version of the original test.

A test is factored by applying one or more test factorings. We believe that test factorings can be cataloged, shared, and automated, just as code refactorings are. As an example, consider the test factoring Introduce Mock, which operates on a codebase divided into two realms. The tested realm is code that is being changed, into which regression errors may be introduced. The mocked realm is code that is not changing and should be simulated during testing to improve performance. The Introduce Mock procedure can be outlined as follows:

  1. Transformation: The code undergoes a semantics-preserving transformation to facilitate dynamic instrumentation. This transformation is accomplished using the Goral instrumentation framework, which separates the type hierarchy from the inheritance hierarchy in Java code, replacing all object references to concrete types with references to abstract interfaces.
  2. Trace capture: The original test is executed (we assume it passes), and traces are collected of calls from the tested realm into the mocked realm and vice versa.
  3. Mock code generation: The traces are analyzed and code is generated for the mock objects, which will simulate the mocked realm in the final factored test. (A sketch of such a generated mock follows this list.)
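To illustrate steps 1 and 3 under the same hypothetical names as the accounting example above: the transformation introduces an abstract interface for the database, and the generator emits a mock that replays the captured trace. The trace table below is hand-written for readability; an actual generator would derive its contents from the traces recorded in step 2, and the Record class is assumed from the earlier sketch.

    import java.util.Arrays;
    import java.util.Iterator;

    // Hypothetical interface introduced by the step-1 transformation, so
    // that the tested realm refers to the database only through an
    // abstract type that a mock can implement.
    interface RecordsDatabase {
        void insert(Record r);
        Record retrieve(String id);
    }

    // Generated mock: replays the recorded calls and return values
    // instead of talking to the real server, failing fast on deviation.
    class MockRecordsDatabase implements RecordsDatabase {
        // Expected calls, in order: {method, argument, recorded return}.
        private final Iterator<String[]> trace = Arrays.asList(
            new String[] {"insert",   "acct-1:1000.0", null},
            new String[] {"retrieve", "acct-1",        "acct-1:1000.0"}
        ).iterator();

        public void insert(Record r) {
            replay("insert", r.toString());
        }

        public Record retrieve(String id) {
            return Record.parse(replay("retrieve", id));
        }

        // Checks each actual call against the recorded trace and returns
        // the recorded result; a mismatch means the tested realm's
        // interaction with the mocked realm has changed.
        private String replay(String method, String arg) {
            if (!trace.hasNext())
                throw new AssertionError("unexpected call: " + method + "(" + arg + ")");
            String[] expected = trace.next();
            if (!expected[0].equals(method) || !expected[1].equals(arg))
                throw new AssertionError("expected " + expected[0] + "(" + expected[1]
                        + ") but got " + method + "(" + arg + ")");
            return expected[2];
        }
    }

The factored test then constructs AccountingApp with a MockRecordsDatabase instead of a live connection, so each run exercises only the financial calculation and finishes without any network communication. Any change to how the tested realm calls the database surfaces as a trace mismatch, signaling that the factored test may be stale and the original test should be re-run.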

An initial implementation of test factoring is complete and is currently being evaluated against programs of up to 300,000 lines of code. We are identifying opportunities to optimize both trace capture and the performance of the generated factored tests, and considering the most useful ways to report results to users.

References

[1] David Saff. Test factoring: Focusing test suites on the test at hand. In ICSE'05, Proceedings of the 27th International Conference on Software Engineering, St. Louis, MO, USA, May 2005.
