Research Abstracts - 2006

Continuous Testing of Software During Development

David Saff, Arjun Dayal & Michael D. Ernst

The Problem: Wasted Time

If a developer has a regression test suite available during development or maintenance but does not run it often, an opportunity is lost to catch regression errors early. The longer a regression error persists without being caught, the longer it may take to find its source and to correct the faulty code and any dependent code [1]. However, running the suite has a cost: remembering to run the tests, waiting for them to complete, and returning to the interrupted task all distract from development.

The Solution: Continuous Testing

Continuous testing runs regression tests in the background as a developer edits code, using excess cycles on a developer's workstation or nearby computers. By rapidly notifying developers of coding errors, continuous testing can improve both developer productivity and software quality. The developer is freed from deciding when to run tests, and errors are caught more quickly, especially those that the developer had no cause to suspect. Based on experimental and anecdotal evidence that continuous testing's users are more productive, we developed an implementation for Java, which has influenced version 4 of the JUnit unit testing framework and version 3.1 of the Eclipse development environment.
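
The core mechanism is simple to sketch. The following minimal example is an illustration only, not the authors' tool: it polls a single source file and reruns a JUnit 4 suite whenever the file is saved. The watched path and the ExampleTests class are placeholders; a real implementation would monitor the whole workspace and schedule runs on spare cycles.

    import org.junit.Assert;
    import org.junit.Test;
    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;
    import java.io.File;

    public class ContinuousTestRunner {

        // Placeholder suite standing in for the project's real regression tests.
        public static class ExampleTests {
            @Test public void addition() { Assert.assertEquals(4, 2 + 2); }
        }

        public static void main(String[] args) throws InterruptedException {
            File watched = new File("src/Example.java"); // assumed path; a real tool watches the whole source tree
            long lastSeen = 0;
            while (true) {
                long modified = watched.lastModified();
                if (modified != lastSeen) {              // the file was edited and saved
                    lastSeen = modified;
                    Result r = JUnitCore.runClasses(ExampleTests.class);
                    System.out.println(r.getRunCount() + " tests, "
                            + r.getFailureCount() + " failures");
                }
                Thread.sleep(1000);                      // poll quietly in the background
            }
        }
    }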

Implementation

Many modern development environments provide rapid feedback about compilation errors. This feature, called continuous compilation, is an inspiration for continuous testing. Our first implementation of continuous testing was used in a controlled experiment [2] comparing three groups of student developers: one provided with continuous testing, one provided only with continuous compilation, and one provided with no asynchronous notification of any type of development error. All participants used Emacs as their Java development environment.

Tool set used                   Completion rate
Continuous testing                   78%
Continuous compilation only          50%
No extra tools                       27%

Student developers using continuous testing were three times more likely than the control group to complete two different one-week programming assignments (which were part of their normal coursework). These statistically significant effects are due to continuous testing: they could not be explained by other incidental features of the experimental setup, such as time worked, regular testing, or differences in experience or tool preference.

[Screenshot of the continuous testing plug-in in Eclipse]

Figure 1: When continuous testing detects a test failure, it alerts the user in the program source code and in the Problems window, like Eclipse does for compile errors.

Encouraged by these results, we built and released an implementation of continuous testing as a plug-in [3] for the Eclipse integrated development environment (see Figure 1). Users can specify which test suite to run for each code project and choose from several schemes that automatically select and prioritize tests. Because proper prioritization can have a large impact on feedback speed, developers with unusual test suites can also write their own prioritization strategies and plug them in.
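
The plug-in's actual extension point is not reproduced here, but a prioritization strategy is essentially an ordering over tests. As a purely hypothetical sketch, the comparator below orders tests so that those that failed most recently run first, which tends to shorten the time until the developer sees a relevant result; the test-name keys and the failure-time map are assumptions made for illustration.

    import java.util.Comparator;
    import java.util.Map;

    public class RecentFailureFirst implements Comparator<String> {

        // Hypothetical record of when each test (by name) last failed,
        // in milliseconds since the epoch; absent means it has never failed.
        private final Map<String, Long> lastFailureTime;

        public RecentFailureFirst(Map<String, Long> lastFailureTime) {
            this.lastFailureTime = lastFailureTime;
        }

        public int compare(String testA, String testB) {
            long a = lastFailureTime.getOrDefault(testA, 0L);
            long b = lastFailureTime.getOrDefault(testB, 0L);
            return Long.compare(b, a); // more recent failure sorts earlier
        }
    }

A scheduler could sort the pending tests with such a comparator before each background run.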

Industry Impact

Over the last year, ideas from continuous testing have been adopted by several widely used software projects, some with direct support from continuous testing researchers. Eclipse 3.1 incorporated limited support for browsing previous test results and using those results to prioritize future test runs. Eclipse 3.2 will include a pluggable testing runtime based directly on continuous testing. Also, JUnit 4.0 now supports test prioritization and selection.
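
One way to use this in JUnit 4 is through its Request API, which builds a run from selected test classes and lets the caller impose an explicit ordering. The sketch below (with a stand-in SampleTests class) sorts tests alphabetically by display name for simplicity; a continuous-testing tool would instead sort by more useful information, such as which tests failed recently.

    import org.junit.Assert;
    import org.junit.Test;
    import org.junit.runner.Description;
    import org.junit.runner.JUnitCore;
    import org.junit.runner.Request;

    import java.util.Comparator;

    public class PrioritizedRun {

        // Stand-in suite; a real run would use the project's regression tests.
        public static class SampleTests {
            @Test public void fastCheck()  { Assert.assertTrue(1 < 2); }
            @Test public void otherCheck() { Assert.assertEquals(3, 1 + 2); }
        }

        public static void main(String[] args) {
            // Select the test class and impose an ordering on its tests.
            Request ordered = Request.aClass(SampleTests.class)
                    .sortWith(new Comparator<Description>() {
                        public int compare(Description a, Description b) {
                            return a.getDisplayName().compareTo(b.getDisplayName());
                        }
                    });
            System.out.println(new JUnitCore().run(ordered).wasSuccessful());
        }
    }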

Future Challenges

Continuous testing has seen the most use among developers who are already committed to frequently running small, fast tests. We believe that transforming test suites through test factoring will extend the reach of continuous testing, making it applicable to long-running regression tests and to tests that require expensive resources or human intervention.
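
Test factoring derives small, fast unit tests from larger, slower ones by capturing the behavior of expensive components and replaying it in later runs. The sketch below illustrates the idea only: ExchangeRateService, the recorded value, and the converter are all hypothetical, standing in for behavior that would be captured from a run of the original, unfactored test.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class FactoredConverterTest {

        // Hypothetical expensive dependency (e.g., a remote service) that
        // makes the original, unfactored test slow.
        interface ExchangeRateService {
            double rateFor(String currency);
        }

        // Replays a response captured during an earlier run of the full test,
        // so the factored test needs no network access and runs in milliseconds.
        static class RecordedRates implements ExchangeRateService {
            public double rateFor(String currency) {
                return 1.27; // captured value, assumed for illustration
            }
        }

        // Stand-in for the code under test.
        static double toEuros(double dollars, ExchangeRateService rates) {
            return dollars * rates.rateFor("EUR");
        }

        @Test
        public void convertsUsingRecordedRate() {
            assertEquals(12.7, toEuros(10.0, new RecordedRates()), 1e-9);
        }
    }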

User experience with continuous testing has confirmed the value of its test result feedback, and encouraged us to investigate even more intuitive support for determining the causes of failures and navigating to broken code.

References

[1] David Saff and Michael D. Ernst. Reducing wasted development time via continuous testing. In Fourteenth International Symposium on Software Reliability Engineering (ISSRE 2003), pages 281-292, Denver, CO, November 2003.

[2] David Saff and Michael D. Ernst. An experimental evaluation of continuous testing during development. In International Symposium on Software Testing and Analysis (ISSTA 2004), pages 76-85, Boston, MA, July 2004.

[3] David Saff and Michael D. Ernst. Continuous testing in Eclipse. In 2nd Eclipse Technology Exchange Workshop (eTX), Barcelona, Spain, March 2004.
