CSAIL Research Abstracts - 2005

Automatic Software Upgrades for Distributed Systems

Sameer Ajmani & Barbara Liskov

Introduction

Internet services face challenging and ever-changing requirements: huge quantities of data must be managed and made continuously available to rapidly growing client populations. Examples include online email services, search engines, persistent online games, scientific and financial data processing systems, content distribution networks, and file sharing networks.

The distributed systems that provide these services are large and long-lived and therefore will need changes (upgrades) to fix bugs, add features, and improve performance. Yet while a system is upgrading, it must continue to provide service to users. The aim of our research is to develop a flexible and generic automatic upgrade system that enables distributed systems to provide service during upgrades.

Approach

Our system is designed to satisfy a number of requirements. To begin with, upgrades must be easy to define. In particular, we want modularity: to define an upgrade, the upgrader must understand only a few versions of the system software, e.g., the current and new versions.

In addition, we require generality: an upgrade should be able to change the software in arbitrary ways. This implies that the new version can be incompatible with the old one: it can stop supporting legacy behavior and can change communication protocols. Generality is important because otherwise a system must continue to support legacy behavior, which complicates software and makes it less robust. Our approach allows legacy behavior to be supported as needed, but in a way that avoids complicating the current version and that makes it easy to retire the legacy behavior when the time comes.

A third point is that upgrades must be able to retain yet transform persistent state. Persistent state may need to be transformed in some application-dependent way, e.g., to move to a new file format, and transformations can be costly, e.g., if the local file state is large. We do not attempt to preserve volatile state (e.g., open connections) because upgrades can be scheduled (see below) to minimize the inconvenience to users of losing volatile state.
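As a concrete illustration of an application-dependent state transformation, the sketch below converts a node's persistent state from one hypothetical format to another. The function name, format versions, and record layout are all illustrative assumptions, not details of our implementation.

```python
# Hypothetical transform function: restructures version-1 persistent state
# (a flat key/value dict) into a version-2 format that tags each record.
# The formats and names here are illustrative only.
def transform_v1_to_v2(old_state: dict) -> dict:
    """Convert v1 state {key: value} into v2 {"version": 2, "records": [...]}."""
    return {
        "version": 2,
        "records": [{"key": k, "value": v} for k, v in old_state.items()],
    }

old = {"alice": 3, "bob": 7}
new = transform_v1_to_v2(old)
```

On a node with a large local store, a transform like this is what makes an upgrade time-consuming, which is why deployment scheduling (below) matters.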

A fourth requirement is automatic deployment . The systems of interest are too large to upgrade manually (e.g., via remote login). Instead, upgrades must be deployed automatically: the upgrader defines an upgrade at a central location, and the upgrade system propagates and installs it on each node.
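The deployment model can be sketched as follows: an upgrade defined once at a central location is propagated to every node, which installs it when notified. The class and method names below are hypothetical, and a real system would propagate asynchronously rather than in a loop.

```python
# Hypothetical sketch of automatic deployment: the upgrader defines an
# upgrade centrally; the upgrade system installs it on each node.
class Node:
    def __init__(self, name: str):
        self.name = name
        self.version = 1

    def install(self, upgrade: dict) -> None:
        # In a real system this would fetch and run the new software.
        self.version = upgrade["new_version"]

class UpgradeServer:
    def __init__(self, nodes):
        self.nodes = nodes

    def deploy(self, upgrade: dict) -> None:
        # Propagate to each node; a real deployment would be asynchronous
        # and subject to a schedule (see controlled deployment below).
        for node in self.nodes:
            node.install(upgrade)

nodes = [Node(f"n{i}") for i in range(3)]
UpgradeServer(nodes).deploy({"new_version": 2})
```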

A fifth requirement is controlled deployment. The upgrader must be able to control when nodes upgrade. Reasons for controlled deployment include: allowing a system to provide service while an upgrade is happening, e.g., by upgrading replicas in a replicated system one at a time (especially when the upgrade involves a time-consuming persistent state transform); testing an upgrade on a few nodes before installing it everywhere; and scheduling an upgrade to happen at times when the load on nodes being upgraded is light.
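A simple schedule of the kind described above can be sketched as a function over the set of replicas. This is a minimal, hypothetical example of a one-at-a-time schedule; the function names are ours, not part of any real scheduling API.

```python
# Hypothetical one-at-a-time scheduling function: each replica finishes
# upgrading (including any slow persistent-state transform) before the
# next one starts, so a replicated service stays available throughout.
def one_at_a_time(replicas, do_upgrade):
    upgraded = []
    for r in replicas:
        do_upgrade(r)        # blocks until this replica is done
        upgraded.append(r)   # only then move on to the next replica
    return upgraded

order = []
one_at_a_time(["r1", "r2", "r3"], order.append)
```

Other schedules (e.g., "upgrade only when local load is light" or "upgrade a test subset first") would replace the loop body with a predicate that delays or filters the upgrade.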

A sixth requirement is continuous service . Controlled deployment implies there can be long periods of time when the system is running in mixed mode, i.e., when some nodes have upgraded and others have not. Nonetheless, the system must provide service, even when the upgrade is incompatible. This implies the upgrade system must provide a way for nodes running different versions to interoperate, without restricting the kinds of changes an upgrade can make.
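One way to realize this interoperation, in the spirit of our simulation objects, is for an upgraded node to expose the old interface to not-yet-upgraded clients, translating legacy calls into calls on the new implementation. The counter classes below are a hypothetical sketch, not code from our system.

```python
# Hypothetical simulation object: a version-2 node simulates the old
# version-1 interface for legacy clients during mixed-mode operation.
class CounterV2:
    """New implementation: adds arbitrary step sizes."""
    def __init__(self):
        self.total = 0

    def add(self, n: int) -> None:
        self.total += n

class CounterV1Sim:
    """Simulates the old v1 interface (increment by one) on top of v2."""
    def __init__(self, v2: CounterV2):
        self._v2 = v2

    def increment(self) -> None:   # legacy v1 method
        self._v2.add(1)            # delegated to the new implementation

real = CounterV2()
legacy_view = CounterV1Sim(real)
legacy_view.increment()   # a not-yet-upgraded client's call
real.add(5)               # an upgraded client's call on the same object
```

Note that both versions observe one shared state, which is exactly the situation that raises the mixed-mode question discussed next.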

Progress

We have developed an upgrade infrastructure that supports these requirements. Ours is the first approach to provide a complete solution for automatic and controlled upgrades in distributed systems. It allows upgraders to define scheduling functions that control upgrade deployment, transform functions that control transforming persistent state, and simulation objects that enable the system to run in mixed mode. Our techniques are either entirely new, or are major extensions of what has been done before. We support all schedules used in real systems, and our support for mixed mode improves on what is done in practice.

Our support for mixed mode operation raises a question: what should happen when a node runs several versions at once, and different clients interact with the different versions? We address this question by defining requirements for upgrades and providing a way to specify upgrades that enables reasoning about whether the requirements are satisfied. The specification captures the meaning of executions in which different clients interact with different versions of an object and identifies when calls must fail due to irreconcilable incompatibilities. The upgrade requirements and specification technique are entirely new.
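To illustrate the failure case identified by the specification, the hypothetical sketch below shows a legacy operation whose behavior cannot be reconciled with an incompatible new version: the reconcilable call is forwarded, and the irreconcilable one fails explicitly rather than returning a wrong answer. All names here are illustrative.

```python
# Hypothetical sketch: reconcilable legacy calls are forwarded to the new
# version; irreconcilable ones must fail rather than misbehave.
class StoreV2:
    def __init__(self):
        self.data = {}

    def put(self, k, v):
        self.data[k] = v

class StoreV1Sim:
    """Simulates the old v1 store interface on top of an incompatible v2."""
    def __init__(self, v2: StoreV2):
        self._v2 = v2

    def put(self, k, v):
        self._v2.put(k, v)   # reconcilable: same meaning in both versions

    def delete(self, k):
        # Assume v1 "delete" has no consistent meaning after the v2
        # change: the specification says this call must fail.
        raise NotImplementedError("delete is unsupported after the v2 upgrade")

s = StoreV1Sim(StoreV2())
s.put("x", 1)
try:
    s.delete("x")
    call_failed = False
except NotImplementedError:
    call_failed = True
```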

We have implemented a prototype, called Upstart, that automatically deploys upgrades on distributed systems. Results of experiments using Upstart show that our infrastructure introduces only modest overhead, and therefore our approach is practical.

Research Support

This research was supported by the National Science Foundation under grant ANI-0082503 (http://project-iris.net) and by Project Oxygen.

