CSAIL Research Abstracts - 2005

horizontal line

Program Partitioning for Compartmentalized Secure Execution

Charles W. O'Donnell, Marten van Dijk, G. Edward Suh & Srinivas Devadas

Introduction

With the increase in the complexity of applications and the interconnectivity of computing systems, security concerns have become a first-order issue. While security problems have been investigated for a long time, most solutions are programmatic in nature and do not require any change to the underlying platform on which an application runs. Recently, a considerable amount of work has been done on designing computing platforms that provide additional security functionality.

Architectures such as XOM [1] and AEGIS [2] offer applications protection from both software and physical attacks by introducing privileged secure execution modes. In particular, AEGIS provides a private and tamper-resistant environment which protects both the integrity and the privacy of an application executing in this mode. However, these security environments are of limited use if an application must be run entirely within a secure execution mode.

Therefore, it is important for a secure platform to offer the functionality to dynamically change between a secure mode of execution and an insecure mode of execution while an application is running. This provides a more flexible trust model. It reduces the amount of trusted code within secure applications (making verification easier), enables the use of libraries and APIs, and can improve the speed of execution. The AEGIS secure processor provides this mode switching with its “SSP” environment.

Ultimately, an architecture only provides an application the ability to switch to a secure execution environment. For an application to make use of episodic secure execution, every constituent program instruction must be associated with a particular mode before it is executed. It is possible for this association to be performed dynamically during program operation or statically at compile time. In either case, this assignment problem can be seen as a partitioning of the program instruction code into private black-box blocks and public white-box blocks.
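The static assignment described above can be sketched as follows. This is a minimal, hypothetical illustration (the block names, `Mode` enum, and `run` simulator are inventions for exposition, not AEGIS's actual interface): each basic block carries a mode label, and execution switches modes only at block boundaries.

```python
# Hypothetical sketch: a program partitioned into basic blocks, each
# statically assigned an execution mode. Names are illustrative only.
from enum import Enum

class Mode(Enum):
    PUBLIC = "public"    # ordinary, insecure execution
    PRIVATE = "private"  # tamper-resistant secure execution

# A partition maps each basic-block id to the mode it must run in.
partition = {
    "load_input":   Mode.PUBLIC,
    "check_secret": Mode.PRIVATE,  # touches private data
    "emit_result":  Mode.PUBLIC,
}

def run(program_order, partition):
    """Simulate execution, emitting a mode switch at each boundary."""
    trace = []
    current = Mode.PUBLIC
    for block in program_order:
        mode = partition[block]
        if mode is not current:
            trace.append(f"switch->{mode.value}")
            current = mode
        trace.append(block)
    return trace

trace = run(["load_input", "check_secret", "emit_result"], partition)
```

Whether the labels in `partition` are computed at compile time or chosen dynamically, the resulting execution alternates between public (white-box) and private (black-box) regions exactly as in this trace.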

Although the idea of partitioning program execution is not new, little work has been done investigating the absolute security of partitioned programs. While it may be easy to guarantee that private data cannot be accessed by public blocks, an attacker may circumvent the issue by emulating the private blocks without ever entering a secure execution mode (a man-in-the-middle attack).

The goal of this work is to formally clarify the adversarial complexity of a man-in-the-middle attack on partitioned programs. We will then use this to describe programmatic and architectural methods which allow program partitioning to be done in a secure manner.

Approach

We model a partitioned program as a collection of black-box private instruction blocks which are executed intermittently at the request of white-box public instruction blocks.

Under our first assumption, we consider an architecture which minimally constrains the interaction between the public blocks and the private blocks. We assume that an attacker (a program executing within the public blocks) can manipulate each individual private block independently of all other private blocks. The only guarantee is that each private block can hide its own internal data from the public blocks.

Since the internal data of a private block is concealed, the only information an attacker can extract from running the private block is its effect on public data: the change in the values of public memory before and after execution, or, equivalently, the input-output pairs of each private block.
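The observable interface of a private block can be sketched as below. This is a hypothetical model (the `PrivateBlock` class and its arithmetic are stand-ins for a real secure computation): the internal secret never crosses the interface, so the attacker's knowledge is exactly the set of input-output pairs it has collected.

```python
# Hypothetical sketch: a private block as a black box. Internal state is
# hidden; the public side observes only the input-to-output mapping.
class PrivateBlock:
    def __init__(self, secret):
        self.__secret = secret  # internal data, never exposed

    def run(self, public_input):
        # Only this input-output relation is visible outside secure mode.
        return (public_input * 3 + self.__secret) % 256

block = PrivateBlock(secret=42)

# An attacker can accumulate input-output pairs by invoking the block,
# but cannot read the secret through the block's interface.
observed = {x: block.run(x) for x in range(4)}
```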

Given this model, we would like to describe the computational complexity required for an adversary to discover the underlying functionality of a private block. With this knowledge, the adversary could correctly mimic the private block when given new inputs. A weak security analysis would require the adversary to correctly emulate all possible input-output pairs, while a stronger analysis would require the adversary to correctly emulate only a single input-output pair.
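The man-in-the-middle emulation threat can be made concrete with a small sketch (the `true_private_block` function and the query set are hypothetical): the attacker replays a lookup table of observed pairs and never enters secure mode, succeeding precisely on the inputs it has already seen.

```python
# Hypothetical sketch of man-in-the-middle emulation: replace a private
# block with a lookup table of previously observed input-output pairs.
def true_private_block(x):
    return (x * 7 + 13) % 256  # stand-in for the real secure computation

# Phase 1: observe some pairs while the real private block runs.
observed = {x: true_private_block(x) for x in [0, 1, 2, 5]}

def emulator(x):
    """Answers correctly only on inputs the attacker has already seen."""
    if x not in observed:
        raise KeyError("unseen input: emulation fails here")
    return observed[x]

# Weak analysis: the attacker must answer *every* possible input.
# Strong analysis: answering *any one* replayed input already counts.
```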

Under a different assumption, we can consider models which further constrain the interaction between public and private blocks. Here we assume that an attacker cannot control which private block will be executed next, creating a dependency between all of the private blocks within the program. This should dramatically increase the computational complexity faced by an attacker attempting to emulate a specific private block. In this model the input-output pairs are still observable; however, an attacker cannot choose which private block they will be applied to.
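The ordering constraint can be sketched as a dispatcher (the `Sequencer` class is a hypothetical model, not a proposed hardware mechanism): the attacker may supply inputs, but the program-defined order decides which private block actually runs.

```python
# Hypothetical sketch: the architecture, not the attacker, decides which
# private block runs next, so input-output pairs cannot be aimed at a
# chosen block out of turn.
class Sequencer:
    def __init__(self, blocks, order):
        self._blocks = blocks  # name -> private block function
        self._order = order    # program-defined execution order
        self._pos = 0

    def invoke(self, requested_name, public_input):
        # The attacker's request is ignored; the next block in program
        # order executes regardless.
        name = self._order[self._pos % len(self._order)]
        self._pos += 1
        return name, self._blocks[name](public_input)

seq = Sequencer(
    blocks={"a": lambda x: x + 1, "b": lambda x: x * 2},
    order=["a", "b"],
)

# Even if the attacker asks for block "b" first, block "a" executes.
ran, out = seq.invoke("b", 10)
```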

Once we understand the boundaries of what an adversary can and cannot compromise, we are in a position to construct partitioned programs securely. This can be done in a user-directed fashion, where programmer-defined private blocks are checked at compile time to ensure a certain security level. It can also be done automatically, where a compiler mechanically partitions a program and manipulates the instructions according to a parameterizable security template. A hybrid of the two, however, is likely the most useful methodology.
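One flavor of the compile-time check mentioned above can be sketched as a simple leak rule (the variable names and the `check_block` helper are hypothetical; a real compiler pass would operate on intermediate representation, not variable-name sets): reject any partition in which a private block writes a private value to public memory.

```python
# Hypothetical sketch of a compile-time partition check: reject a private
# block that writes a private variable into attacker-visible public memory.
PRIVATE_VARS = {"key", "nonce"}  # illustrative set of private data

def check_block(public_writes):
    """Raise if any private variable would leak to public memory."""
    leaked = set(public_writes) & PRIVATE_VARS
    if leaked:
        raise ValueError(f"private data leaked to public memory: {sorted(leaked)}")
    return True
```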

Finally, while existing architectures have been designed to allow program partitioning, they have not necessarily been optimized for its particular security concerns. Architectural changes may exist that improve both the achievable security bounds and performance.

Research Support

This work was funded by Acer Inc., Delta Electronics Inc., HP Corp., NTT Inc., Nokia Research Center, and Philips Research under the MIT Project Oxygen partnership.

References

[1] David Lie, Chandramohan Thekkath, Mark Mitchell, Patrick Lincoln, Dan Boneh, John Mitchell, and Mark Horowitz. Architectural Support for Copy and Tamper Resistant Software. In Proceedings of the 9th Int'l Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX), pages 168-177, November 2000.

[2] G. Edward Suh, Charles W. O'Donnell, Ishan Sachdev, and Srinivas Devadas. Design and Implementation of the AEGIS Single-Chip Secure Processor Using Physical Random Functions. In Proceedings of the 32nd Annual Int'l Symposium on Computer Architecture, June 2005.
