Research Abstracts - 2006
Cache-Oblivious Data Structures

Erik D. Demaine

Why

The memory hierarchies of modern computers are becoming increasingly "steep," ranging from fast registers and on-chip cache down to relatively slow disks and networks. Because processor speeds are increasing more quickly than memory and disk speeds, the disparity between the levels of the memory hierarchy is growing. Today the access times of L1 cache and disk differ by approximately six orders of magnitude. Given such large variations, it is increasingly dangerous to design algorithms assuming a "flat" memory with uniform access times.

The distinguishing feature of multilevel memory hierarchies is that memory transfers are done in blocks in order to amortize the cost of a transfer. This amortization works only when each transfer contains many pieces of data to be used by the CPU. Thus, an important objective is to maintain locality of reference, meaning that memory accesses are clustered in time and space.

Data locality is easier to maintain in algorithmic problems having regular and/or static data, because the flow of data is predictable. Examples of such problems include matrix multiplication, fast Fourier transform, and LU decomposition. Even in sorting, there is a fixed and predetermined order in which the data must end up. We focus on the more challenging problem of maintaining data locality in irregular and dynamic problems, where by definition the data flow is continually changing and unpredictable, making it difficult to organize data locality a priori. Irregular and dynamic problems often require data-structure solutions, because data structures specialize in moving data efficiently throughout memory.

What

A recent direction in the design of algorithms and data structures that are efficient on a memory hierarchy is the notion of cache-obliviousness, introduced by Prof. Leiserson's group in CSAIL in 1999 [FLPR99].
Cache-oblivious algorithms perform well on a multilevel memory hierarchy without knowing any parameters of the hierarchy, only that a hierarchy exists. More precisely, such algorithms assume that there is a cache of size M with block size B, but are forbidden from actually exploiting the parameters B and M (except in the analysis). Equivalently, a single cache-oblivious algorithm is efficient on all memory hierarchies simultaneously. While such results might seem impossible, a recent body of work has developed cache-oblivious algorithms and data structures that perform as well or nearly as well as standard external-memory structures, which require knowledge of the cache/memory size and block-transfer size.

Progress

In collaboration with Stephen Alstrup (DIKU), Lars Arge (Duke), Michael Bender (Stony Brook), Richard Cole (NYU), Bryan Holland-Minkley (Duke), Martin Farach-Colton (Rutgers), Ian Munro (Waterloo), Theis Rauhe (DIKU), and Mikkel Thorup (AT&T), we have developed cache-oblivious solutions to several data-structural problems.
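To give a flavor of how an algorithm can exploit every level of the hierarchy without knowing B or M, here is a minimal sketch (not taken from this abstract) of the divide-and-conquer matrix transpose, the classic example in the cache-oblivious framework of [FLPR99]. The function name and the small-case cutoff BASE are illustrative choices, not part of the analysis:

```python
# Sketch of a cache-oblivious matrix transpose: recursively split the
# longer dimension in half.  At some recursion depth the subproblem fits
# in a cache of any size M and is loaded in O(size/B) block transfers,
# even though the code never mentions M or B.  BASE only caps recursion
# overhead; the asymptotic transfer bound holds for any constant cutoff.

BASE = 8  # illustrative small-case cutoff (an assumption, not from [FLPR99])

def transpose(A, B, r0, r1, c0, c1):
    """Write the transpose of A[r0:r1][c0:c1] into B[c0:c1][r0:r1]."""
    rows, cols = r1 - r0, c1 - c0
    if rows <= BASE and cols <= BASE:
        # Base case: copy the small block directly.
        for i in range(r0, r1):
            for j in range(c0, c1):
                B[j][i] = A[i][j]
    elif rows >= cols:
        # Split the row range in half and recurse on each half.
        mid = (r0 + r1) // 2
        transpose(A, B, r0, mid, c0, c1)
        transpose(A, B, mid, r1, c0, c1)
    else:
        # Split the column range in half and recurse on each half.
        mid = (c0 + c1) // 2
        transpose(A, B, r0, r1, c0, mid)
        transpose(A, B, r0, r1, mid, c1)
```

The same recursive-halving pattern, paired with memory layouts that mirror the recursion (such as the van Emde Boas layout for search trees), underlies many of the data structures discussed below.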
Future

Many data-structural problems remain to be explored in the cache-oblivious setting, and this remains an active area of research.

Research Support

This research was supported by NSF ITR grant EIA-0112849.

References

[FLPR99] Matteo Frigo, Charles E. Leiserson, Harald Prokop, and Sridhar Ramachandran. Cache-oblivious algorithms. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science (FOCS), pages 285-297, October 1999.