
Hierarchical Scheduling in Parallel and Cluster Systems 2003 Edition
Contributor(s): Dandamudi, Sivarama (Author)
ISBN: 0306477610     ISBN-13: 9780306477614
Publisher: Springer
OUR PRICE: $161.49
Product Type: Hardcover - Other Formats
Published: June 2003
Annotation: Parallel job scheduling has been studied extensively over the last two decades. The initial focus of these studies was on small UMA architectures; more recent interest is in cluster systems. A job scheduling policy that works effectively for small UMA systems might not work for large distributed-memory systems with thousands of processors. Scalability is therefore an important characteristic of any scheduling policy intended for large distributed-memory systems. In this book, the author presents a hierarchical scheduling policy that scales well with system size. The policy is based on the hierarchical task queue organization he introduced to organize the system run queue.

The book is divided into four parts. Part I gives an introduction to parallel and cluster systems and provides an overview of the parallel job scheduling policies proposed in the literature. Part II gives details of the hierarchical task queue organization and its performance. The author shows that this organization scales well, which makes it suitable for systems with hundreds to thousands of processors. In Part III he uses this task queue organization as the basis for hierarchical scheduling policies for shared-memory and distributed-memory parallel systems as well as cluster systems. This part demonstrates that the hierarchical policy provides substantial performance advantages over other policies proposed in the literature. Part IV concludes the book with a brief summary and concluding remarks.
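The general idea behind a hierarchical task queue can be sketched in a few lines. This is not the author's implementation: the class name, the two-level structure, and the `batch` refill parameter are illustrative assumptions. The sketch only shows the principle the annotation describes: tasks enter a shared root queue and are moved in batches to per-group local queues, so most dequeues avoid contention on the root.

```python
from collections import deque

class HierarchicalQueue:
    """Illustrative two-level task queue (hypothetical names/parameters).

    Tasks are submitted to a central root queue and pulled through
    per-group local queues; a local queue refills from the root in
    batches, amortizing contention on the shared root.
    """

    def __init__(self, n_groups, batch=2):
        self.central = deque()                          # shared root queue
        self.local = [deque() for _ in range(n_groups)]  # per-group queues
        self.batch = batch                               # tasks moved per refill

    def submit(self, task):
        # New work always enters at the root.
        self.central.append(task)

    def get(self, group):
        # Fast path: serve from the group's local queue (no root access).
        if self.local[group]:
            return self.local[group].popleft()
        # Slow path: refill the local queue from the root in one batch.
        for _ in range(self.batch):
            if not self.central:
                break
            self.local[group].append(self.central.popleft())
        return self.local[group].popleft() if self.local[group] else None
```

In a real multiprocessor the root and local queues would be protected by locks (or be lock-free), and the tree would typically have more than two levels; the point here is only that the batch transfer keeps traffic at the shared root proportional to `1/batch` of the dequeue rate.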

Additional Information
BISAC Categories:
- Computers | Machine Theory
- Computers | Hardware - General
- Computers | Operating Systems - General
Dewey: 004.015
LCCN: 2003047450
Series: Series in Computer Science
Physical Information: 0.86" H x 6.8" W x 8.72" (1.26 lbs) 251 pages
 
Descriptions, Reviews, Etc.
Publisher Description:
Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high-performance computing. These architectures span a wide variety of system types.

At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory, and are becoming commonplace in high-performance graphics workstations. They are called uniform memory access (UMA) multiprocessors because they provide uniform memory access to all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations.

To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems use, for example, a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture.

Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.