Event Abstract

The new landscape of parallel computer architecture

  • 1 NERSC Division, Lawrence Berkeley National Laboratory, United States

The past few years have seen a sea change in computer architecture that will impact every facet of our society, as every electronic device from cell phone to supercomputer will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, or even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per unit of chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing.
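The performance-per-watt advantage of many slow cores over a few fast ones can be illustrated with a back-of-the-envelope model (a sketch for illustration, not taken from the talk): dynamic power scales roughly with the cube of clock frequency (P = C·V²·f, with supply voltage scaling approximately with frequency), while per-core throughput scales only linearly, so trading one fast core for several slower ones preserves aggregate throughput at a fraction of the power.

```python
# Illustrative first-order model (assumption, not from the abstract):
# dynamic power ~ f**3 (P = C * V^2 * f, with V scaling roughly with f),
# throughput ~ f per core, and perfectly parallelizable work.

def power(freq_ghz, cores=1):
    """Relative dynamic power: cores * f^3, normalized to a 1 GHz core."""
    return cores * freq_ghz ** 3

def throughput(freq_ghz, cores=1):
    """Relative throughput: cores * f (assumes ideal parallel scaling)."""
    return cores * freq_ghz

# One 3 GHz core vs. six 0.5 GHz cores: identical aggregate throughput.
fast_tp, fast_pw = throughput(3.0), power(3.0)
slow_tp, slow_pw = throughput(0.5, cores=6), power(0.5, cores=6)

assert fast_tp == slow_tp == 3.0
print(f"one fast core:  throughput={fast_tp}, power={fast_pw:.2f}")
print(f"six slow cores: throughput={slow_tp}, power={slow_pw:.2f}")
print(f"power ratio: {fast_pw / slow_pw:.0f}x")  # 36x under this model
```

Under these assumptions the manycore configuration delivers the same work at 1/36 the power; real designs fall well short of this ideal (leakage, memory bandwidth, imperfect parallelism), but the cubic-versus-linear asymmetry is what drives the industry trend the abstract describes.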

Recent trends in the microprocessor industry have important ramifications for the design of the next generation of High Performance Computing (HPC) systems as we look beyond the petaflop scale. The need to switch to a geometric growth path in system concurrency is forcing a reconsideration of interconnect design, memory balance, and I/O system design that will have dramatic consequences for future HPC applications and algorithms. The required reengineering of existing application codes will likely be as dramatic as the migration from vector HPC systems to Massively Parallel Processors (MPPs) that occurred in the early 1990s. That comprehensive code reengineering took nearly a decade, so there are serious concerns about undertaking yet another major transition in our software infrastructure.

This presentation explores the fundamental device constraints that have led to the recent stall in CPU clock frequencies. It examines whether multicore (or manycore) is in fact a reasonable response to the underlying constraints on future IC designs. It then explores the ramifications of these changes for computer architecture, system architecture, and programming models in future HPC systems. Finally, the talk examines the power-efficiency benefits of tailoring computer designs to the problem requirements. We show a design study of a purpose-built system for climate modelling that could achieve power-efficiency and performance improvements hundreds of times greater than those obtained by following conventional industry trends. The same design approach could be applied to simulations of complex neuronal systems.

Conference: Neuroinformatics 2008, Stockholm, Sweden, 7 Sep - 9 Sep, 2008.

Presentation Type: Oral Presentation

Topic: Workshop

Citation: Shalf J (2008). The new landscape of parallel computer architecture. Front. Neuroinform. Conference Abstract: Neuroinformatics 2008. doi: 10.3389/conf.neuro.11.2008.01.151

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 28 Jul 2008; Published Online: 28 Jul 2008.

* Correspondence: John Shalf, NERSC Division, Lawrence Berkeley National Laboratory, Berkeley, United States, jshalf@lbl.gov