Program at a Glance: Workshops, Tutorials, Papers, Keynotes, Posters/SRC

Tuesday, May 31st
7:30am    Breakfast (Foyer)
8:00am    Charm++ (Santa Rita); POET (Rincon); WOLFHPC (Executive Boardroom); WEST (Coronado); ROSS (Salon F)
10:00am   Break
10:30am   Charm++, POET, WOLFHPC, WEST, and ROSS continue
noon      Lunch (Flying V Restaurant)
1:30pm    InfiniBand and HSE (Santa Rita); WOLFHPC, WEST, and ROSS continue
3:00pm    Break
3:30pm    InfiniBand and HSE, WOLFHPC, WEST, and ROSS continue

Wednesday, June 1st
7:30am    Breakfast (Foyer)
8:30am    Opening Remarks; Keynote I: Sarita Adve (Salon A)
10:00am   Break
10:30am   Papers 1: Best Paper Candidates (Salon A) [Nvidia]
noon      Lunch (Flying V Restaurant) [Microsoft]; SRC Posters Presentation (Salon C)
1:30pm    Papers 2: Transactional Memory (Salon A)
3:00pm    Break
3:30pm    Papers 3a: Software Tools (Salon A); Papers 3b: Non-Volatile Memory Systems (Salon B)
5:00pm    Papers 4a: Novel Hardware/Software Approaches (Salon A); Papers 4b: Power (Salon B)
7:00pm    Poster Reception (Salon C) [AMD]

Thursday, June 2nd
7:30am    Breakfast (Foyer)
9:00am    Keynote II: Steve Hammond (Salon A)
10:00am   Break
10:30am   Papers 5: Performance and Resilience for Solver Algorithms (Salon A)
noon      Lunch (Flying V Restaurant) [Isilon]
1:30pm    Papers 6: Model-based Techniques (Salon A); SRC Finalist Presentations (Salon B)

Friday, June 3rd
7:30am    Breakfast (Foyer)
8:30am    Keynote III: William D. Gropp; Awards Presentation (Salon A)
10:00am   Break
10:30am   Papers 7: Programming Models (Salon A)
noon      Lunch (Flying V Restaurant) [Intel/AMD]
1:30pm    Papers 8a: Accelerator-Based Mathematics (Salon A); Papers 8b: Caching (Salon B)
3:00pm    Break
3:30pm    Papers 9a: Applications (Salon A); Papers 9b: Innovative Architecture Solutions (Salon B)

Saturday, June 4th
7:30am    Breakfast (Foyer)
8:00am    MPI/SMPSs (Coronado); CACHES (Salon E); WHIST (Salon D)
10:00am   Break
10:30am   MPI/SMPSs, CACHES, and WHIST continue
noon      Lunch (Flying V Restaurant)
1:30pm    MPI/SMPSs (Coronado); Optimization and Tuning (Santa Rita); CACHES (Salon E); WHIST (Salon D)
3:00pm    Break
3:30pm    MPI/SMPSs, Optimization and Tuning, CACHES, and WHIST continue

Keynote Address I

Rethinking Shared-Memory Languages and Hardware
Sarita Adve
University of Illinois at Urbana-Champaign

The era of parallel computing for the masses is here, but writing correct parallel programs remains difficult. For many domains, shared-memory remains an attractive programming model. The memory model, which specifies the meaning of shared variables, is at the heart of this programming model. Unfortunately, it has involved a tradeoff between programmability and performance, and has arguably been one of the most challenging and contentious areas in both hardware architecture and programming language specification. Recent broad community-scale efforts have finally led to a convergence in this debate, with popular languages such as Java and C++ and most hardware vendors publishing compatible memory model specifications. Although this convergence is a dramatic improvement, it has exposed fundamental shortcomings in current popular languages and systems that thwart safe and efficient parallel computing.
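
To make the problem concrete, here is the classic store-buffering illustration of why the meaning of shared variables needs a precise specification; it is a textbook example in this area, not an excerpt from the talk. Under sequential consistency the outcome r1 == 0 and r2 == 0 is impossible, yet with plain (non-atomic) variables both compilers and hardware may reorder the accesses, and under the C++11 model the program is simply a data race:

    #include <thread>

    int x = 0, y = 0;   // shared, unsynchronized
    int r1, r2;         // each written by only one thread

    void t1() { x = 1; r1 = y; }  // store x, then load y
    void t2() { y = 1; r2 = x; }  // store y, then load x

    int main() {
        std::thread a(t1), b(t2);
        a.join(); b.join();
        // The unsynchronized accesses to x and y race, so C++11 gives
        // this program undefined behavior; in practice r1 == r2 == 0
        // is routinely observed on hardware with store buffers.
        return 0;
    }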

I will discuss the path to the above convergence, the hard lessons learned, and their implications. A cornerstone of this convergence has been the view that the memory model should be a contract between the programmer and the system: if the programmer writes disciplined (data-race-free) programs, the system will provide high programmability (sequential consistency) and performance. I will discuss why this view is the best we can do with current popular languages, and why it is inadequate moving forward, requiring us to rethink popular parallel languages and hardware. In particular, I will argue that (1) parallel languages should not only promote high-level disciplined models but also enforce the discipline, and (2) for scalable and efficient performance, hardware should be co-designed to take advantage of and support such disciplined models. I will describe the Deterministic Parallel Java language and DeNovo hardware projects at Illinois as examples of such an approach.
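
As a minimal sketch of the data-race-free contract described above, using C++11 atomics as the disciplining mechanism (the talk itself goes further, to languages that enforce the discipline): once the racing accesses in the earlier example are declared atomic, the program is data-race-free, the implementation must make it behave sequentially consistently, and r1 == 0 with r2 == 0 can no longer occur.

    #include <atomic>
    #include <thread>

    std::atomic<int> x{0}, y{0};  // conflicting accesses are now atomic
    int r1, r2;

    void t1() { x.store(1); r1 = y.load(); }  // seq_cst by default
    void t2() { y.store(1); r2 = x.load(); }

    int main() {
        std::thread a(t1), b(t2);
        a.join(); b.join();
        // Data-race-free program: the C++11 memory model guarantees
        // sequentially consistent execution, so at least one of
        // r1 and r2 must be 1.
        return 0;
    }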

This talk draws on collaborations with many colleagues over the last two decades on memory models (in particular, a CACM'10 paper with Hans-J. Boehm) and with faculty, researchers, and students from the DPJ and DeNovo projects.

Biography

Sarita Adve is Professor of Computer Science at the University of Illinois at Urbana-Champaign. Her research interests are in computer architecture and systems, parallel computing, and power- and reliability-aware systems. Most recently, she co-developed the memory models for the C++ and Java programming languages, based on her early work on data-race-free models, and co-invented the concept of lifetime-reliability-aware processors and dynamic reliability management. She was named an ACM Fellow in 2010, received the ACM SIGARCH Maurice Wilkes Award in 2008, was named a University Scholar by the University of Illinois in 2004, and received an Alfred P. Sloan Research Fellowship in 1998. She serves on the boards of the Computing Research Association and ACM SIGARCH. She received her Ph.D. in Computer Science from the University of Wisconsin-Madison in 1993.

Keynote Address II

Challenges and Opportunities in Renewable Energy and Energy Efficiency
Steve Hammond
National Renewable Energy Laboratory

The National Renewable Energy Laboratory (NREL) in Golden, Colorado is the nation's premier laboratory for renewable energy and energy efficiency research. In this talk we will give a brief overview of NREL and then focus on some of the challenges and opportunities in meeting future global energy needs. Computational modeling, high performance computing, data management, and visual informatics play key roles in advancing our fundamental understanding of processes and systems at temporal and spatial scales that evade direct observation, and in helping meet U.S. goals for energy efficiency and clean energy production. The discussion will cover new, highly energy-efficient buildings and the social behaviors that shape energy use; fundamental understanding of plants and proteins that can lead to lower-cost renewable fuels; novel computational chemistry approaches to low-cost photovoltaic materials; and the computational fluid dynamics challenges of simulating complex behavior within and between large-scale wind farm deployments, including their potential impacts on local and regional climate.

Biography

Steve Hammond is the Director of Computational Science at the National Renewable Energy Laboratory (NREL) in Golden, Colorado, where he leads the laboratory's efforts in high performance computing and energy-efficient data centers. Prior to joining NREL in 2002, he spent ten years at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, leading efforts to develop efficient massively parallel climate models. Before NCAR, he did postdoctoral work at the European Center for Advanced Scientific Computing in Toulouse, France; was a Research Associate at the Research Institute for Advanced Computer Science at NASA Ames Research Center, Moffett Field, California; and was a Computer Scientist at GE's Corporate Research and Development Center in Schenectady, New York.

Keynote Address III

Performance Modeling as the Key to Extreme Scale Computing
William D. Gropp
University of Illinois at Urbana-Champaign

Parallel computing is primarily about achieving greater performance than is possible without using parallelism. Especially at the high end, where systems cost tens to hundreds of millions of dollars, making the best use of these valuable and scarce systems is important. Yet few application teams really understand how well their codes perform relative to the performance achievable on the system. The Blue Waters system, currently being installed at the University of Illinois, will offer sustained performance in excess of 1 PetaFLOPS for many applications. Achieving this level of performance, however, requires careful attention to many details, as the system has many features that must be exploited to get the best performance. To address this problem, the Blue Waters project is exploring the use of performance models that provide enough information to guide the development and tuning of applications, ranging from improving the performance of small loops to identifying the need for new algorithms. Using Blue Waters as an example of an extreme scale system, this talk will describe some of the challenges faced by applications at this scale, the role that performance modeling can play in preparing applications for extreme scale, and some ways in which performance modeling has guided performance enhancements for those applications.
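
The abstract does not name a specific modeling technique, but a minimal sketch of the kind of bound such models provide is the roofline-style estimate below, written in C++ with made-up hardware numbers (not Blue Waters figures): a kernel's attainable rate is capped by the lesser of peak compute throughput and memory bandwidth times arithmetic intensity.

    #include <algorithm>
    #include <cstdio>

    // Roofline-style bound: attainable GFLOP/s is limited by either
    // peak compute or by memory bandwidth times arithmetic intensity.
    double roofline_gflops(double peak_gflops, double bw_gb_per_s,
                           double flops_per_byte) {
        return std::min(peak_gflops, bw_gb_per_s * flops_per_byte);
    }

    int main() {
        const double peak = 100.0;  // hypothetical node: 100 GFLOP/s peak
        const double bw   = 25.0;   // hypothetical: 25 GB/s to memory
        // daxpy (y = a*x + y): 2 flops per 24 bytes moved
        // (load x, load y, store y), about 0.083 flops/byte.
        const double intensity = 2.0 / 24.0;
        std::printf("daxpy bound: %.2f GFLOP/s (peak %.0f)\n",
                    roofline_gflops(peak, bw, intensity), peak);
        return 0;
    }

Comparing a measured rate against such a bound tells a developer whether a loop is memory-bound (tune data movement) or compute-bound (tune instruction throughput), which is the sort of guidance the talk attributes to performance models.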