
A preview of ISC'12.

2012 will be the year in which more and more supercomputer systems break the petaflops barrier while running real applications. This also means that the entry level for the 39th edition of the TOP500 List (LINPACK) will be higher; we expect that at least 60 teraflops will be required to enter the June list.

In the past, supercomputers were mostly used for running scientific applications, but a new wave is washing over them: data mining and big data analysis. Scientific applications will clearly remain the main workload on supercomputers, but nowadays they are also used for knowledge discovery from tremendous amounts of data (petabytes), be it social data from Facebook or particle tracking at the LHC at CERN in Switzerland, to name just a few examples.


A SHORT OVERVIEW OF SELECTED ISC'12 HIGHLIGHTS

HPC Architectures

The session "Heterogeneous Architectures and Beyond" is dedicated to the question, which SC-design is the favorite for future HPC systems? Five internationally well-known SC and chip manufacturers will present their design goals.

The petascale era also makes it necessary to rethink processor development, architectural design and applications. The session "Alternative Processors, Architectures & Multidisciplinary Applications" will address all three of these topics with corresponding presentations.

And, as already mentioned in the introduction, the new wave of big data means additional new HPC systems can be seen on the horizon. The session "Supercomputer Architectures for Data Intensive Applications" will give Cray's CEO & President Peter Ungaro and the legendary Steven Wallach from Convey the opportunity to present their ideas and concepts.

Memory Systems/Interconnects/File Systems

With the progress of supercomputers into the petaflops era, the need for extremely fast and highly available internal memory and external storage media (e.g. SSDs) has become greater than ever. The session "Large Memory Systems and Challenges" aims to clarify this two-fold problem (memory and back-end storage) and will present practical solutions from industry alongside the corresponding requirements from application programmers.

It is a well-known fact that high performance computing can only be realized with ultra-fast networks, especially inside each supercomputer. This year, we have decided to split the networking topics into two sessions for this very important area: one looking at networking/interconnects from the point of view of technology products, the other from the point of view of technology limits.

And last but not least, the session "File Systems" will present the pros and cons of various existing file systems. In addition, there will be presentations on file system research required for the extreme data volumes of petascale data-intensive computing.

Programming Tools for Scalable Application Development

How will parallel computing evolve in the coming years?

The session "Application Performance--Lessons learnt from Petascale Computation" will address the interaction of application performance and hardware design of modern multi-/many core based supercomputers. A thorough analysis of the issues involved in modern designs will guide the discussion of the requirements on application development for Exascale machines.

And to round off this theme, we have organized a panel, moderated by Jack Dongarra, in which world-renowned software experts will discuss "Programming Models in the Years to Come."

It is obvious that, besides LINPACK and the TOP500, there are various other efforts to measure and rank supercomputer performance, e.g. the GRAPH500 and GREEN500 lists. We have selected this topic for a session called "New Developments for Ranking Supercomputers." In particular LINPACK, the yardstick for the successful TOP500 lists for more than 17 years now, has to be adapted to the new petascale situation, because the execution time of extremely large LINPACK runs now reaches tens of hours. One presentation in this session also demonstrates that the above-mentioned rankings have a strong impact on tracking the effects of technology, architecture and energy on supercomputers.
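
To illustrate why large LINPACK runs take so long: the HPL benchmark performs roughly 2/3*N^3 floating-point operations for an N x N problem, so a problem size big enough to fill a petascale machine's memory quickly pushes the wall-clock time into tens of hours. The following minimal sketch is purely illustrative; the problem size and sustained performance figures are assumptions, not data from any particular TOP500 submission.

    # Back-of-the-envelope estimate of an HPL (LINPACK) run time.
    # HPL performs roughly 2/3*N^3 + 2*N^2 floating-point operations for an
    # N x N problem; the problem size and sustained performance below are
    # assumed values for illustration only.

    def hpl_runtime_hours(n, sustained_flops):
        """Estimated wall-clock hours for an HPL run of problem size n."""
        operations = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
        return operations / sustained_flops / 3600.0

    if __name__ == "__main__":
        n = 12_000_000           # assumed problem size for a petascale machine
        sustained = 10.0e15      # assumed sustained performance: 10 petaflops
        print(f"~{hpl_runtime_hours(n, sustained):.0f} hours")  # roughly 32 hours

With these assumed figures, a single run works out to roughly 32 hours, consistent with the tens-of-hours execution times mentioned above.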

HPC Applications

HPC applications have always played a dominant role at our conference, and ISC'12 is no exception.

Life science is becoming a strategic discipline at the frontier between molecular biology and computer science, impacting medicine, neurology and biotechnology, as well as society. The in-depth session "Life Sciences" will provide a forum for discussion of the latest research in developing high-performance computing solutions to problems arising from neural networks and molecular dynamics simulations.

The session "Novel HPC Simulations in Research and Industry" will present various applications from cosmology, aircraft design and materials science. For decades, all three fields have been, and remain, among the major drivers of advances in HPC.

ISC'12 reflects the hype around big data in the special session "Big Data-Analysis for Extreme Scale Research." Here, attendees will find case studies from the LHC at CERN as well as graph-based approaches to extreme-scale data analysis.

The session "Computational Chemistry" will focus on chemistry applications that are running on moderate many-core architectures up to Ab Initio Simulations on really very large HPC systems, i.e. the Blue Gene platforms.

Since the advent of the first "supercomputers," the simulation of the atmosphere and oceans has played a mutually beneficial role in the development of ever more powerful computing capabilities. The session "Climate Research" will concentrate on climate modeling and coupled models for atmosphere/ocean simulations.

The session "Energy & HPC" concentrates on new Supercomputer applications in the enerqy sector. There will be presentations about Biomass-Simulations, Greener Coal Utilization and Renewable Energy Simulations.

The democratization of HPC is in full swing! Today, the benefits of numerical simulation and visualization based on HPC solutions are available to a much larger user audience, thanks to the availability of adequate compute power for many applications at lower cost. The goal of the session "Are Small and Medium Enterprises (SMEs) Ready for HPC?" is to bring together the whole ecosystem around "middle market" HPC (small to medium sized installations) and, especially, to educate users on what is available in terms of solutions. We also want to trigger dialogue between users, technology companies, hardware vendors, software vendors and service providers.


By ISC Executive Consultant Dr. Horst Gietl
