SC is the International Conference for
High Performance Computing, Networking, Storage and Analysis





Masterworks

Masterworks consists of invited presentations that highlight innovative ways of applying high performance computing, networking, and storage technologies to the world's most challenging problems. This year a star-studded lineup is in store for SC09 attendees.

One of the highlights of this year's Masterworks is a session that will focus on some of the world's largest, most complicated and most relentlessly demanding computer architectures. No Gordon Bell winners here and no Top500 entries - in fact, these architectures aren't even in a datacenter as we traditionally think of it. We refer here, of course, to Google and Facebook, two archetypes of the service-oriented, or utility, computing architectures that may have already turned the traditional notion of a datacenter on its head. Come to this session on Thursday at 1:30pm and find out why warehouse-scale machines and systems with 200 million users are, indeed, relevant to you and your work.
See High Performance at Massive Scale

This year Masterworks is also focusing on SC09 thrust areas, with two sessions on Sustainability and several on Bio-Computing.

Speaking of Sustainability: today, more than ever, solutions to our energy challenges rely on complicated modeling processes that require supercomputing. At SC09 Masterworks you'll see this in traditional areas such as seismic analysis for finding petroleum and reservoir fluid-flow modeling, but you'll also learn how HPC is playing a vital role in scaling up wind power generation, in the kickoff session Tuesday morning that will cover everything from turbine design to wind farm siting and mesoscale weather models.
See Future Energy Enabled by HPC

And what will Exascale computing mean for global climate modeling? Two interrelated talks in a session on Thursday morning will explore both the hardware and software aspects of this important topic and, most importantly, how the development of one is driving the other.
See Towards Climate Modeling in the ExaFlop Era

On the Bio-Computing front, we know that leading-edge high performance computing has already initiated a race to achieve a predictive, systems-level understanding of complex biological systems. Masterworks09 will present some of the most exciting results in this fascinating area. For example, a session on multi-scale biophysics on Wednesday at 3:30pm will explain how molecular simulation is helping to fight Alzheimer's Disease and answer a global "911" call by shaping a pharmacological strategy against the swine flu virus.
See Multiscale Simulations in Bioscience

How much does your doctor rely on supercomputing for your treatment? You might be surprised to know! A Masterworks session on Wednesday afternoon will cover HPC in Modern Medicine, examining transformative efforts in adapting Grid technology to healthcare as well as applying HPC-level simulation to surgical decisions.
See HPC in Modern Medicine

Like all fields, biocomputing has witnessed a veritable data explosion, with more instruments and faster assays pumping out ever-larger volumes of new kinds of information. How do we store, manage, share and analyze it? And how can we use these data to deduce how natural selection shaped who we are at the molecular level? A Masterworks session on Tuesday afternoon will reveal the answers.
See Data Challenges in Genome Analysis

The Finite Element Method has been an attractive choice for solving partial differential equations over complex domains such as cars and airplanes since the earliest days of HPC. Come to a Masterworks session on Tuesday afternoon to find out how researchers are now applying this time-tested methodology to studies of the human body, both microscopically for bones and on larger scales for entire skeletons and organs.
See Finite Elements and Your Body

Speaking of bones, scalable parallel solvers and algorithms are undoubtedly the "backbone" of all supercomputing studies. The final SC09 Masterworks session, Thursday afternoon, will highlight recent efforts to improve solver technology in two exciting areas: the electrophysiology of the heart, and flows in complex geometries as diverse as blood vessels and porous media, simulated with the Lattice Boltzmann method on heterogeneous multicore processors such as the IBM Cell and on massively parallel systems with thousands of cores.
See Scalable Algorithms and Applications

Join us for the best Masterworks program yet! (And don't forget about the room change on Thursday.)

Details of the SC09 Masterworks lineup are below.

See the Schedule for all SC Events

Questions: masterworks@info.supercomputing.org

Session: Future Energy Enabled by HPC



Tuesday November 17


10:30AM - 12:00PM
Room: PB253-254-257-258




HPC and the Challenge of Achieving a Twenty-fold Increase in Wind Energy
Steve M. Legensky (Intelligent Light)

Steve M. Legensky is the founder and general manager of Intelligent Light, a company that has delivered products and services based on visualization technology since 1984. He attended Stevens Institute of Technology (Hoboken, NJ) and received a BE in electrical engineering (1977) and an MS in mathematics (1979). While at Stevens, he helped to establish and then manage the NSF-funded Undergraduate Computer Graphics Facility, an innovative effort that incorporated interactive computer graphics into the engineering curriculum. Legensky then entered industry, working in real-time image generation for flight simulation. In 1984, he founded Intelligent Light, which has evolved from producing award-winning 3D animations to launching a set of 3D rendering and animation solutions, to the current FieldView™ product line of post-processing and visualization tools for computational fluid dynamics (CFD). Steve is an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA) and has published and presented for AIAA, IEEE, ACM/SIGGRAPH and IDC.

Abstract: Wind power generation has made tremendous strides over the past twenty years. Progress in turbine geometries, electrical machinery and gearboxes has enabled machine size to grow from 50 kW to 5 MW. Site selection has also advanced, permitting wind farms to be located where strong, reliable winds can be expected. However, wind farms today are underperforming on predicted cost of energy by an average of 10%, and operating expenses remain high. The government's Advanced Energy Initiative set a goal of wind meeting 20% of US electricity needs by 2030, a twenty-fold scale-up from today's capability. Meeting this goal requires solving these performance problems, which in turn demands new tools to design machines and to site them. Analysis and system-level optimization of the machines, wind farm location and configuration, coupled with accurate meso- and micro-scale weather modeling, will need to be developed and validated. This unsteady, turbulent, multi-scale modeling will only be possible through the use of large-scale HPC resources.
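As a back-of-envelope illustration of the scale-up the abstract describes (not material from the talk), the sketch below applies the standard actuator-disc power estimate P = ½ρAv³Cp; the rotor sizes, power coefficient and site wind speed are assumptions chosen only to show why rotor diameter dominates the jump from 50 kW-class to multi-megawatt machines.

```python
# Illustrative only: the standard actuator-disc estimate of turbine power,
# P = 0.5 * rho * A * v**3 * Cp. Values below are assumptions, not from the talk.
import math

def turbine_power_watts(rotor_diameter_m, wind_speed_ms, cp=0.45, rho=1.225):
    """Mechanical power captured by an idealized rotor disc."""
    area = math.pi * (rotor_diameter_m / 2.0) ** 2      # swept area, m^2
    return 0.5 * rho * area * wind_speed_ms ** 3 * cp   # watts

# A small 15 m rotor vs. a modern ~125 m rotor at a 10 m/s site:
print(f"{turbine_power_watts(15, 10) / 1e3:.0f} kW")
print(f"{turbine_power_watts(125, 10) / 1e6:.1f} MW")
```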

The Outlook for Energy: Enabled with Supercomputing
John Kuzan (ExxonMobil)

John began his career with ExxonMobil in 1990 as a reservoir engineer working at the Upstream Research Center in special core analysis, which means making fluid mechanics measurements on rock samples from petroleum reservoirs. He has held a variety of positions within ExxonMobil that include leading one of the teams that developed ExxonMobil's next-generation reservoir simulator -- known as EMpower -- and supervising various reservoir engineering sections. John also served as the Transition Manager for ExxonMobil's partnership with the Abu Dhabi National Oil Company in the Zakum field. John is currently the Research Manager for reservoir modeling and has a portfolio that includes merging geoscience and engineering approaches to modeling. John is a Chemical Engineer by training and his PhD is from the University of Illinois, where his primary work was in turbulent fluid mechanics. He retired from the U.S. Army in 2002 with 17 years in the Reserve and 4 years of active duty. He held Company and Battalion command positions and spent a significant period at the Ballistic Research Laboratory working in supersonic flow and rocket dynamics.

Abstract: The presentation reviews ExxonMobil's global energy outlook through 2030. The projections indicate that, at that time, the world's population will be ~8 billion, roughly 25% higher than today. Along with this population rise will be continuing economic growth. This combination of population and economic growth will increase energy demand by over 50% versus 2000. As demand rises, the pace of technology improvement is likely to accelerate, reflecting the development and deployment of new technologies for obtaining energy, including finding and producing oil and natural gas. Effective technology solutions to the energy challenges before us will naturally rely on modeling complicated processes, and that in turn will lead to a strong need for supercomputing. Two examples of the supercomputing need in the oil business, seismic approaches for finding petroleum and petroleum reservoir fluid-flow modeling (also known as "reservoir simulation"), will be discussed in the presentation.

Session: Data Challenges in Genome Analysis



Tuesday November 17


1:30PM - 3:00PM
Room: PB253-254-257-258




Big Data and Biology: The Implications of Petascale Science
Deepak Singh (Amazon)

Deepak Singh is a business development manager at Amazon Web Services (AWS), where he works with customers interested in carrying out large-scale computing, scientific research, and data analytics on Amazon EC2, which provides resizable compute capacity in the Amazon cloud. He is also actively involved in Public Data Sets on AWS, a program that provides analysts and researchers easy access to public data sets and computational resources. Prior to AWS, Deepak was a strategist at Rosetta Biosoftware, a business unit of Rosetta Inpharmatics, a subsidiary of Merck & Co. Deepak came to Rosetta Biosoftware from Accelrys, where he was first the product manager for the life science simulation portfolio and subsequently Director of the Accelrys NanoBiology Initiative, an effort to investigate multiscale modeling technologies to model biological applications of nanosystems. He started his career as a scientific programmer at GeneFormatics, where he was responsible for the maintenance and implementation of algorithms for protein structure and function prediction. He has a Ph.D. in Physical Chemistry from Syracuse University, where he used electronic structure theory and molecular dynamics simulations to study the structure and photophysics of retinal opsins.

Abstract: The past fifteen years have seen a rapid change in how we practice biology. High-throughput instruments and assays that give us access to new kinds of data, and allow us to probe for answers to a variety of questions, are revealing new insights into biological systems and a variety of diseases. These instruments also pump out very large volumes of data at very high throughputs, making biology an increasingly data-intensive science and fundamentally challenging our traditional approaches to storing, managing, sharing and analyzing data while maintaining a meaningful biological context. This talk will discuss some of the core needs and challenges of big data as genome centers and individual labs churn out data at ever-increasing rates. We will also discuss how we can leverage new paradigms and trends in distributed computing infrastructure and utility models that allow us to manage and analyze big data at scale.
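The abstract's point about analyzing big data on distributed or utility-style infrastructure comes down to a scatter/gather pattern: split the data, analyze chunks in parallel, combine the results. The sketch below is a minimal, single-machine illustration of that pattern using Python's multiprocessing; the GC-content metric and the toy sequences are hypothetical stand-ins, not anything from the talk or from AWS services.

```python
# A minimal sketch of the scatter/gather pattern behind utility-style data
# analysis: split a large dataset into chunks, analyze chunks in parallel,
# and combine results. The GC-content metric and the tiny sequences are
# assumptions for illustration only.
from multiprocessing import Pool

def gc_fraction(sequence: str) -> float:
    """Fraction of G/C bases in one DNA sequence chunk."""
    sequence = sequence.upper()
    gc = sum(sequence.count(b) for b in "GC")
    return gc / max(len(sequence), 1)

def analyze_chunks(chunks, workers=4):
    with Pool(workers) as pool:
        return pool.map(gc_fraction, chunks)

if __name__ == "__main__":
    chunks = ["ACGTACGGCC", "TTTTAAAACG", "GGGCCCATAT"]  # stand-ins for real data
    per_chunk = analyze_chunks(chunks)
    print(per_chunk, f"mean GC ~ {sum(per_chunk) / len(per_chunk):.2f}")
```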

The Supercomputing Challenge to Decode the Evolution and Diversity of Our Genomes
David Haussler (University of California, Santa Cruz)

David Haussler develops new statistical and algorithmic methods to explore the molecular evolution of the human genome, integrating cross-species comparative and high-throughput genomics data to study gene structure, function, and regulation. He focuses on computational analysis and classification of DNA, RNA, and protein sequences. He leads the Genome Bioinformatics Group, which participates in the public consortium efforts to produce, assemble, and annotate the first mammalian genomes. His group designed and built the program that assembled the first working draft of the human genome sequence from information produced by sequencing centers worldwide and participated in the informatics associated with the finishing effort.

Abstract: Extraordinary advances in DNA sequencing technologies are producing improvements at a growth rate faster than Moore's law. The amount of data is likely to outstrip our ability to share and process it with standard computing infrastructure soon. By comparing genomes of present-day species we can computationally reconstruct most of the genome of the common ancestor of placental mammals from 100 million years ago. We can then deduce the genetic changes on the evolutionary path from that ancient species to humans and discover how natural selection shaped us at the molecular level. About 5% of our genome has remained surprisingly unchanged across millions of years, suggesting important function. There are also short segments that have undergone unusually rapid change in humans, such as a gene linked to brain development. It will be many years before we fully understand the biology of such examples, but we relish the opportunity to peek at the molecular tinkering that transformed our animal ancestors into humans.
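To make the comparative idea concrete, here is a toy sketch (not the speaker's methods) that scores columns of a small multiple-species alignment by conservation and flags perfectly conserved positions, the same logic, at vastly smaller scale, that underlies detecting unusually constrained genomic segments. The alignment, species labels and threshold are invented for illustration.

```python
# Toy comparative-genomics illustration: score each alignment column by how
# conserved it is across species, then flag fully conserved columns as
# candidate constrained (functional) positions. Data are made up.
from collections import Counter

alignment = [
    "ACGTTGCA--ACGT",   # human (hypothetical)
    "ACGTTGCA--ACGA",   # mouse (hypothetical)
    "ACGTAGCA--ACGT",   # dog   (hypothetical)
]

def column_conservation(aln):
    """For each column, the fraction of species sharing the majority base."""
    scores = []
    for col in zip(*aln):
        bases = [b for b in col if b != "-"]
        if not bases:
            scores.append(0.0)
            continue
        _, count = Counter(bases).most_common(1)[0]
        scores.append(count / len(bases))
    return scores

scores = column_conservation(alignment)
conserved = [i for i, s in enumerate(scores) if s == 1.0]
print("perfectly conserved columns:", conserved)
```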

Session: Finite Elements and Your Body



Tuesday November 17


3:30PM - 5:00PM
Room: PB253-254-257-258




µ-Finite Element Analysis of Human Bone Structures
Peter Arbenz (ETH, Zürich)

Prof. Dr. Peter Arbenz has been a professor at the Institute of Computational Science (Institut für Computational Science) at ETH Zurich, the Swiss Federal Institute of Technology, since 2003. He has an M.Sc. in Mathematics and a PhD in Applied Mathematics from the University of Zurich. From 1983 to 1987 he was a software engineer with BBC Brown, Boveri & Cie (now ABB) in Switzerland, and in 1988 he joined ETH Zurich as a senior researcher. His research interests are in Numerical Linear Algebra, High Performance Computing, Parallel and Distributed Computing and Computational Science & Engineering. He is a co-author of the book "Introduction to Parallel Computing - A practical guide with examples in C" (Oxford University Press, 2004).

Abstract: The investigation of the mechanical properties of trabecular bone presents a major challenge due to its high porosity and complex architecture, both of which vary substantially between anatomic sites and across individuals. A promising technique that takes bone microarchitecture into account is microstructural finite element (µ-FE) analysis. Hundreds of millions of finite elements are needed to accurately represent a human bone with its intricate microarchitecture; hence, the resulting µ-FE models possess a very large number of degrees of freedom, sometimes more than a billion. Detailed µ-FE models are obtained through high-resolution micro-computed tomography of trabecular bone specimens, allowing nondestructive imaging of the trabecular microstructure with resolutions on the order of 80 micrometers in living patients. The discrete formulation is based on a standard voxel discretization for linear and nonlinear elasticity. We present highly adapted solvers that efficiently deal with the huge systems of equations on the Cray XT5 as well as the IBM Blue Gene.
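The heart of such a µ-FE computation is the iterative solution of an enormous sparse, symmetric positive-definite system. The sketch below shows only that core idea, a conjugate-gradient solve on a small tridiagonal stand-in for the assembled stiffness matrix; the production solvers described in the talk are matrix-free and multilevel-preconditioned on thousands of cores, which this toy does not attempt to reproduce.

```python
# A minimal sketch of the Krylov solve at the core of large voxel-based FE
# analyses: conjugate gradients on a sparse SPD system. The 1-D Laplacian
# below is only a stand-in for a real assembled stiffness matrix.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 1000
K = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # stand-in stiffness
f = np.ones(n)                                                     # stand-in load vector

u, info = cg(K, f)              # info == 0 means the solver met its tolerance
print("converged:", info == 0, "max displacement:", float(u.max()))
```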

Virtual Humans: Computer Models for Vehicle Crash Safety
Jesse Ruan (Ford Motor Company)

Dr. Jesse Ruan graduated from Wayne State University in 1994 with a Ph.D. in Biomedical Engineering. He has worked at Ford Motor Company for 19 years and published more than 60 scientific research papers in peer-reviewed journals and conference proceedings. He has been a speaker at numerous international conferences and forums on CAE simulation, impact biomechanics, and vehicular safety. He has completed the development of the Ford Full Human Body Finite Element Model. Recently, Dr. Ruan was elected a Fellow of the American Institute for Medical and Biological Engineering. Dr. Ruan is the founding Editor-in-Chief of the International Journal of Vehicle Safety. He is a member of the American Society of Mechanical Engineers (ASME) and the Society of Automotive Engineers (SAE). In addition to his employment with Ford Motor Company, Dr. Ruan is a Visiting Professor at Tianjin University of Science and Technology and South China University of Technology in China.

Abstract: With computational power growing exponentially, virtual human body models have become routine tools in scientific research and product development. The Ford Motor Company full human body Finite Element (FE) model is one of a few such virtual humans for vehicle crash safety research and injury analysis. This model simulates a 50th percentile male from head to toe, including skull, brain, a detailed skeleton, internal organs, soft tissues, flesh, and muscles. It has shown great promise in helping to understand vehicle impact injury problems and could help reduce dependence on cadaveric test studies. Human body finite element models can also be used to extrapolate results from experimental cadaver investigations to better understand injury mechanisms, validate injury tolerance levels, and establish injury criteria. Furthermore, it is currently believed that assessing the effectiveness of injury mitigation technologies (such as safety belts, airbags, and pretensioners) would be done more efficiently with finite element models of the human body.

Session: HPC in Modern Medicine



Wednesday November 18


1:30PM - 3:00PM
Room: PB253-254-257-258




Grid Technology Transforming Healthcare
Jonathan Silverstein (University of Chicago)

Jonathan C. Silverstein, associate director of the Computation Institute of the University of Chicago and Argonne National Laboratory, is associate professor of Surgery, Radiology, and The College, scientific director of the Chicago Biomedical Consortium, and president of the HealthGrid.US Alliance. He focuses on the integration of advanced computing and communication technologies into biomedicine, particularly applying Grid computing, and on the design, implementation, and evaluation of high-performance collaboration environments for anatomic education and surgery. He holds an M.D. from Washington University (St. Louis) and an M.S. from the Harvard School of Public Health. He is a Fellow of the American College of Surgeons and a Fellow of the American College of Medical Informatics. Dr. Silverstein provides leadership in information technology initiatives intended to transform operations at the University of Chicago Medical Center and is informatics director for the University of Chicago's Clinical and Translational Science Award (CTSA) program. He has served on various national advisory panels and currently serves on the Board of Scientific Counselors for the Lister Hill Center of the NIH National Library of Medicine.

Abstract: Healthcare in the United States is a complex adaptive system. Individual rewards and aspirations drive behavior as each stakeholder interacts, self-organizes, learns, reacts, and adapts to one another. Having such a system for health is not inherently bad if incentives are appropriately aligned. However, clinical care, public health, education and research practices have evolved into such a state that we are failing to systematically deliver measurable quality at acceptable cost. From a systems level perspective, integration, interoperability, and secured access to biomedical data on a national scale and its creative re-use to drive better understanding and decision-making are promising paths to transformation of healthcare from a broken system into a high performing one. This session will survey HealthGrid issues and projects across clinical care, public health, education and research with particular focus on transformative efforts enabled by high performance computing.

Patient-specific Finite Element Modeling of Blood Flow and Vessel Wall Dynamics
Charles Taylor (Stanford University)

Charles A. Taylor received his B.S. degree in Mechanical Engineering in 1987 from Rensselaer Polytechnic Institute. He has an M.S. in Mechanical Engineering (1991) and in Mathematics (1992) from RPI. His Ph.D. in Applied Mechanics from Stanford (1996) was for finite element modeling of blood flow, co-advised by Professors Thomas Hughes (Mechanical Engineering) and Christopher Zarins (Surgery). He is currently Associate Professor in the Stanford Departments of Bioengineering and Surgery with courtesy appointments in the Departments of Mechanical Engineering and Radiology. Internationally recognized for the development of computer modeling and imaging techniques for cardiovascular disease research, device design and surgery planning, his contributions include the first 3D simulations of blood flow in the abdominal aorta and the first simulations of blood flow in models created from medical images. He started the field of simulation-based medicine using CFD to predict outcomes of cardiovascular interventions and developed techniques to model blood flow in large, patient-specific models, with applications ranging from congenital heart malformations to hypertension and aneurysms. He received Young Investigator Awards in Computational Mechanics from the International Association for Computational Mechanics and from the U.S. Association for Computational Mechanics. He is a Fellow of the American Institute for Medical and Biological Engineering.

Abstract: Cardiovascular imaging methods, no matter how advanced, can provide data only about the present state and do not provide a means to predict the outcome of an intervention or evaluate alternate prospective therapies. We have developed a computational framework for computing blood flow in anatomically relevant vascular anatomies with unprecedented realism. This framework includes methods for (i) creating subject-specific models of the vascular system from medical imaging data, (ii) specifying boundary conditions to account for vasculature beyond the limits of imaging resolution, (iii) generating finite element meshes, (iv) assigning blood rheological and tissue mechanical properties, (v) simulating blood flow and vessel wall dynamics on parallel computers, and (vi) visualizing simulation results and extracting hemodynamic data. Such computational solutions offer an entirely new era in medicine whereby doctors utilize simulation-based methods, initialized with patient-specific data, to design improved treatments for individuals based on optimizing predicted outcomes. Supercomputers play an essential role in this process and new opportunities for high-performance computing in clinical medicine will be discussed.
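One way to picture step (ii), accounting for vasculature beyond imaging resolution, is with a reduced-order lumped outflow model. The sketch below integrates a toy two-element Windkessel (resistance and compliance) for a prescribed pulsatile flow; the parameter values and the forward-Euler time stepping are assumptions for illustration, not a description of the speaker's actual boundary-condition framework.

```python
# A hedged toy: a two-element Windkessel outlet model, C dP/dt = Q(t) - P/R,
# integrated with forward Euler for a prescribed pulsatile flow. Parameter
# values are invented for illustration (CGS units).
import math

R = 3800.0     # distal resistance (dyn*s/cm^5), assumed
C = 2.6e-4     # compliance (cm^5/dyn), assumed
dt, T = 1e-3, 2.0
P = 8.0e4      # initial pressure (dyn/cm^2), roughly 60 mmHg

def inflow(t):
    """Prescribed pulsatile outlet flow (cm^3/s), period 1 s."""
    return 80.0 * max(math.sin(2 * math.pi * t), 0.0) + 10.0

t = 0.0
while t < T:
    P += dt * (inflow(t) - P / R) / C   # Windkessel ODE
    t += dt

print(f"outlet pressure after {T:.0f} s: {P / 1333.22:.0f} mmHg")
```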






Session: Multi-Scale Simulations in Bioscience



Wednesday November 18


3:30PM - 5:00PM
Room: PB253-254-257-258




Big Science < Computing Opportunities: Molecular Theory, Models and Simulation
Teresa Head-Gordon (University of California, Berkeley)

Teresa Head-Gordon received a B.S. in chemistry from the Case Western Reserve Institute of Technology in 1983. After a year of full-time waitressing she decided to expand her employment opportunities by obtaining a Ph.D. in theoretical chemistry from Carnegie Mellon University in 1989, developing simulation methods for macromolecules. In 1990 she became a Postdoctoral Member of Technical Staff at AT&T Bell Laboratories in Murray Hill, NJ, working with chemical physicists (Frank Stillinger) and mathematical optimization experts (David Gay and Margaret Wright) on perturbation theories of liquids and protein structure prediction and folding. She joined Lawrence Berkeley National Laboratory as a staff scientist in 1992, and then became assistant professor in Bioengineering at UC Berkeley in 2001, rising to the rank of full professor in 2007. She has served as an Editorial Advisory Board Member for the Journal of Physical Chemistry B (2009-present), for the Journal of Computational Chemistry (2004-present), and for the SIAM book series on Computational Science and Engineering (2004-2009), and as a panel member of the U.S. National Academies Study on the Potential Impact of Advances in High-End Computing in Science and Engineering (2006-2008). She spent a most enjoyable year in the Chemistry department at Cambridge University as Schlumberger Professor in 2005-2006, interacting with scientists in the UK and on the European continent, biking 13 miles a day to and from the Theory Center on Lensfield Road, and sharpening her croquet skills as often as possible in her backyard in a local Cambridge village. She remains passionate about the centrality of "applied theory", i.e. simulation, in the advancement of all physical sciences discoveries and insights in biology, chemistry, and physics.

Abstract: Molecular simulation is now an accepted and integral part of contemporary chemistry and chemical biology. The allure of molecular simulation is that most if not all relevant structural, kinetic, and thermodynamic observables of a (bio)chemical system can be calculated at one time, in the context of a molecular model that can provide insight, predictions and hypotheses that can stimulate the formulation of new experiments. This talk will describe the exciting opportunities for "big" science questions in chemistry and biology that can be answered with an appropriate theoretical framework and with the aid of high-end capability computing.* (*The Potential Impact of High-End Capability Computing on Four Illustrative Fields of Science and Engineering, National Research Council, ISBN 978-0-309-12485-0, 2008.)
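For readers unfamiliar with what "molecular simulation" actually computes, the sketch below shows the basic engine in miniature: velocity-Verlet integration of Newton's equations for a potential, with total energy tracked as a simple observable. A one-dimensional harmonic "bond" in reduced units stands in for a real biomolecular force field; this is a pedagogical toy, not the methodology of the talk.

```python
# Minimal molecular-dynamics engine: velocity-Verlet integration of a 1-D
# harmonic "bond" (reduced units), tracking total energy as an observable.
import numpy as np

k, m = 1.0, 1.0                 # spring constant and mass (reduced units)
dt, steps = 0.01, 10000
x, v = 1.0, 0.0                 # initial position and velocity

def force(x):
    return -k * x               # F = -dU/dx for U = 0.5*k*x^2

energies = []
f = force(x)
for _ in range(steps):
    x += v * dt + 0.5 * (f / m) * dt**2        # position update
    f_new = force(x)
    v += 0.5 * (f + f_new) / m * dt            # velocity update
    f = f_new
    energies.append(0.5 * m * v**2 + 0.5 * k * x**2)

# Total energy should be nearly conserved by a good integrator.
print("energy drift:", float(np.ptp(energies)))
```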

Fighting Swine Flu Through Computational Medicine
Klaus Schulten (University of Illinois at Urbana-Champaign)

Klaus Schulten received his Ph.D. from Harvard University in 1974. He is Swanlund Professor of Physics and is also affiliated with the Department of Chemistry as well as with the Center for Biophysics and Computational Biology. Professor Schulten is a full-time faculty member in the Beckman Institute and directs the Theoretical and Computational Biophysics Group. His professional interests are theoretical physics and theoretical biology. His current research focuses on the structure and function of supramolecular systems in the living cell, and on the development of non-equilibrium statistical mechanical descriptions and efficient computing tools for structural biology. Honors and awards: Award in Computational Biology 2008; Humboldt Award of the German Humboldt Foundation (2004); University of Illinois Scholar (1996); Fellow of the American Physical Society (1993); Nernst Prize of the Physical Chemistry Society of Germany (1981).

Abstract: The swine flu virus, spreading more and more rapidly, threatens to also become resistant against present forms of treatment. This lecture illustrates how a look through the "computational microscope" (CM) contributes to shaping a pharmacological strategy against a drug resistant swine flu. The CM is based on simulation software running efficiently on many thousands of processors, analyzes terabytes of data made available through GPU acceleration, and is adopted in the form of robust software by thousands of biomedical researchers worldwide. The swine flu case shows the CM at work: what type of computing demands arise, how software is actually deployed, and what insight emerges from atomic resolution CM images. In viewing, in chemical detail, the recent evolution of the swine flu virus against the binding of modern drugs, computing technology responds to a global 911 call.
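A small taste of the kind of trajectory analysis a computational microscope performs at scale (illustrative only): per-frame RMSD of a binding site, the sort of quantity used to judge whether a mutation loosens drug binding. The coordinates below are random stand-ins and the frames are assumed pre-aligned; real analyses run on terabyte trajectories with tools such as VMD.

```python
# Toy trajectory analysis: RMSD of a binding site across frames, assuming the
# frames are already aligned to the reference. Coordinates are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(size=(50, 3))                      # 50 binding-site atoms
frames = reference + 0.1 * rng.normal(size=(200, 50, 3))  # 200 trajectory frames

def rmsd(frame, ref):
    return float(np.sqrt(np.mean(np.sum((frame - ref) ** 2, axis=-1))))

per_frame = [rmsd(f, reference) for f in frames]
print(f"mean RMSD: {np.mean(per_frame):.3f} (reduced units)")
```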

Session: Towards Climate Modeling in the ExaFlop Era



Thursday November 19


10:30AM - 12:00PM
Room: PB252 (NOTE ROOM CHANGE)




Towards Climate Modeling in the ExaFlop Era
David Randall (Colorado State University)

David Randall is Professor of Atmospheric Science at Colorado State University. He received his Ph.D. in Atmospheric Sciences from UCLA in 1976. He has been developing global atmospheric models since 1972. He is currently the Director of a National Science Foundation Science and Technology Center on Multiscale Modeling of Atmospheric Processes, and also the Principal Investigator on a SciDAC project with the U.S. Department of Energy. He has been the Chief Editor of two different journals that deal with climate-modeling. He was a Coordinating Lead Author for the Intergovernmental Panel on Climate Change. He has Chaired several science teams. He received NASA's Medal for Distinguished Public Service, NASA's Medal for Exceptional Scientific Achievement, and the Meisinger Award of the American Meteorological Society. He is a Fellow of the American Geophysical Union, the American Association for the Advancement of Science, and the American Meteorological Society.

Abstract: Since its beginnings in the 1960s, climate modeling has always made use of the fastest machines available. The equations of fluid motion are solved from the global scale, 40,000 km, down to a truncation scale that has been on the order of a few hundred kilometers. The effects of smaller-scale processes have been "parameterized" using less-than-rigorous statistical theories. Recently, two radically new types of models have been demonstrated, both of which replace major elements of these parameterizations with methods based more closely on the basic physics. These new models have been made possible by advances in computing power. The current status of these models will be outlined, and a path to exaflop climate modeling will be sketched. Some of the physical and computational issues will be briefly summarized.
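Rough arithmetic (mine, not the speaker's) helps explain why replacing parameterizations with explicitly resolved physics points toward exaflop machines: refining the horizontal grid multiplies the number of columns and, through the CFL condition, shrinks the usable time step, so cost grows roughly with the inverse cube of the grid spacing.

```python
# Back-of-envelope scaling (assumptions, not from the talk): halving the grid
# spacing quadruples the number of columns and roughly halves the time step,
# so cost grows about eightfold per halving (cost ~ dx**-3).
EARTH_SURFACE_KM2 = 5.1e8

def relative_cost(dx_km, reference_dx_km=100.0):
    columns = EARTH_SURFACE_KM2 / dx_km**2
    ref_columns = EARTH_SURFACE_KM2 / reference_dx_km**2
    return (columns / ref_columns) * (reference_dx_km / dx_km)

for dx in (100, 25, 4, 1):
    print(f"{dx:>4} km grid: ~{relative_cost(dx):.0e} x the 100 km cost")
```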

Green Flash: Exascale Computing for Ultra-High Resolution Climate Modeling
Michael Wehner (Lawrence Berkeley National Laboratory)

Michael Wehner is a staff scientist in the Scientific Computing Group in the Computational Research Division at Lawrence Berkeley National Laboratory. His current research focuses on a variety of aspects in the study of climate change. In particular, Wehner is interested in the behavior of extreme weather events in a changing climate, especially heat waves and tropical cyclones. He is also interested in detection and attribution of climate change and the improvement of climate models through better use of high performance computing. Before joining LBNL, Wehner worked as a physicist at Lawrence Livermore National Laboratory, where he was a member of the Program for Climate Modeling and Intercomparison, the Climate System Modeling Group and B Division. He is the author or co-author of 78 papers. Wehner earned his Master's degree and Ph.D. in nuclear engineering from the University of Wisconsin-Madison, and his Bachelor's degree in Physics from the University of Delaware.

Abstract: Since the first numerical weather experiments in the late 1940s by John von Neumann, machine limitations have dictated to scientists how nature could be studied by computer simulation. As the processors used in traditional parallel computers have become ever more complex, electrical power demands are approaching hard limits dictated by cooling and power considerations. It is clear that the growth in scientific computing capabilities of the last few decades is not sustainable unless fundamentally new ideas are brought to bear. In this talk we discuss a radically new approach to high-performance computing (HPC) design via application-driven hardware and software co-design, which leverages design principles from the consumer electronics marketplace. The methodology we propose has the potential to significantly improve energy efficiency, reduce cost, and accelerate the development cycle of exascale systems for targeted applications. Our approach is motivated by a desire to simulate the Earth's atmosphere at kilometer scales.
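To see why the power wall forces a rethink, consider this simple arithmetic; the 20 MW facility budget and the node figures below are commonly cited assumptions of the period, not numbers from the talk.

```python
# Illustrative power-wall arithmetic (assumed figures, not from the talk).
target_flops = 1e18          # 1 exaflop/s sustained
power_budget_watts = 20e6    # an often-assumed facility limit (~20 MW)

required = target_flops / power_budget_watts
print(f"required efficiency: {required / 1e9:.0f} GFLOPS/W")

# For comparison: a hypothetical 2009-era node doing 100 GFLOPS at 250 W.
node = 100e9 / 250
print(f"assumed 2009-era node: {node / 1e9:.1f} GFLOPS/W "
      f"({required / node:.0f}x short)")
```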

Session: High Performance at Massive Scale



Thursday November 19


1:30PM - 3:00PM
Room: PB252 (NOTE ROOM CHANGE)




Warehouse-Scale Computers
Urs Hoelzle (Google)

Urs Hölzle served as Google's first vice president of engineering and led the development of Google's technical infrastructure. His current responsibilities include the design and operation of the servers, networks, and datacenters that power Google.

Abstract: The popularity of Internet-based applications is bringing increased attention to server-side computing, and in particular to the design of the large-scale computing systems that support successful service-based products. In this talk I will cover the issues involved in designing and programming this emerging class of very large (warehouse-scale) computing systems, including physical scale, software and hardware architecture, and energy efficiency.

High Performance at Massive Scale: Lessons Learned at Facebook
Robert Johnson (Facebook)

Robert Johnson is Director of Engineering at Facebook, where he leads the software development efforts to cost-effectively scale Facebook's infrastructure and optimize performance for its many millions of users. During his time with the company, the number of users has expanded by more than thirty-fold and Facebook now handles billions of page views a day. Robert was previously at ActiveVideo Networks, where he led the distributed systems and set-top software development teams. He has worked in a wide variety of engineering roles, from robotics to embedded systems to web software. He received a B.S. in Engineering and Applied Science from Caltech.

Abstract: Data at Facebook grows at an incredible rate, with more than 500,000 people registering daily and 200,000,000 existing users increasingly adding new information. The rate of reads against the data is also growing, with more than 50,000,000 low-latency random accesses a second. Building a system to handle this is quite challenging, and I'll be sharing some of our lessons learned. One critical component of the system is the memcached cluster. I'll discuss ways we've maintained performance and reliability as the cluster has grown and techniques to accommodate massive amounts of network traffic. Some techniques are at the application layer, such as data clustering and access pattern analysis. Others are at the system level, such as dynamic network traffic throttling and modifications to the kernel's network stack. I'll also discuss PHP interpreter optimizations to reduce initialization cost with a large codebase and lessons learned scaling MySQL to tens of terabytes. I'll include measured data and discuss the impact of various optimizations on scaling and performance.
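The memcached tier mentioned above serves the classic cache-aside read path: check the cache, fall back to the database on a miss, then populate the cache. The sketch below shows that generic pattern only, with a dict standing in for memcached and a stub function for MySQL; it is not Facebook's implementation.

```python
# Generic cache-aside read path: try the cache, fall back to the database on
# a miss, then populate the cache with a TTL. A dict stands in for memcached
# and a stub function for MySQL.
import time

cache = {}                      # stand-in for a memcached cluster
CACHE_TTL = 60.0                # seconds

def db_lookup(user_id):
    """Pretend database query (the expensive path we want to avoid)."""
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    hit = cache.get(key)
    if hit and hit[0] > time.time():          # entry exists and has not expired
        return hit[1]
    value = db_lookup(user_id)                # cache miss: go to the database
    cache[key] = (time.time() + CACHE_TTL, value)
    return value

print(get_user(42))     # miss -> database
print(get_user(42))     # hit  -> cache
```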

Session: Scalable Algorithms and Applications



Thursday November 19


3:30PM - 5:00PM
Room: PB252 (NOTE ROOM CHANGE)




Scalable Parallel Solvers in Computational Electrocardiology
Luca Pavarino (Università di Milano)

Luca F. Pavarino is a professor of Numerical Analysis at the University of Milano, Italy. He graduated from the Courant Institute of Mathematical Sciences, New York University, USA, in 1992. After spending two postdoctoral years at the Department of Computational and Applied Mathematics of Rice University, Houston, USA, he became assistant professor at the University of Pavia, Italy, in 1994 and then professor at the University of Milano in 1998. His research activity has focused on domain decomposition methods for elliptic and parabolic partial differential equations discretized with finite or spectral elements, in particular on their construction, analysis and parallel implementation on distributed-memory parallel computers. He has applied these parallel numerical methods to problems in computational fluid dynamics, structural mechanics, and computational electrocardiology.

Abstract: Research on the electrophysiology of the heart has progressed greatly in the last decades, producing a vast body of knowledge ranging from microscopic descriptions of cellular membrane ion channels to macroscopic anisotropic propagation of excitation and repolarization fronts in the whole heart. Multiscale models have proven very useful in the study of these complex phenomena, progressively including more detailed features of each component into models that couple parabolic systems of nonlinear reaction-diffusion equations with stiff systems of several ordinary differential equations. Numerical integration is, therefore, a challenging large-scale computation, requiring parallel solvers and distributed architectures. We review some recent advances in the numerical parallel solution of these cardiac reaction-diffusion models, focusing in particular on the scalability of domain decomposition iterative solvers that belong to the family of multilevel additive Schwarz preconditioners. Numerical results obtained with the PETSc library on Linux clusters confirm the scalability and optimality of the proposed solvers for large-scale simulations of a complete cardiac cycle.
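To illustrate the "additive Schwarz" idea in the abstract, the sketch below wraps a one-level, non-overlapping variant (block Jacobi) around conjugate gradients for a small stand-in system. The overlapping, multilevel preconditioners and PETSc-based solvers discussed in the talk are far more sophisticated; this only shows the "sum of local subdomain solves" structure.

```python
# Toy one-level additive Schwarz preconditioner (non-overlapping blocks, i.e.
# block Jacobi) used inside conjugate gradients. The tridiagonal matrix is a
# stand-in for a real reaction-diffusion discretization.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator, splu

n, nblocks = 400, 4
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Factor each diagonal block once; applying the preconditioner sums the
# local subdomain solves.
size = n // nblocks
blocks = [splu(A[i*size:(i+1)*size, i*size:(i+1)*size].tocsc())
          for i in range(nblocks)]

def apply_prec(r):
    z = np.zeros_like(r)
    for i, lu in enumerate(blocks):
        sl = slice(i * size, (i + 1) * size)
        z[sl] = lu.solve(r[sl])
    return z

M = LinearOperator((n, n), matvec=apply_prec)
x, info = cg(A, b, M=M)
print("converged:", info == 0)
```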

Simulation and Animation of Complex Flows on 10,000 Processor Cores
Ulrich Rüde (University of Erlangen-Nuremberg)

Ulrich Ruede studied Mathematics and Computer Science at Technische Universitaet Muenchen (TUM) and The Florida State University. He holds Ph.D. and Habilitation degrees from TUM. After graduation, he spent a year as a postdoc at the University of Colorado working on parallel, adaptive finite element multigrid methods. He joined the Department of Mathematics at the University of Augsburg in 1996, and since 1998 he has been heading the Chair for Simulation at the University of Erlangen-Nuremberg. His research interests are in computational science and engineering, including mathematical modeling, numerical analysis, multigrid methods, architecture-aware algorithms, visualization, and highly scalable methods for complex simulation tasks. He received the ISC Award 2006 for solving the largest finite element system and has been named a Fellow of the Society for Industrial and Applied Mathematics. He currently also serves as Editor-in-Chief of the SIAM Journal on Scientific Computing.

Abstract: The Lattice Boltzmann Method (LBM) is based on a discretization of the Boltzmann equation and results in a cellular automaton that is relatively easy to extend and is well suited for parallelization. In the past few years, the LBM has been established as an alternative method in computational fluid dynamics. Here, we will discuss extensions of the LBM to compute flows in complex geometries such as blood vessels or porous media, fluid-structure interaction problems with moving obstacles, particle-laden flows, and flows with free surfaces. We will outline the implementation in the waLBerla software framework and will present speedup results for current parallel hardware. This includes heterogeneous multicore CPUs, such as the IBM Cell processor, and the implementation on massively parallel systems with thousands of processor cores.
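For orientation, a compact single-core sketch of the D2Q9 lattice Boltzmann scheme with BGK relaxation follows: collide locally, then stream to neighbors on a periodic grid. The initial shear-wave setup and parameters are arbitrary; waLBerla's handling of complex geometries, free surfaces and massive parallelism goes far beyond this toy.

```python
# Minimal D2Q9 lattice Boltzmann kernel with BGK relaxation on a periodic
# grid: compute macroscopic fields, collide locally, stream to neighbors.
import numpy as np

nx, ny, tau, steps = 64, 32, 0.6, 200
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])  # D2Q9 velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                                      # D2Q9 weights

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)[None, :] * np.ones((nx, ny))  # decaying shear wave
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for _ in range(steps):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
    for i, (cx, cy) in enumerate(c):                   # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)

print("peak velocity after viscous decay:", float(np.abs(ux).max()))
```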
