November 24, 2010
Conference sets tech program attendee record
NEW ORLEANS, Nov. 24 -- SC10, the international conference for high performance computing, networking, storage and analysis, concluded Friday, Nov. 19, following the recognition of achievements by members of the supercomputing community.
Among the honors presented during the conference were the IEEE Computer Society Seymour Cray Award, the IEEE Computer Society Sidney Fernbach Memorial Award, the ACM/IEEE Computer Society Ken Kennedy Award, ACM Gordon Bell Prizes, ACM/IEEE Computer Society George Michael Memorial HPC Ph.D. Fellowship Award, several competitive challenges, best paper and best poster awards.
Held at the Ernest N. Morial Convention Center in New Orleans, this year's conference set an all-time record for participation in the technical program with 4,390 attendees. Overall attendance broke the 10,000 mark. In addition, 338 exhibitors filled the conference's 386,000 square feet of exhibit hall space in the convention center. The successful event marked the 23rd anniversary of the SC conference series.
"At a time of momentous change in supercomputing, the conference was once again the focal point of the global high performance computing community," said Barry Hess, SC10 general chair and deputy CIO for Sandia National Laboratories. "The volunteers who make SC possible put together a conference, from the record-setting technical program and special sessions to the floor exhibits, that was second to none in HPC. My special thanks to all those who contributed to SC10 and my congratulations to those honored for their outstanding achievements and contributions to the community."
The following individuals and organizations were recognized with awards:
James W. Demmel, a professor of mathematics and computer science at the University of California, Berkeley, and a researcher at Lawrence Berkeley National Laboratory, received the IEEE Computer Society's 2010 Sidney Fernbach Award for "computational science leadership in creating adaptive, innovative, high-performance linear algebra software." Read more at http://www.lbl.gov/cs/Archive/news093010b.html.
Alan Gara, chief system architect for the three generations of Blue Gene supercomputers, was awarded the IEEE Computer Society's 2010 Seymour Cray Award for his "innovations in low power, densely packaged supercomputing systems." Read more at http://www.computer.org/portal/web/pressroom/20101004cray.
David Kuck, an Intel Fellow, received the second annual ACM/IEEE Computer Society Ken Kennedy Award for advances to compiler technology and parallel computing that have improved the cost-effectiveness of multiprocessor computing. The Kennedy Award also cited him for the widespread inspiration of his teaching and mentoring. Read more at http://www.acm.org/press-room/news-releases/2010/kennedy-award-2010/view.
ACM/IEEE George Michael Memorial PhD Fellowship Award winners:
The ACM, IEEE Computer Society and SC Conference series established the George Michael HPC Ph.D. Fellowship Program to honor exceptional Ph.D. students throughout the world. Fellowship recipients are selected based on their overall potential for research excellence, the degree to which their technical interests align with those of the HPC community, their academic progress to date and demonstration of their anticipated use of HPC resources.
Gordon Bell Prize winners:
The Gordon Bell Prize is awarded each year to recognize outstanding achievement in HPC. The prize, now administered by the Association for Computing Machinery (ACM), carries a $10,000 award financially supported by Gordon Bell, a pioneer in high performance and parallel computing. The purpose of the award is to track the progress of parallel computing over time, with particular emphasis on rewarding innovation in applying HPC to applications in science.
Best performance: "Petascale Direct Numerical Simulation of Blood Flow on 200K Cores and Heterogeneous Architectures," Abtin Rahimian, Ilya Lashuk, Shravan Veerapaneni, Aparna Chandramowlishwaran, Dhairya Malhotra, Logan Moon, Rahul Sampath, Aashay Shringarpure, Jeffrey Vetter, Richard Vuduc, Denis Zorin, George Biros.
Honorable mention performance: "Toward First Principles Electronic Structure Simulations of Excited States and Strong Correlations in Nano- and Materials Science," Anton Kozhevnikov, Adolfo G. Eguiluz, Thomas C. Schulthess. A second honorable mention went to "190 TFlops Astrophysical N-body Simulation on a Cluster of GPUs," Tsuyoshi Hamada, Keigo Nitadori.
Best Technical Paper:
Best Student Paper:
Best Research Poster:
Best Student Posters:
First place: "Scale and Concurrency of GIGA+: File System Directories with Millions of Files" by Swapnil Patil, CMU.
Second place: "Optimizing End-to-End Performance of Scientific Workflows in Distributed Environments" by Yi Gu, University of Memphis.
Third place: "An Efficient Algorithm for Obtaining Low Memory Approximation Models of Large-Scale Networks" by Kanimathi Duraisamy, University of Nebraska – Omaha.
First place: "Parallelized Hartree-Fock Code for Scalable Structural and Electronic Simulation of Large Nanoscale Molecules" by David C. Goode, Harvard University.
Second place: "An Integration of Dynamic MPI Formal Verification Within Eclipse PTP" by Alan P. Humphrey, University of Utah.
Third place: "Finding Tropical Cyclones on Clouds" by Daren J Hasenkamp, Lawrence Berkeley National Lab.
Storage Challenge
The Storage Challenge is a competition showcasing applications and environments that effectively use the storage subsystem in high performance computing, which is often a limiting system component. Judging is based on performance measurements as well as innovation and effectiveness.
2010 Winner: "Scaling Highly-Parallel Data-Intensive Supercomputing Applications on a Parallel Clustered File system" Karan Gupta, Reshu Jain, Himabindu Pucha, Prasenjit Sarkar, Dinesh Subhraveti, IBM Almaden Research Center.
Student Cluster Competition (SCC)
The Student Cluster Competition (SCC) showcases next-generation high-performance computing talent harnessing the power of current-generation cluster hardware. In this real-time challenge, teams of six undergraduate and/or high school students assemble a small cluster of their own design on the SC exhibit floor and race to correctly complete the greatest number of application runs during the competition period. The catch: teams must run real HPC workloads on the same power needed to run only three coffee makers -- 26 amps! During the competition, teams were judged on the speed of the HPCC benchmarks, the throughput and accuracy of application runs, and their ability to impress SC participants and judges during the conference.
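As a rough sanity check on that power cap (assuming a standard 120 V circuit, which the article does not specify), the 26-amp limit works out to roughly three kilowatts, or about one typical ~1,000 W drip coffee maker per kilowatt:

```python
# Back-of-the-envelope arithmetic for the SCC power budget.
# The 120 V line voltage is an assumption; the article states only the amp limit.
AMP_LIMIT = 26      # competition power cap in amps
VOLTAGE = 120       # assumed line voltage in volts

watts_available = AMP_LIMIT * VOLTAGE        # total wattage under the cap
per_coffee_maker = watts_available / 3       # the "three coffee makers" comparison

print(f"Total budget: {watts_available} W "
      f"(~{per_coffee_maker:.0f} W per 'coffee maker')")
```

Under these assumptions the cap is 3,120 W, consistent with the three-coffee-maker comparison, since a typical drip brewer draws on the order of 1,000 W.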
Out of a field of eight teams from around the world, the Overall Winner of the 4th SCC is National Tsing Hua University from Taiwan, which partnered with ACER Incorporated, Tatung Company, and NCHC. NTHU won with the highest aggregate points in the HPCC benchmark, throughput and correctness of real-world applications, and interviews.
The winner of the SCC Highest LINPACK award was the University of Texas at Austin, partnered with Dell and the Texas Advanced Computing Center, exceeding 1 teraflop/s for the first time ever in the Student Cluster Competition while staying below the 26-amp power budget.
SC10, sponsored by the ACM (Association for Computing Machinery), the IEEE Computer Society Technical Committee on Scalable Computing, and the IEEE Computer Society Technical Committee on Computer Architecture, showcased how high performance computing, networking, storage and analysis lead to advances in research, education and commerce. This premier international conference included technical and education programs, workshops, tutorials, an exhibit area, demonstrations and hands-on learning. For more information, visit http://sc10.supercomputing.org/.