November 11, 2008
Computing at Scale
Wednesday morning starts off with a look at one of the most talked-about visions of the HPC future to emerge in recent years: Parallel Computing Landscape: A View from Berkeley. David Patterson, one of the report's principal authors, presents the Berkeley view in this invited talk.
The Programming Models papers session examines the role of MPI in application development time, an adaptive cut-off for task-parallel frameworks, and the software environment of the Intel 80-core terascale processor.
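The cut-off idea is simple to sketch. The following is purely an illustration, not the paper's actual algorithm: a recursive OpenMP-task computation that stops spawning tasks once the subproblem falls below a fixed grain size (an adaptive scheme would tune that threshold at runtime rather than hard-coding it).

/* Illustrative task-parallelism cut-off (not the session paper's
 * algorithm): below a grain-size threshold, recursion runs serially
 * instead of spawning further tasks.
 * Compile with: gcc -fopenmp cutoff.c -o cutoff
 */
#include <stdio.h>
#include <omp.h>

#define CUTOFF 20  /* hypothetical fixed grain size */

static long fib(int n)
{
    long a, b;
    if (n < 2)
        return n;
    if (n < CUTOFF)                /* below the cut-off: plain serial recursion */
        return fib(n - 1) + fib(n - 2);
    #pragma omp task shared(a)     /* above the cut-off: spawn parallel tasks */
    a = fib(n - 1);
    #pragma omp task shared(b)
    b = fib(n - 2);
    #pragma omp taskwait           /* join both children before combining */
    return a + b;
}

int main(void)
{
    long result;
    #pragma omp parallel
    #pragma omp single             /* one thread seeds the task tree */
    result = fib(35);
    printf("fib(35) = %ld\n", result);
    return 0;
}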
Finally, the Doctoral Research Showcase highlights the work of Chao Wang in developing a mechanism for process-level fault tolerance for job healing in HPC, a topic relevant not only to tomorrow's very large scale computers, but also to the creation of a more robust computational support infrastructure.
Wednesday's activities related to the computational infrastructure theme start off with a panel discussion. The provocatively named Will Electric Utilities Give Away Supercomputers with the Purchase of a Power Contract? panel explores the crunch datacenter budgets are feeling right now as falling computer prices push their power distribution systems to their limits.
Chao Wang's paper on process-level fault tolerance, discussed above, also points to key new technologies of interest for future computational infrastructure developments.
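To make the fault-tolerance theme concrete, here is a minimal application-level checkpoint/restart loop. It illustrates only the general job-healing idea, not Wang's process-level mechanism (which operates beneath the application), and the file name and interval are invented for the example.

/* Application-level checkpoint/restart in miniature: state is written
 * to disk periodically, and after a crash the job resumes from the
 * last checkpoint instead of restarting from zero.
 */
#include <stdio.h>

#define CHECKPOINT_FILE  "state.ckpt"  /* hypothetical file name */
#define TOTAL_STEPS      1000000
#define CHECKPOINT_EVERY 10000

int main(void)
{
    long step = 0;
    double state = 0.0;

    /* On startup, resume from a checkpoint if one exists. */
    FILE *f = fopen(CHECKPOINT_FILE, "rb");
    if (f) {
        fread(&step, sizeof step, 1, f);
        fread(&state, sizeof state, 1, f);
        fclose(f);
        printf("resuming from step %ld\n", step);
    }

    for (; step < TOTAL_STEPS; step++) {
        state += 1.0 / (step + 1);            /* stand-in for real work */

        if (step % CHECKPOINT_EVERY == 0) {   /* periodic checkpoint */
            f = fopen(CHECKPOINT_FILE, "wb");
            fwrite(&step, sizeof step, 1, f);
            fwrite(&state, sizeof state, 1, f);
            fclose(f);
        }
    }
    printf("done: state = %f\n", state);
    return 0;
}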
Wednesday's application horizons-themed activities focus on opportunities for HPC in medicine and biology. First up is invited speaker Kenneth H. Buetow from the National Cancer Institute with a discussion of the issues involved in developing a framework to enable personalized medicine. A framework in this application area will involve bringing together many different communities and standards of practice, topics closely related to the expanded access theme as well.
Later in the day, SC08 presents two Masterworks sessions related to HPC and the biosciences. HPC in the "Personalization" of Cancer Therapy: Genomics, Proteomics, and Bioinformatics examines the technologies needed to tailor therapy to the molecular profile of an individual patient's disease. Computational Opportunities in Genomic Medicine describes some of the challenging computational problems in basic biology and medicine, and outlines the software infrastructure that is needed to support this highly interdisciplinary field.
May 16, 2013 | When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types, using both CPU and GPU cores.
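For readers unfamiliar with how such latencies are measured, a standard MPI ping-pong microbenchmark (a generic sketch, not the Bonn group's code) looks like this: rank 0 sends a small message to rank 1 and waits for the echo, and half the round-trip time approximates the one-way latency.

/* MPI ping-pong latency microbenchmark.
 * Build and run: mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong
 */
#include <stdio.h>
#include <mpi.h>

#define REPS 1000

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)                            /* half the round trip = one-way latency */
        printf("one-way latency: %.2f us\n", (t1 - t0) / (2.0 * REPS) * 1e6);

    MPI_Finalize();
    return 0;
}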
May 15, 2013 | Supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) have worked on important computational problems such as the collapse of the atomic state and the optimization of chemical catalysts, and are now being used to model popping bubbles.
May 10, 2013 | The program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
May 09, 2013 | The Japanese government has revealed its plans to best its previous K Computer efforts with what it hopes will be the first exascale system...
May 08, 2013 | For engineers looking to leverage high-performance computing, the accessibility of a cloud-based approach is a powerful draw, but there are costs that may not be readily apparent.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this White Paper by Analysts Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.