November 10, 2008
Sunday and Monday during the conference feature a wide range of informative tutorials and thought-provoking workshops. These days before the conference begins in earnest can be a good time to settle in and make the transition from everyday work into a frame of mind where you can do something that is an increasingly rare activity: think strategically about how supercomputing and HPC fit into your business.
We've selected two tutorials on Sunday that deal with fundamental concepts in HPC.
If you are relatively new to HPC, or have come from the business side of a supercomputing center to a position of broader responsibility, S01: Parallel Computing 101 is the tutorial for you. One tutorial won't make you a ninja parallel coder, but S01 will provide the background you need to participate more meaningfully in conversations with your staff and colleagues. With 75 percent introductory and 25 percent intermediate material, the tutorial is aimed at students, managers and new practitioners who need a broad view of the techniques and technologies central to parallel computing from a user's perspective.
Those with some time in HPC are no doubt thinking... and worrying... a lot these days about the transition to multicore. If you're in that boat, S02: Application Supercomputing and the Many-Core Paradigm Shift is a tutorial you'll want to make plans to attend. At 50 percent introductory, 25 percent intermediate, and 25 percent advanced material, this tutorial isn't for every executive, but those with at least some technical background will benefit from the discussion of current and upcoming architectures, terminology, parallel languages, and development tools.
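For a concrete taste of the shift these tutorials address, here is a toy Python sketch contrasting a serial computation with the same computation spread across cores. The workload (summing squares) is a hypothetical stand-in, and real HPC codes would reach for MPI, OpenMP, or the like rather than a process pool, but the structural change to the code is the same.

    # Toy illustration of the serial-to-multicore shift; the workload is
    # hypothetical and chosen only to keep the example self-contained.
    from multiprocessing import Pool

    def square(x: int) -> int:
        return x * x

    if __name__ == "__main__":
        data = range(1_000_000)

        # Serial version: one core does all the work.
        serial_total = sum(square(x) for x in data)

        # Multicore version: the same map is spread across worker processes.
        with Pool() as pool:
            parallel_total = sum(pool.map(square, data, chunksize=10_000))

        assert serial_total == parallel_total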
We've selected four workshops that tie in with this year's major themes.
On Sunday is the all-day workshop Power Efficiency and the Path to Exascale Computing. This workshop deals with two of our key themes, Computational Infrastructure and Computing at Scale, and will bring you up to speed on current community thinking about building the facilities needed as technology moves from teraflops to petaflops and on toward exaflops.
Tying in closely with large-scale computation is its product: large data. The Petascale Data Storage Workshop on Monday will highlight new contributions in storage architecture, APIs, parallel file systems, and more for supporting the large amounts of data generated in today's supercomputing centers.
The half-day Workshop on Many-task Computing on Grids and Supercomputers focuses on the management and execution of large-scale jobs. Organizers are building discussion around a new class of applications, which they call Many-Task Computing, characterized by computations involving multiple, distinct activities coupled by file systems or message passing.
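To make the idea concrete, here is a minimal Python sketch of what file-coupled many-task execution can look like. The task count, file names, and use of a local process pool are illustrative assumptions, not anything prescribed by the workshop; on a real grid or supercomputer a batch scheduler would play the pool's role.

    # Minimal sketch of many-task computing: many distinct activities,
    # coupled loosely through files rather than tight message passing.
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    def run_task(task_id: int) -> Path:
        """One of many independent tasks; its output file couples it to later steps."""
        out = Path(f"task_{task_id}.out")  # hypothetical naming scheme
        out.write_text(f"result of task {task_id}\n")
        return out

    if __name__ == "__main__":
        # Launch many short tasks concurrently; a process pool stands in
        # for the scheduler on a grid or supercomputer.
        with ProcessPoolExecutor() as pool:
            outputs = list(pool.map(run_task, range(100)))
        # A downstream step is coupled to the tasks only through their files.
        Path("combined.out").write_text("".join(p.read_text() for p in outputs))

The defining trait is the loose coupling: each task is a self-contained activity, and the file system (or message passing) is the only glue between them.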
If you are new to the idea of workflows, or find yourself thinking about how to improve the productivity of your user community, you will be interested in attending the 3rd Workshop on Workflows in Support of Large-scale Science (WORKS08). This workshop will get you tuned up and ready to attend the workflow-related sessions later in the week if you plan to explore the Expanded Access theme.
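If a concrete picture helps, a workflow is at heart a set of tasks with dependencies, executed in an order that respects them. The sketch below, with hypothetical task names, uses Python's standard graphlib module to express that core idea; real workflow systems layer scheduling, data movement, and fault tolerance on top of it.

    # A scientific workflow reduced to its essence: a dependency graph
    # whose tasks run in an order that honors every edge. Task names are
    # hypothetical.
    from graphlib import TopologicalSorter

    # Map each task to the set of tasks it depends on.
    workflow = {
        "fetch_data": set(),
        "preprocess": {"fetch_data"},
        "simulate": {"preprocess"},
        "visualize": {"simulate"},
        "archive": {"simulate"},
    }

    def run(task: str) -> None:
        print(f"running {task}")  # stand-in for real work

    for task in TopologicalSorter(workflow).static_order():
        run(task)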
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 22, 2013
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today. Recent advances with Raspberry Pi computers could make that possible.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 15, 2013
Supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) have worked on important computational problems such as the collapse of the atomic state, the optimization of chemical catalysts, and now the modeling of popping bubbles.
May 10, 2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so using technologies that deliver affordability, security, and scalability.
April 15, 2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this white paper by the analysts at Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software can reduce costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.