November 10, 2008
Sunday and Monday during the conference feature a wide range of informative tutorials and thought-provoking workshops. These days before the conference begins in earnest can be a good time to settle in and make the transition from everyday work into a frame of mind where you can do something that is an increasingly rare activity: think strategically about how supercomputing and HPC fit into your business.
We've selected two tutorials on Sunday that deal with fundamental concepts in HPC.
If you are relatively new to HPC, or have come from the business side of a supercomputing center to a position of broader responsibility, S01: Parallel Computing 101 is the tutorial for you. One tutorial isn't enough time to become a ninja parallel coder, but S01 will provide the background you need to participate more meaningfully in conversations with your staff and colleagues. With 75 percent introductory material and 25 percent intermediate, the tutorial is aimed at students, managers and new practitioners who need a broad view of the techniques and technologies central to parallel computing from a user's perspective.
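The user's-perspective view that S01 takes boils down to a recurring shape: split the work across workers, compute partial results in parallel, then combine them. As a flavor of that idea, here is a minimal, generic Python sketch (not material from the tutorial itself):

```python
# Data parallelism in miniature: split the input across worker
# processes, compute partial sums in parallel, then combine.
# (Generic illustration; not taken from the S01 tutorial.)
from multiprocessing import Pool

def partial_sum(chunk):
    """Worker task: sum the squares of one chunk of the input."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # One strided chunk per worker, then a parallel map.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)   # scatter / compute
    return sum(partials)                           # gather / reduce

if __name__ == "__main__":
    total = parallel_sum_of_squares(list(range(1000)))
    print(total)  # same answer as the serial sum, computed in parallel
```

At scale, the same scatter/compute/gather shape shows up in MPI codes; the communication just becomes explicit instead of hidden behind a pool's map.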
Those with some time in HPC are no doubt thinking... and worrying... a lot these days about the transition to multicore. If you're in that boat, S02: Application Supercomputing and the Many-Core Paradigm Shift is a tutorial you'll want to make plans to attend. At 50 percent introductory, 25 percent intermediate, and 25 percent advanced material, this tutorial isn't for every executive, but those with at least some technical background will benefit from the discussion of current and upcoming architectures, terminology, parallel languages, and development tools.
We've selected four workshops that tie in with this year's major themes.
On Sunday is the all-day workshop Power Efficiency and the Path to Exascale Computing. This workshop deals with two of our key themes, Computational Infrastructure and Computing at Scale, and will put you in touch with the community's current thinking about building the facilities needed as technology moves from teraflops to petaflops and beyond.
Tying in closely with large-scale computational support is the product of computation: large data. The Petascale Data Storage Workshop on Monday will highlight new contributions in storage architecture, APIs, parallel file systems, and more for supporting the large amounts of data generated in today's supercomputing centers.
The half-day Workshop on Many-task Computing on Grids and Supercomputers focuses on management and execution of large-scale jobs. Organizers are building discussion around a new class of applications, which they call Many-Task Computing, characterized by computations involving multiple, distinct activities coupled by file systems or message passing.
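The coupling the organizers describe, many independent tasks exchanging data only through files, can be sketched in a few lines. In this toy Python example (the stage names and file layout are hypothetical, not from the workshop), a batch of independent tasks each writes a result file, and a downstream task reads them all:

```python
# Toy many-task pattern: a batch of independent tasks coupled to a
# downstream step only through the files they write.
# (Hypothetical illustration; not from the workshop materials.)
import os
import tempfile
from concurrent.futures import ProcessPoolExecutor

def produce(args):
    """One of many independent tasks: compute and write a result file."""
    workdir, i = args
    path = os.path.join(workdir, f"task_{i}.out")
    with open(path, "w") as f:
        f.write(str(i * i))
    return path

def consume(paths):
    """Downstream task, coupled to the producers only via their files."""
    total = 0
    for p in paths:
        with open(p) as f:
            total += int(f.read())
    return total

def run_campaign(n_tasks=8):
    with tempfile.TemporaryDirectory() as workdir:
        with ProcessPoolExecutor() as pool:
            paths = list(pool.map(produce, [(workdir, i) for i in range(n_tasks)]))
        return consume(paths)

if __name__ == "__main__":
    print(run_campaign())  # 0*0 + 1*1 + ... + 7*7 = 140
```

Real many-task workloads run thousands of such tasks across a grid or supercomputer; the file-system (or message-passing) coupling is the same idea at much larger scale.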
If you are new to the idea of workflows, or find yourself thinking about how to improve the productivity of your user community, you will be interested in attending The 3rd Workshop on Workflows in Support of Large-scale Science (WORKS08). This workshop will get you tuned up and ready to attend the workflow-related sessions later in the week if you plan to explore the Expanded Access theme.
Jun 19, 2013
Supercomputer architectures have evolved considerably over the last 20 years, particularly in the number of processors that are linked together. One aspect of HPC architecture that hasn't changed is the MPI programming model.
Jun 18, 2013
The world's largest supercomputers, like Tianhe-2, are great at traditional, compute-intensive HPC workloads, such as simulating atomic decay or modeling tornadoes. But data-intensive applications, such as mining big data sets for connections, are a different sort of workload and run best on a different sort of computer.
Jun 18, 2013
Researchers are finding innovative uses for Gordon, the 285-teraflop supercomputer housed at the San Diego Supercomputer Center (SDSC) that features a unique flash-based storage system. Since the system went online, researchers have put its incredibly fast I/O to use on a wide variety of workloads, ranging from chemistry to political science.
Jun 17, 2013
The advent of low-power mobile processors and cloud delivery models is changing the economics of computing. But just as an economy car is good at different things than a full-size truck, an HPC workload still has certain computing demands that neither the fastest smartphone nor the most elastic cloud cluster can fulfill.
Jun 14, 2013
For all the progress we've made in IT over the last 50 years, there's one area of life that has steadfastly eluded the grasp of computers: understanding human language. Now, researchers at the Texas Advanced Computing Center (TACC) are utilizing a Hadoop cluster on the center's Longhorn supercomputer to move the state of the art of language processing a little bit further.