October 30, 2008
How do we move high-performance computing forward? At Intel, we are producing technologies that enable major breakthroughs in science, engineering, medicine, and an array of other fields. At the same time, we are helping to make it simpler and more affordable for organizations to get involved with high-performance computing, from small and medium-sized businesses that need cost-effective systems to large-scale data centers that use HPC to solve new problems. But where do we go from here?
In this new series of articles, HPC@Intel: Moving HPC Forward, we will share new and innovative ideas for solving today's -- and tomorrow's -- key challenges in HPC. In the first few months, we will explore strategies for scaling performance forward, evaluate when to say no to parallelism, and explain why balanced systems can deliver performance that rivals systems with alternative architectures. Over the course of the series, we will show you how Intel is advancing the state of HPC while also bringing HPC to a wider range of users.
Advancing HPC at Intel
Intel® architectures are the choice for more than 75 percent of the world's Top500 HPC systems. But our contribution to HPC extends well beyond the production of high-performance processing architectures.
Creating balanced systems is a top priority at Intel. We know that sustainable HPC performance can be achieved only by balancing processor capacity with memory capacity and I/O bandwidth. We are helping to develop those balanced systems and to produce components that deliver significant performance gains for HPC applications.
We are also conducting upstream research on software and hardware technologies to accelerate multi-core and many-core architectures. We are bringing memory capacity closer to the cores, exploring new interconnect strategies, and examining new network fabrics and network packaging technologies. This research has already enabled us to introduce several new technologies into the HPC industry.
We are working to optimize power usage for HPC, not only at the processor and board level but also at the rack and data center level. More than 85 percent of our internal servers are HPC systems. Running those systems has taught us how to optimize power and cooling for large data centers. Now we can achieve very high power density without liquid cooling by employing careful warm and cold air management and other optimizations. We have shared and will continue to share that information with partners and end users.
Meanwhile, we provide a rich portfolio of software tools for HPC. The Intel® Cluster Toolkit includes compilers, performance analysis tools, threading tools, and libraries, such as the Intel® Math Kernel Library and Intel® MPI Library. These tools help developers scale performance forward through focused, surgical changes to code. We also offer deep software expertise. With software engineers specializing in key HPC segments, such as manufacturing, oil and gas, and financial services, we work together with industry players to optimize HPC applications.
We are also heavily involved in education. Intel has partnered with 800 universities around the world to develop curricula that will help tomorrow's software engineers develop parallel software code for HPC. A future article will detail what we are doing to make sure future developers have the skills to write code for thousands of threads running on large multi-node systems.
Working with the HPC ecosystem
We realize the importance of partnering with hardware and software vendors throughout the ecosystem to provide end users with the tools they need to succeed. We work with software developers to help optimize their applications, middleware, and drivers for current and future Intel architectures. For example, we recently released the Intel® Software Development Emulator (Intel® SDE) to support Intel® Advanced Vector Extensions (Intel® AVX), which will be introduced with the forthcoming "Sandy Bridge" processor. Intel compiler and Intel performance library support for Intel AVX will be available in early Q1 2009.
Charting the road ahead
The Intel "tick-tock" model for processor technology innovation provides the predictability that partners and end users need to maximize the return on their HPC investments. On the most recent "tick," we introduced the 45-nm process technology, which helped deliver better performance and energy efficiency in a smaller version of an existing microarchitecture. In 2009, we will start production on the next-generation 32-nm silicon process technology.
This year's "tock" will capitalize on the 45-nm technology to introduce the "Nehalem" microarchitecture, which will deliver important benefits for HPC customers. Going forward, our partners and end users can continue to count on this beat rate for innovation as they plan their HPC investments.
The tempo of these articles will be even quicker. In coming months, we will evaluate how forward scaling can address the challenges of developing software for new core counts and the inevitable enhancement of the instruction set. We will also examine the transition to parallel architectures: When should software developers say "no" to parallelism? Plus, we will consider offload options: As you optimize your code, will investments in offload options deliver the long-term ROI you need?
We will also discuss the growing use of HPC by small and medium-sized organizations, and show how Intel and our ecosystem partners are working together to make it easier for these organizations to use HPC. Collaborative programs such as Intel® Cluster Ready are lowering the barriers to HPC by helping to ensure interoperability, simplify procurement, reduce time to productivity, and decrease the total cost of system ownership.
This is an exciting time to be part of the HPC community. We look forward to showing you how we are moving HPC forward.