October 20, 2006
This week, Silicon Valley startup PANTA Systems unveiled its new server platform, PANTAmatrix. The x86-based platform represents a new breed of server that emphasizes I/O performance and SMP configurability, allowing users to dynamically allocate I/O and computational resources across the cluster. A single PANTAmatrix system can support up to 9,000 processors as well as petabytes of storage.
The PANTAmatrix platform is based on an 8U chassis containing a mixture of vertically oriented blades (or modules). Up to four of these 8U enclosures fit in a single rack. The architecture employs an integrated InfiniBand fabric to connect compute nodes with a shared I/O infrastructure. A single chassis can support two InfiniBand switch modules and up to eight AMD Opteron-based compute modules. Two compute modules can be paired dynamically via a HyperTransport interconnect to form larger SMP nodes. Since an Opteron module may contain either two or four sockets (populated with dual-core processors), nodes can be configured as 4-way, 8-way, or 16-way. Each compute module can hold up to 64 GB of memory, so a maximum of 128 GB per SMP node is possible. Interconnect bandwidth is allocated independently of the SMP size, with up to 12 GB/sec of bandwidth provided to a single node.
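To make the configuration arithmetic concrete, here is a minimal sketch (illustrative Python, not PANTA software) that enumerates the node sizes implied by the description above, assuming dual-core Opterons and 64 GB per compute module:

```python
# Illustrative sketch (not vendor software): enumerate the SMP node sizes
# described above, assuming dual-core Opterons and 64 GB per compute module.
CORES_PER_SOCKET = 2        # dual-core Opteron
MEMORY_PER_MODULE_GB = 64   # per compute module

def node_config(sockets_per_module, modules_paired):
    """Total cores and memory for a node built from one or two compute
    modules joined over HyperTransport."""
    sockets = sockets_per_module * modules_paired
    return sockets * CORES_PER_SOCKET, MEMORY_PER_MODULE_GB * modules_paired

for sockets in (2, 4):
    for modules in (1, 2):
        cores, mem_gb = node_config(sockets, modules)
        print(f"{sockets}-socket module x {modules}: {cores}-way SMP, {mem_gb} GB")
# Yields the 4-way, 8-way, and 16-way nodes (up to 128 GB) mentioned above.
```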
Alternatively, an Opteron-based compute module can be connected -- again via HyperTransport -- to an NVIDIA-based visualization module to produce a CPU-GPU node. Each visualization module contains two NVIDIA GPUs, and up to four visualization modules can be accommodated within an enclosure. The inclusion of commodity graphics processors makes this one of the first x86 server systems with such a capability. The GPU can be used either for traditional visualization functions or for application acceleration. More about this later.
PANTA also offers a 3U storage enclosure, providing up to three terabytes of capacity, connected via the InfiniBand fabric. It provides 800 MB/sec of sustained transfer rate per disk array. The high bandwidth is enabled by the PANTA storage agent using RDMA protocols for OS bypass.
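As a rough feel for what that sustained rate means in practice, the back-of-the-envelope calculation below (assuming decimal terabytes and ignoring protocol and filesystem overhead) estimates the time to stream a full 3 TB enclosure:

```python
# Back-of-the-envelope only: time to stream a full enclosure at the quoted
# 800 MB/sec sustained rate (decimal units, overhead ignored).
capacity_mb = 3 * 1_000_000   # 3 TB expressed in MB
rate_mb_per_s = 800
minutes = capacity_mb / rate_mb_per_s / 60
print(f"~{minutes:.0f} minutes to read the full 3 TB enclosure")  # ~62 minutes
```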
Like the founders of Fabric7 and Liquid Computing -- companies offering similar types of architectures -- the folks at PANTA Systems are attempting to address the unbalanced nature of traditional x86 cluster platforms, where I/O and memory starvation can severely limit performance. This is especially true for data-intensive applications that require large memory footprints such as you would find in real-time analytics, financial services, seismic simulation, data warehousing and a variety of high performance technical computing applications.
The pursuit of this market space places PANTA in the growing legion of x86 server vendors that are challenging the dominance of traditional big-iron machines. The PANTA systems aren't low-end platforms; they start at around $50K. The competition tends to be machines like HP SuperDomes, IBM Power5/Power6 servers, Sun UltraSPARC-based Sun Fire 15K/25K systems, and SGI Altix platforms. But PANTA thinks it can differentiate itself from the other high-end platform OEMs with superior system design.
Tung Nguyen, PANTA Systems founder and CTO, says data starvation is the key bottleneck today. He reminds us that back in the 1990s everyone was riding the wave of the microprocessor. With the focus on processor performance, people forgot how to build balanced systems. While CPU performance has been advancing at around 60 percent per year, memory and I/O performance have improved only 5 to 10 percent per year. Multi-core processors exacerbate the problem. So the challenge is how to feed all the compute engines. Nguyen says the solution is to add many more data pipes, while providing the ability to slice computational resources into SMP nodes of various sizes and to allocate I/O bandwidth independently across those nodes.
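A quick compounding exercise shows how fast that imbalance grows; the rates below are the ones Nguyen cites, with 7 percent taken as the midpoint of his 5-to-10-percent range:

```python
# Compound the growth rates Nguyen cites: ~60%/year for CPUs vs ~7%/year
# (midpoint of 5-10%) for memory and I/O.
cpu_growth, mem_growth = 1.60, 1.07
years = 10
cpu_gain = cpu_growth ** years
mem_gain = mem_growth ** years
print(f"After {years} years: CPU x{cpu_gain:.0f}, memory/I/O x{mem_gain:.1f}, "
      f"balance gap x{cpu_gain / mem_gain:.0f}")
# Roughly: CPU x110, memory/I/O x2.0, a gap of more than 50x.
```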
"Computer design is essentially about plumbing. I learned that from Seymour Cray in the '80s when I worked for him," says Nguyen, who worked at Cray from 1980 to 1987. "With six InfiniBand links coming out of one of our [switch modules] we have more plumbing than anybody else in the world. We have 3X the plumbing that you would get from IBM BladeCenter, HP blades, or any of those high-end systems that other people have. The maximum you see is a couple of InfiniBand links. We have six -- and believe me, we use all of them."
According to Nguyen, the company has spent a lot of time with high-end commercial HPC customers, such as you would find on Wall Street. The incumbent hardware providers are usually IBM or HP, so PANTA has to prove itself in that kind of environment. He says PANTA isn't trying to compete in a dollar/flop game. Since PANTA uses commodity technology, large vendors like IBM and HP, and even smaller companies like Rackable and Linux Networx, can buy Opterons more cheaply than PANTA can because of volume purchases. PANTA has to use its advantages in I/O performance, configurability, and overall system design to compete with the more established players.
"So if we run into a dollar/flop situation, we cannot win," explains Nguyen.
But the game changes somewhat when you're talking about GPUs, says Nguyen. Today's high-end GPUs yield about 200 gigaflops of 32-bit floating point computing power. With the recent interest in stream computing for computational acceleration, and companies like PeakStream providing software support for such systems, the calculation is changing. PANTA is starting to believe it can present a compelling dollar/flop story with its GPU modules.
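A simple dollar-per-gigaflop sketch illustrates why GPUs change the calculation. The 200-gigaflop figure comes from the paragraph above; the prices and the CPU peak rate are purely hypothetical placeholders, not PANTA or vendor pricing:

```python
# Hypothetical numbers only: the GPU's 200 GFLOPS (single precision) is from
# the article; prices and the CPU peak rate are assumed for illustration.
def dollars_per_gflop(price_usd, gflops):
    return price_usd / gflops

gpu = dollars_per_gflop(500.0, 200.0)   # assumed ~$500 high-end GPU
cpu = dollars_per_gflop(800.0, 10.0)    # assumed ~$800 dual-core CPU, ~10 GFLOPS peak
print(f"GPU: ${gpu:.2f}/GFLOPS   CPU: ${cpu:.2f}/GFLOPS")
```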
Nguyen says that in the 1980s, Cray changed the face of computing with its early vector architectures. Since then, the industry has been focused on clustering architectures; there have been no real breakthroughs. But Nguyen believes that coupling GPUs with general-purpose processors is going to rearrange the landscape of computing.
"In the last two or three months, there's been a wave of publicity about stream computing -- AMD's acquisition of ATI and PeakStream's announcement [of its stream computing platform]," observes Nguyen. "One of the things about us that is not well known about (since we've been in stealth mode for quite awhile) is that we've been shipping systems with integrated GPUs since early 2005."
The University of North Carolina chose PANTA gear because the high I/O throughput and NVIDIA GPUs met the requirements of their biomedical simulation and analysis applications. Currently PANTA only supports NVIDIA devices, but since AMD is one of PANTA's key partners, they are starting to develop a relationship with ATI. The company appears to be planning for a more comprehensive offering of GPU technology.
"There's an enormous amount of compute power in one of these GPUs," says Nguyen. "I believe what [NVIDIA and ATI] are doing is very profound. In the near future you will probably be looking at close to a teraflop, in single precision, and maybe 200 to 300 gigaflops in double precision performance."
"This is just the beginning for us," concludes Nguyen. "Our next generation will arrive in six or nine months. We will improve the I/O capability and bandwidth of the system by a factor of four. That will enable us to build the kind of system that can harness the sustained computing power that is offered by technologies like GPUs."