June 11, 2008
In all the excitement about the Roadrunner petaflop announcement this week, a bunch of other HPC news got pushed aside. One item that caught my eye was the announcement by the Canadian High Performance Computing Virtual Laboratory (HPCVL) that it had purchased a cluster made up of 78 Sun SPARC Enterprise T5140 servers, which is not a product you hear much about in the HPC space. In fact, it may be the only production system of its kind at an HPC facility.
The T5140 is a dual-socket server that uses the 8-core UltraSPARC T2 processor ("Niagara 2"). The T2 is the follow-on to the T1, which had only one floating point unit shared across its eight cores. When the T2 came along in 2007, Sun remedied this by adding an FPU to each core, making the chip suitable for technical computing.
The big deal about the T2 is that it offers lots of throughput in a very energy-efficient package. That's why the T2 servers are aimed mostly at enterprise users with scaled-out Web or data warehouse applications who want to consolidate resources. Since each processor core can handle eight threads, the 78-node cluster the Canadians bought can juggle almost 10,000 threads simultaneously. Not bad for fewer than 100 nodes.
The knock on the T2, at least for HPC, is a lack of raw performance. Each processor yields no more than 10 gigaflops or so on Linpack, mainly because of its relatively slow clock speeds -- in the 0.9 to 1.4 GHz range. If an application can mostly run out of cache, Xeon- and Opteron-based machines are going to outperform the UltraSPARC pretty handily.
Where the T2 really shines is on highly multithreaded codes that are limited by memory bandwidth -- a fairly common profile for real HPC codes. A good example is a PDE (partial differential equation) solver. In these cases, the T2 can make excellent use of its four on-chip memory controllers to speed access to RAM. Aggregate memory bandwidth per chip is advertised at 60+ GB/sec.
Last year, HPC researchers at Aachen University's Center for Computing and Communication (CCC) evaluated a pre-production system of a single-socket T2-based server against Woodcrest, Opteron, and UltraSPARC IV systems, using a number of benchmarks and application codes.
According to them, "The UltraSPARC T2 processor offers an amazing memory bandwidth, if multiple threads can be employed. And when parallelizing with OpenMP, the placement of threads and data is not critical, and also Solaris does a superb job in this respect already, whereas Linux on the Xeon and Opteron based system requires user attention." The Aachen group has published the complete evaluation.
If this kind of capability were encapsulated in an x86 processor, it would indeed be a popular little chip today. The closest we'll get to an x86 version of the T2 will probably be a low-power, 8-core Intel Nehalem processor sometime in 2009.
But by that time, Sun is expected to be offering its next-generation SPARC processor, called "Rock." Rock is a 16-core processor that will represent an entirely new architecture. The company says both thread performance and floating point performance will be better than the T2, and the processor will support new technologies like transactional memory and "scout threads." Sun originally wanted to deliver the processors this year, but is now targeting introduction for the second half of 2009.
Posted by Michael Feldman - June 10, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.