August 31, 2007
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
In odd move, Sun changes ticker symbol to JAVA;
NITRD soliciting comment on federal networking research plan;
Researchers build foundation for photo-transistors;
Startup offers up to 500 GB of network attached cache;
Purdue studies undergraduate parallel programming;
Intel readies 3 Penryns sporting 1600MHz FSBs.
>>AMD announces ISA extensions for HPC
AMD has announced they're adding new instruction set extensions designed to improve performance in HPC, multimedia, and security apps.
The extensions, called SSE5, evolve the Streaming SIMD Extensions originally introduced in 1999. Although AMD is making the specification available today to foster a dialogue with developers, the new instructions won't appear in products until AMD's Bulldozer core arrives in 2009. (Really? Bulldozer?)
The Register dug into the spec a little (http://www.theregister.co.uk/2007/08/30/amd_sse5/):
For one, AMD will follow the RISC crowd with support for 3-Operand Instructions -- up from two. So, unlike in the past where you would do A plus B and then have to store the result of the operation in A or B, developers can now store the result in a third location. This should reduce the total number of instructions needed to perform certain tasks and require less effort on the part of developers to keep track of registers.
The support for 3-Operand Instructions allows AMD to roll out a "fused multiply accumulate" instruction as well. This melds multiplication and addition to permit "iterative calculations with one instruction."
Read the spec for yourself at http://developer.amd.com/sse5.jsp.
>>Coming soon to Omaha: 9,200 core dual boot cluster
The University of Nebraska at Omaha is building a new supercomputer. According to ClusterMonkey (http://www.clustermonkey.net//content/view/209/2/), the machine will have 1,151 Dell servers with dual-socket Barcelonas, for a total of over 9,200 cores. The system is slated to dual boot both Linux and Windows CCS. More at http://insidehpc.com/2007/08/29/coming-soon-to-omaha-9200-core-dual-boot-cluster/.
>>Analyst digs in on Intel's HyperTransport copy
You've no doubt heard of Intel's Common System Interface (CSI) by now. This is Intel's shot at moving its own architectures off the front-side bus, just as AMD did with HyperTransport.
The Register reports (http://www.theregister.co.uk/2007/08/28/intel_csi_kanter/) on the work of analyst David Kanter at Real World Technologies, who has dug through patents and interviewed engineers to put together a detailed report on what he thinks CSI will look like. We won't get to hear Intel's plans for its technology until the IDF next month.
An excerpt from Kanter's report:
Unlike the front-side bus, CSI is a cleanly defined, layered network fabric used to communicate between various agents. These 'agents' may be microprocessors, coprocessors, FPGAs, chipsets, or generally any device with a CSI port.... Initial CSI implementations in Intel's 65nm and 45nm high performance CMOS processes target 4.8-6.4GT/s operation, thus providing 12-16GB/s of bandwidth in each direction and 24-32GB/s for each link.
You can read Kanter's report at http://realworldtech.com/page.cfm?ArticleID=RWT082807020032&p=1.
>>IBM introduces new Cell blade
IBM announced this week the release of the QS21 Cell blade. The QS21 replaces the QS20 and includes two Cell processors, more memory, dual GbE, and dual-port QDR IB. According to the company:
The IBM BladeCenter QS21 is one of the most power efficient computing platforms to date, generating a measured 1.05 Giga Floating Point Operations Per Second (GigaFLOPS) per watt. With its peak performance of approximately 460 GFLOPS, clients can achieve 6.4 Tera Floating Point Operations Per Second (TeraFLOPS) in a single BladeCenter chassis and over 25.8 TeraFLOPS in a standard 42U rack.
The line of Cell blades is an attempt by IBM to move the benefits of HPC into the enterprise by packaging exotic technology in a familiar form. (Links to more detailed coverage at http://insidehpc.com/2007/08/30/ibm-intro-news-cell-blade/.)