August 17, 2007
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
NCAR fires up a new 2048-core Blue Gene/L;
AMD cranks up the clock on select dual-core chips;
AMD creates entire website devoted to Intel's evils;
Intel launches two new quad-core Xeons;
TACC starts new international academic supercomputing consortium;
Mercury launches new visualization subsidiary.
>>DARPA HPCS Phase III to shed research funding
Recently, it has come to light that funding for productivity evaluation research under HPCS is going away after the end of Phase II. Asked for comment on the subject, DARPA spokesman Jan Walker responded that "Phase III is not a phase that requires as much research as past phases -- it is focused on development and building of a prototype. However, if Phase III contractors IBM and Cray want to fund researchers they certainly can."
The good news is that productivity evaluations on the HPCS systems will continue, but according to Walker "Some [of them] will be done by the vendors themselves. These evaluations will be a self-evaluation to make sure they are on track for productivity gains."
A cynical person might observe that, since the Phase III vendors IBM and Cray will be allowed to evaluate themselves, it would be a big surprise if they failed to reach the productivity goals in the contract. But not us...we're not cynical. No sir.
Read more of insideHPC.com reporter Mike McCracken's DARPA interview at http://insidehpc.com/2007/08/13/darpa-hpcs-phase-3-to-shed-research-funding/.
>>AMD releases spec for enabling real time optimization of apps
AMD released its Light-Weight Profiling spec this week, the first step in its Hardware Extensions for Software Parallelism initiative. From AMD (http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543~118952,00.html):
LWP is designed to enable code to make dynamic and real-time decisions about how best to improve the performance of concurrently running tasks, using techniques such as memory organization and code layout, with very little overhead. These capabilities are particularly beneficial to runtime environments like Java and .NET, which can run multiple threads and are used to develop an increasingly large percentage of applications.
This step is targeted at the runtime, and the company is talking about it specifically in terms of "managed" runtime environments such as the Java and .NET examples cited above. But this is an interesting development, and it ties in to research directions in the compiler community, where developers are beginning to explore schemes that use runtime information to prove conditions on loops the compiler doesn't know enough about to parallelize at compile time, in order to get the best performance.
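To make that loop-parallelization idea concrete, here is a minimal, purely illustrative sketch (plain C with OpenMP, not AMD's LWP interface) of the kind of decision a runtime can make once it has information the compiler lacked: the loop runs in parallel only when a cheap runtime check shows the source and destination arrays don't overlap.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Parallelize only when a runtime check proves what the compiler could not:
 * that dst and src do not alias. */
static void scale(double *dst, const double *src, long n, double a)
{
    uintptr_t d = (uintptr_t)dst, s = (uintptr_t)src;
    size_t bytes = (size_t)n * sizeof(double);
    int no_overlap = (d + bytes <= s) || (s + bytes <= d);
    long i;

    if (no_overlap) {
        #pragma omp parallel for           /* e.g. compile with -fopenmp */
        for (i = 0; i < n; i++)
            dst[i] = a * src[i];
    } else {
        for (i = 0; i < n; i++)            /* safe serial fallback */
            dst[i] = a * src[i];
    }
}

int main(void)
{
    long n = 1 << 20;
    double *src = malloc(n * sizeof *src);
    double *dst = malloc(n * sizeof *dst);
    if (!src || !dst) return 1;

    for (long i = 0; i < n; i++)
        src[i] = (double)i;

    scale(dst, src, n, 2.0);               /* arrays don't overlap: parallel path */
    printf("dst[42] = %g\n", dst[42]);

    free(src);
    free(dst);
    return 0;
}

The difference LWP is meant to make, as AMD describes it, is that decisions like this one could be driven by low-overhead profile data gathered while the application runs, rather than by a one-off pointer check.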
According to AMD, the Hardware Extensions for Software Parallelism program
...will encompass a broad set of innovations designed to improve software parallelism, and thus application performance, through new hardware features in future versions of AMD processors.
>>Intel's Penryn set for November debut
DailyTech is reporting (http://www.dailytech.com/article.aspx?newsid=8451) that Intel has set a launch date for Penryn, the next in the Core 2 line that currently includes Woodcrest and Conroe. Penryn is a process shrink, this time down to 45nm (we've talked about Penryn before; here and here), with those high-k gates all the kids are talking about.
Intel has set the launch date for its Penryn based quad-core Xeon processor family. The company intends to launch seven new Harpertown based models ranging from 2.0-to-3.16 GHz on November 11, according to a posting on Intel's reseller webpage. Standard "E" bin and performance "X" bin processors launch on November 11.
Intel Xeon processors carrying the "E" designation feature 80-watt TDP ratings while the "X" bin processors have higher 120-watt TDP ratings. Intel does not plan to launch the low-power "L" models until Q1'08, with two models in the pipeline.
DailyTech found this info on a public page for Intel's resellers; no formal announcement has been made, which is odd for a company with an itchy press release finger.