August 17, 2007
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
NCAR fires up a new 2,048-core Blue Gene/L;
AMD cranks up the clock on select dual-core chips;
AMD creates entire website devoted to Intel's evils;
Intel launches two new quad-core Xeons;
TACC starts new international academic supercomputing consortium;
Mercury launches new visualization subsidiary;
>>DARPA HPCS Phase III to shed research funding
Recently, it has come to light that funding for productivity evaluation research under HPCS is going away after the end of Phase II. Asked for comment on this subject, DARPA spokesperson Jan Walker responded that "Phase III is not a phase that requires as much research as past phases -- it is focused on development and building of a prototype. However, if Phase III contractors IBM and Cray want to fund researchers they certainly can."
The good news is that productivity evaluations on the HPCS systems will continue, but according to Walker "Some [of them] will be done by the vendors themselves. These evaluations will be a self-evaluation to make sure they are on track for productivity gains."
A cynical person might observe that, since the Phase III vendors IBM and Cray will be allowed to evaluate themselves, it would be a big surprise if they failed to reach the productivity goals in the contract. But not us...we're not cynical. No sir.
Read more of insideHPC.com reporter Mike McCracken's DARPA interview at http://insidehpc.com/2007/08/13/darpa-hpcs-phase-3-to-shed-research-funding/.
>>AMD releases spec for enabling real time optimization of apps
AMD released its Light-Weight Profiling spec this week, the first step in its Hardware Extensions for Software Parallelism initiative. From AMD (http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543~118952,00.html):
LWP is designed to enable code to make dynamic and real-time decisions about how best to improve the performance of concurrently running tasks, using techniques such as memory organization and code layout, with very little overhead. These capabilities are particularly beneficial to runtime environments like Java and .NET, which can run multiple threads and are used to develop an increasingly large percentage of applications.
This step is targeted at the runtime, and the company is talking about it specifically in terms of "managed" runtime environments, as in the Java and .NET examples cited above. But this is an interesting development, and it ties into research directions in the compiler community, where developers are beginning to explore schemes that use runtime information to prove conditions on loops the compiler doesn't know enough about to parallelize at compile time, in order to get the best performance.
According to AMD, the Hardware Extensions for Software Parallelism program
...will encompass a broad set of innovations designed to improve software parallelism, and thus application performance, through new hardware features in future versions of AMD processors.
>>Intel's Penryn set for November debut
DailyTech is reporting (http://www.dailytech.com/article.aspx?newsid=8451) that Intel has set a launch date for Penryn, the next member of the Core 2 line that currently includes Woodcrest and Conroe. Penryn is a process shrink, this time down to 45nm (we've talked about Penryn before; here and here), with those high-k gates all the kids are talking about.
Intel has set the launch date for its Penryn based quad-core Xeon processor family. The company intends to launch seven new Harpertown based models ranging from 2.0-to-3.16 GHz on November 11, according to a posting on Intel's reseller webpage. Standard "E" bin and performance "X" bin processors launch on November 11.
Intel Xeon processors carrying the "E" designation feature 80-watt TDP ratings while the "X" bin processors have higher 120-watt TDP ratings. Intel does not plan to launch the low-power "L" models until Q1'08, with two models in the pipeline.
DailyTech found this info on a public page for Intel's resellers; no formal announcement has been made, which is odd for a company with an itchy press release finger.