August 10, 2007
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
The Scouts get into HPC, new badge in works?;
Mercury Computer Systems releases PS3 SDK;
ANSYS used by ITER in building first fusion facility;
NVIDIA announces new GPU server;
Chelsio and Woven demonstrate 10GbE RDMA solution at Sandia;
COMPETES Act makes HPC R&D Act provisions law;
The Portland Group announces development tools targeting quad-core Opterons.
>>NSF announces giant and huge supercomputers
NSF has formalized the announcements that were leaked on the internet last week and reported in the New York Times (http://insidehpc.com/2007/08/07/nsf-petascale-story-in-the-nyt/).
The University of Illinois at Urbana-Champaign (UIUC) will get $208 million over the next 4.5 years to build a petascale machine named "Blue Waters." The vendor wasn't named in the release, but the machine's name may be a hint that confirms IBM, as reported in the NYT piece.
And the University of Tennessee at Knoxville Joint Institute for Computational Science (JICS) will get $65 million over 5 years to build the Track 2 machine. The UT project includes partners at Oak Ridge (as was rumored), as well as TACC and NCAR.
No official word at this time from NSF or the awardees on what machines are proposed.
>>IBM packs up running apps and moves them around
IBM has started talking about its new Live Partition Mobility technology, now in beta. From their release (http://www-03.ibm.com/press/us/en/pressrelease/22005.wss):
Live Partition Mobility, currently in beta testing with general availability planned later this year, is a continuous availability feature that will enable POWER6-based servers, such as the System p 570, to move live logical partitions -- including the entire operating system and all its running applications -- from one server to another while the systems are running.
This sounds similar to the functionality that Evergrid is currently working on, some of which is already deployed. I believe neither offering requires users to change their codes, but the IBM solution is supported in hardware.
Because Live Partition Mobility is implemented in the POWER6 chip, hardware and its associated firmware, the feature is operating system independent, allowing the movement of AIX or Linux operating systems and associated running workloads. For instance, using Live Partition Mobility customers will be able to dynamically consolidate UNIX or Linux workloads -- without interruption -- onto fewer servers during off-peak times, allowing them to turn off computers and save energy.
The company is talking about this functionality in terms of business apps, of course, but HPC centers could use it to juggle load among machines for better balance, or to make room for high priority applications.
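As a concrete illustration of that last point, here is a toy Python sketch of off-peak consolidation planning: given per-server partition loads, it greedily plans migrations that fully empty the lightest-loaded servers onto heavier ones, so the emptied machines can be powered down. The function name, data shapes, and greedy strategy are all my own illustration, not IBM's implementation or API.

```python
# Toy consolidation planner (illustrative only, not IBM's API).
# servers: {server_name: [(partition_name, load), ...]}
# capacity: maximum total load a single server can hold.
def plan_consolidation(servers, capacity):
    """Return (migrations, poweroff): a list of (partition, src, dst)
    moves and the servers that end up empty and can be switched off."""
    placed = {s: sum(l for _, l in parts) for s, parts in servers.items()}
    migrations, poweroff = [], []
    # Try to empty the lightest-loaded servers first.
    for src in sorted(servers, key=lambda s: placed[s]):
        trial, sim = [], dict(placed)
        for part, load in servers[src]:
            # Prefer the heaviest remaining target that still has headroom.
            targets = sorted((s for s in servers
                              if s != src and s not in poweroff),
                             key=lambda s: -sim[s])
            dst = next((t for t in targets if sim[t] + load <= capacity), None)
            if dst is None:
                trial = None  # src can't be fully emptied; leave it alone
                break
            trial.append((part, load, src, dst))
            sim[dst] += load
        if trial:
            for part, load, s, d in trial:
                placed[d] += load
                migrations.append((part, s, d))
            placed[src] = 0
            poweroff.append(src)
    return migrations, poweroff
```

For example, three servers with loads 50, 40, and 10 against a capacity of 100 would collapse onto a single server, freeing the other two to be powered off; a real scheduler would then reverse the moves when demand returns.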
>>Sun launches "world's fastest commodity microprocessor" at 1.4 GHz
Sun is basing that claim on SPEC performance, not clock speed. They are talking, of course, about the UltraSPARC T2, formerly known as the Niagara II chip. The chip has 8 cores (the same as the Niagara I), but now runs 8 threads per core and puts a floating-point unit in each core (up from 4 threads per core and a single FPU for the whole chip). The chip is designed to kick virtualization butt.
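For a quick sense of the per-chip arithmetic behind those figures:

```python
# Per-chip totals from the T1 vs. T2 figures quoted above.
t1_threads = 8 * 4   # Niagara I: 8 cores x 4 threads = 32 hardware threads
t2_threads = 8 * 8   # UltraSPARC T2: 8 cores x 8 threads = 64 hardware threads
t1_fpus, t2_fpus = 1, 8  # one FPU per chip vs. one FPU per core

thread_gain = t2_threads // t1_threads  # 2x the hardware threads
fpu_gain = t2_fpus // t1_fpus           # 8x the floating-point units
```

The 8x jump in floating-point resources is what makes the T2 interesting for HPC workloads, not just the thread-heavy web and virtualization jobs the T1 targeted.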
The DailyTech has a good, quick overview at http://www.dailytech.com/Sun+Announces+New+UltraSPARC+T2+as+Worlds+Fastest+Microprocessor/article8340.htm.
Interestingly, Sun is going to open source the thing:
Having surpassed 5,500 downloads of the OpenSPARC T1 source code, Sun is working to release source code for the UltraSPARC T2 processor to the OpenSPARC community at www.opensparc.net.
>>The Green Grid announces strategy, timelines
The Green Grid yesterday released details of its near term plans. After getting started just six months ago, the group has come a long way, putting together a solid technical strategy and the core teams necessary to get the job done.
At a press event before the release yesterday, the group reiterated its bottom up focus (partly in response to questions about criticisms from the Gartner Group): start with the data center now, where they feel they can make a near-term impact and get their hands around the problem, and work out to larger issues over the long term when there is a contribution to be made.
The group seems very focused on practical steps, which is refreshing and to my mind indicates that they might actually accomplish what they've set out to do. I've outlined their strategy along with the initial deliverable schedule at http://insidehpc.com/2007/08/08/the-green-grid-announces-strategy/. The schedule is ambitious: the group plans to deliver nine major documents between now and the end of the year.