September 18, 2008
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
Cray's new personal supercomputer;
New tool measures your HPC productivity against peers;
SGI delays reporting for fiscal year just ended;
Running Linpack on NCSA's #23 Windows HPC Server machine;
Top500 launched in Asia;
Cluster Resources to Support Cray CX1;
Fujitsu Supercomputer Goes to History Museum;
HPC on Twitter;
Sun updates compute trailer;
TACC and the hurricane;
ClusterCorp Releases TotalView Rocks Roll;
>>McCain has answered Science Debate questions, too
McCain specifically calls out information technology research and computer science as important in several of his answers. He says he wants to invest in basic and applied research, particularly in new and emerging areas such as information technology, and that he will "support significant increases in basic research" at the various federal agencies. He stopped short, however, of saying he would fully fund the America COMPETES Act, in sharp contrast to Obama, who has promised the doubling called for in that legislation.
You can (and voters should) read the whole thing at the Science Debate 2008 site, where McCain's responses are placed alongside Obama's. As Melissa pointed out, McCain, unlike Obama, does refer to computers in his response: he uses the phrase "computer science" twice.
>>SiCortex Bumps Performance, Increases Energy Efficiency
SiCortex announced this week that it has doubled the price/performance of its entire product line. The improvement comes by way of a performance bump in the silicon, advances in system software, and leading-edge compilers. The company also released some initial TCO numbers comparing its systems against an Intel x86-based cluster.
"SiCortex has doubled its performance capacity without increasing the number of processors," said Christopher Stone, president and CEO of SiCortex. "Our computers now double the ROI at the time of purchase, and lower overall TCO by more than 60 percent over a three-year period."
For more info on the performance and efficiency increase, read the full release.
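Claims like "double the ROI" and "lower TCO by more than 60 percent over three years" are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses entirely invented placeholder figures (not actual SiCortex or x86 cluster pricing) to show how a lower purchase price combined with much lower power and administration costs can compound into a 60-percent-plus TCO gap over a three-year period:

```python
# Hypothetical TCO comparison. All dollar figures are made-up
# placeholders for illustration, not vendor pricing.

def three_year_tco(purchase_price, annual_power_cost, annual_admin_cost, years=3):
    """Total cost of ownership: up-front purchase plus recurring costs over `years`."""
    return purchase_price + years * (annual_power_cost + annual_admin_cost)

# Assumed numbers for a conventional x86 cluster vs. a low-power system.
x86_cluster = three_year_tco(purchase_price=500_000,
                             annual_power_cost=120_000,
                             annual_admin_cost=80_000)
low_power   = three_year_tco(purchase_price=300_000,
                             annual_power_cost=15_000,
                             annual_admin_cost=20_000)

savings = 1 - low_power / x86_cluster
print(f"x86 cluster 3-yr TCO: ${x86_cluster:,}")
print(f"Low-power  3-yr TCO: ${low_power:,}")
print(f"TCO reduction: {savings:.0%}")
```

With these assumed inputs the gap works out to roughly 63 percent, which shows how a claim in that range is arithmetically plausible when recurring power and administration costs dominate the purchase price.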
>>Appro partners with NEC
Appro has been in the news lately with wins in the DOE and Japan (University of Tsukuba). This week they announced a partnership with NEC. From the release:
Appro (http://www.appro.com), a leading provider of high-performance enterprise computing systems, and NEC Corporation, one of the world's leading providers of Internet, broadband network and enterprise business solutions, announce a strategic partnership today that will see Appro Xtreme-X Supercomputer and Appro Cluster Engine Software Management added and branded as part of NEC's HPC solution offering.
The partnership will enable Appro and NEC to work together toward a common goal: reducing the complexity of technology integration when deploying and managing integrated solutions, while lowering customer total cost of ownership (TCO). Commencing September 2008, Appro Supercomputer products will be added to NEC's HPC offering as a first step in this partnership.
Benefits to the participants? NEC gets a value-oriented solution and Appro's software (which is by all accounts good), and Appro gets access to the EMEA market where NEC is a much bigger presence than they are here in the U.S.
"This strategic partnership is a major breakthrough for Appro's supercomputers entry into the Europe, the Middle East and Africa (EMEA) HPC market. By combining NEC's strong technology base and market position in EMEA together with Appro's cluster deployment successes in the HPC market, the partnership will provide sustainable competitive advantages enabling both Appro and NEC to take greater advantage of this growing market segment," said Daniel Kim, CEO of Appro.
Although I wonder about the actual value of that access: as I read the announcement, NEC will brand the Appro gear under its own label. So while Appro will gain revenue and deployment experience that should help it execute better on new business, it won't be directly growing its brand outside the US.