November 11, 2010
Here is a collection of highlights from this week's news stream as reported by HPCwire.
Penguin Computing Launches New Disk2Server Data Management Solution for HPC in the Cloud
AMD Outlines Roadmap at Financial Analyst Day
NVIDIA Names Three New 2010 CUDA Fellows
ERDC Home to New Cray XE6 Supercomputer
ScaleMP Releases vSMP Foundation 3.5
SGI Announces AMD Support for Altix ICE Supercomputers
Fujitsu Launches Global Initiative to Develop Mathematical Library for Petascale Computing
Platform Computing Upgrades HPC Management Offering
Convey Announces New Hybrid-core Computer
Georgia Tech Engaged in $100 Million DARPA Program to Develop Next Generation of High Performance Computers
AMAX Launches High-Density GPGPU Server Solution
Ohio Supercomputer Center, R Systems Merge Efforts to Aid Industry
Supercomputer Will Support Brazilian Global Climate System Model
New Research Provides Effective Battle Planning for Supercomputer War
Hardcore Computer to Showcase Liquid Submersion Computing Technology
Internet2 to Deploy First 100 Gigabit Ethernet Research Network
Georgia Tech Keeps Sights Set on Exascale at SC10
Supercomputing Conference Highlights NASA Earth, Space Missions
SGI to Unveil Breakthrough Hybrid Computing Platform at SC10
European Supercomputer Tera 100 Hits Petaflop Mark
This week, Bull announced that its European installation, Tera 100, has officially reached petaflop performance. The system achieved 1.05 petaflops of sustained performance on the Linpack benchmark, or 1.05 million billion operations per second, against a peak performance of 1.25 petaflops. That result makes Tera 100 the number one supercomputer in Europe, and the system is expected to place near the top of the TOP500 list due to be announced next week at SC10.
The cluster features 4,370 bullx S series servers powered by 17,480 Intel Xeon 7500 processors. Its more than 140,000 memory modules deliver a total capacity of 300 TB, and its 20 petabytes of storage are accessible at speeds of up to 500 GB/sec.
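Those figures hang together on a quick back-of-the-envelope check. The short Python sketch below is purely illustrative, using only the numbers from Bull's announcement:

# Tera 100 figures as announced by Bull
servers = 4370           # bullx S series servers
processors = 17480       # Intel Xeon 7500 sockets
sustained_pflops = 1.05  # Linpack sustained (Rmax)
peak_pflops = 1.25       # theoretical peak (Rpeak)

print(processors / servers)            # 4.0 sockets per server
print(sustained_pflops / peak_pflops)  # 0.84, i.e., roughly 84% Linpack efficiency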
As a general-purpose supercomputer, Tera 100 was designed by Bull and CEA-DAM to run a wide range of applications in fields such as healthcare, sustainable development and homeland security. The machine will also support the simulation program at the Military Applications Division (DAM). High levels of availability and reliability should maximize uptime, enabling applications to run virtually around the clock.
World's Largest Video Database of Proteins Published
Score another victory in the fight against deadly diseases. The world's largest video data bank of protein motions was published today to accelerate and facilitate the design of new pharmaceutical agents. It took scientists at the Barcelona Supercomputing Center four years of running advanced simulations on the MareNostrum supercomputer to achieve this feat. The new database, called MoDEL, holds more than 1,700 proteins and is partially accessible through the Internet so that researchers worldwide can share this valuable knowledge bank.
Modesto Orozco, head of the molecular modelling and bioinformatics group at IRB Barcelona, director of the Life Sciences Programme of the Barcelona Supercomputing Center and professor at the University of Barcelona, explains the significance of the project:
"Nowadays we design drugs as if the proteins against which they are to act were static and this goes a long way to explain failures in the development of new drug therapies because this is not a true scenario. With MoDEL this problem is solved because it offers the user from 10,000 to 100,000 photos per protein, and these confer movement to these structures and allow a more accurate design."
MoDEL currently covers 40 percent of human proteins with a known structure. Even more strikingly, it holds 30 percent of the human protein structures most likely to be targets of a new drug, and the researchers aim to increase that figure to 80 percent within two to three years.
The new database allows drugs to be designed more efficiently. According to one researcher, several pharmaceutical companies are already using MoDEL to develop medications for the treatment of cancer and inflammatory diseases. These potentially lifesaving therapies could become available this year.
A reference article on this work appeared in the November 10 issue of Structure. The project is supported by IRB Barcelona, the Barcelona Supercomputing Center, the Marcelino Botín Foundation, the Fundación Genoma España, the National Bioinformatics Institute and several European projects.
The Week in Numbers
SGI, Cray and NVIDIA all released financial statements in the past week or so, with only NVIDIA turning a profit. Presented here are the key figures, followed by prepared comments from company leadership.
SGI released results for its first quarter of fiscal 2011 on November 3. Notably, the hardware vendor shipped Altix UV systems to 49 customers and partnered with DARPA and Intel to work toward an exascale system. The company reported revenue of $112.9 million versus $100.1 million for the same quarter last year, a 12.8 percent increase. SGI's net loss narrowed to $11.2 million from $17.6 million in the year-ago quarter.
Mark J. Barrenechea, SGI CEO, delivered the requisite spin and highlighted new products:
"Our Q1 results are a solid start to FY11 as we grew revenues and expanded margins quarter-to-quarter, with strong performance in the government and cloud industries. Customers are responding well to our new line of products, including Altix UV and COPAN. We are reaffirming our previously announced guidance."
Cray's reported revenue for the third quarter of fiscal 2010 was $42.8 million, compared to $58.6 million in the prior-year period, a decrease of 27 percent. Revenue for the first nine months of 2010 was $100.0 million, about half what it was for the same period in 2009. It should also be noted that Cray's financial announcement was over a week late, with no explanation given.
Peter Ungaro, president and CEO of Cray, finds the silver lining, pinning financial hopes on the last quarter of 2010:
"While we have a lot of work left to do, we remain on track to deliver strong results for 2010, including revenue growth and profitability for the year. We have been shipping our new Cray XE6 supercomputers for the past several months and we are in the installation and acceptance process for all of the largest systems included in our 2010 outlook. In addition to our continued strength at the high-end of the supercomputing market, including exciting new wins at the University of Stuttgart and the University of Chicago, I am also pleased with the progress of our Custom Engineering initiative and the leverage it drives in our business model. Our momentum continues to build with the recent release of our latest generation Cray XE6 supercomputer and its two major upgrades planned for next year, and with our growing CE business we are positioned for continued growth and profitability."
NVIDIA's revenue for the third quarter of fiscal 2011 was down by 6.6 percent from the same period a year earlier, $843.9 million versus $903.2 million. Net income rose to $84.9 million from the second quarter's net loss of $141.0 million. Net income for the same period a year earlier was $107.6 million.
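For readers who want to verify the quarter-over-quarter changes quoted above, the brief Python sketch below recomputes them from the reported figures; the helper function is purely illustrative and not drawn from any company filing:

def pct_change(current, prior):
    # Change relative to the prior period, as a percentage
    return (current - prior) / prior * 100

print(round(pct_change(112.9, 100.1), 1))  # SGI Q1 FY11 revenue: +12.8%
print(round(pct_change(42.8, 58.6), 1))    # Cray Q3 2010 revenue: -27.0%
print(round(pct_change(843.9, 903.2), 1))  # NVIDIA Q3 FY11 revenue: -6.6%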
NVIDIA President and CEO Jen-Hsun Huang explains why things are looking up:
"We have turned the corner. We have restored our speed of execution and are regaining share in desktops. Only seven months after shipping our first processor based on the Fermi architecture, we have begun production on seven more GPUs, including the GeForce GTX 580, which sets a new standard for performance. The Fermi architecture is now in every segment of our desktop, notebook and workstation product lines.
"We've also made big strides this quarter in positioning ourselves at the center of cloud and mobile computing, which are transforming the computer landscape. Tesla now powers some of the world's fastest and greenest supercomputers. And Tegra will soon be featured in a range of smartphones and tablets we're building with our partners."