The Weekly Top Five

By Tiffany Trader

February 3, 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the computing power on display at SC10’s Student Cluster Competition; the University of Portsmouth’s new supercomputer; IBM Watson’s Linux platform; multicore advances at North Carolina State; and Intel’s new approach to university funding.

SC10’s Student Cluster Competition Raises the Bar

The student team from Louisiana State University was one of three teams to break the teraflop barrier at SC10’s Student Cluster Competition. This is the first year that any team has achieved that distinction, and the honor is shared with teams from the University of Texas and National Tsing Hua University (Taiwan).

In the Student Cluster Competition at SC10, which took place in New Orleans in November, eight teams gathered from around the country and from as far away as Russia and Taiwan to design and build clusters that solve real-world problems. The teams prepared for months working with their advisors and vendor partners. Winning teams were selected by a panel of experts, based on visualization output, presentations and interviews.

The LSU students received vendor support from HP and LATG, Mellanox, Portland Group and Adaptive Computing and were advised by Isaac Traxler, Unix Services Manager at LSU’s High Performance Computing (HPC) and Center for Computation & Technology. Under Traxler’s tutelage, the students spent one night a week for six months working on the project, in addition to many hours spent working on their own.

With 144 cores, the LSU cluster ran the competition's four open source applications while staying within the contest's 26-amp power limit.
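For a rough sense of what that constraint means per core, here is a back-of-the-envelope check; the 120-volt circuit figure is my assumption (the article states only the 26-amp cap), and the per-core number ignores memory, disks, fans and networking.

```c
#include <stdio.h>

int main(void) {
    /* 26 A limit is from the article; 120 V show-floor circuit is an
       assumption, not something stated in the competition rules here. */
    const double amps  = 26.0;
    const double volts = 120.0;
    const int    cores = 144;

    double budget_watts   = amps * volts;          /* roughly 3,120 W total */
    double watts_per_core = budget_watts / cores;  /* roughly 21.7 W/core,
                                                      before any non-CPU
                                                      components are counted */

    printf("Total power budget: %.0f W\n", budget_watts);
    printf("Per-core budget:    %.1f W\n", watts_per_core);
    return 0;
}
```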

UK-Based Supercomputer to Further Cosmic Research

Scientists at the University of Portsmouth are about to get a new supercomputer, one that has the equivalent strength of approximately 1,000 desktop systems. The system will give cosmologists an edge when it comes to understanding galaxy formation and even the origin of gravity itself.

Named “SCIAMA,” the 1,008-core cluster was built by Dell and designed to process large amounts of astronomical data very quickly. Researchers at the University’s Institute of Cosmology and Gravitation (ICG) will use the cluster to solve complex cosmological problems, like simulating vast regions of the universe and exploring the properties of hundreds of millions of galaxies.

The supercomputer was named in honor of Dennis Sciama, a leading figure in the astrophysics and cosmology community. SCIAMA is also an acronym for SEPnet Computing Infrastructure for Astrophysical Modelling and Analysis.

Gary Burton, ICG’s senior specialist technician and the person who will soon be managing the supercomputer, explained that “the huge power of a supercomputer like SCIAMA is necessary to deal with the vast amount of observational data coming from satellites, telescopes and other detectors. Using it will allow us to explore the whole of cosmic history and analyse data that contains fundamental clues about the origins of the Universe.”

Watson Supercomputer Is SUSE Linux Machine

SUSE Linux is about to get its 15 minutes of fame. Novell announced this week that IBM Watson’s DeepQA software is running on SUSE Linux Enterprise Server 11. Watson is the supercomputer that will soon have the distinction of being the first non-human Jeopardy contestant. The novel tournament takes place Feb. 14-16.

Watson contains more than 200 million digital pages of information and operates at a speed of over 80 teraflops. IBM has designed Watson with a combination of deep analytics and rapid processing speeds that can make sense of the kind of “natural language” questions that are at the core of this popular primetime gameshow.

Linux has a long history of use in the field of high performance computing, and this is spelled out in the announcement:

Watson’s “Jeopardy!” appearance serves as further validation of the advantages of Linux in high-performance computing environments, as Linux has long been regarded as the operating system of choice among the fastest and most complex environments in the world. In the latest TOP500 list of the world’s most powerful supercomputers, 459 are running Linux and six of the top 10 systems are based on SUSE Linux Enterprise or a derivative of it.

NC State Research Team Speeds Chip Communication

North Carolina State University researchers have developed a hardware technology, called HAQu, that boosts software performance by speeding communication between the cores on a chip. In multicore setups, core-to-core communication is rather inefficient, with the cores exchanging data through memory as a "third-party" intermediary. If the cores could pass data to one another directly, it would save a lot of time.

The computer engineers have detailed their findings in a paper, "HAQu: Hardware-Accelerated Queueing for Fine-Grained Threading on a Chip Multiprocessor" (PDF), which will be presented at the International Symposium on High-Performance Computer Architecture in San Antonio, Texas, on Feb. 14.

Dr. James Tuck, an assistant professor of electrical and computer engineering at NC State and co-author of the paper, explained in the university’s announcement that the “technology is more efficient because it provides a single instruction to send data to another core, which is six times faster than the best state-of-the-art software” (that the researchers could find). He went on to state that HAQu is “not hardware designed to communicate data on its own, but is hardware that expedites data-sharing using existing data paths on a computer chip.”

Even though it is a piece of hardware, HAQu is similar to software communication tools in that it leverages a chip's existing data paths. It also reduces overall energy draw: although the added hardware consumes more power, it finishes its work so much faster that net energy consumption goes down.
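To make the comparison concrete, the sketch below shows the kind of software-only queue HAQu is measured against: two cores exchange values through an ordinary ring buffer in shared memory, the "third-party intermediary" pattern described above. This is an illustrative baseline of my own, not code from the paper.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define QSIZE 1024               /* must be a power of two */

/* Minimal single-producer/single-consumer queue living in shared memory. */
typedef struct {
    long buf[QSIZE];
    atomic_size_t head;          /* next slot the consumer reads  */
    atomic_size_t tail;          /* next slot the producer writes */
} spsc_queue;

static spsc_queue q = { .head = 0, .tail = 0 };

static void enqueue(long v) {
    size_t t = atomic_load_explicit(&q.tail, memory_order_relaxed);
    while (t - atomic_load_explicit(&q.head, memory_order_acquire) == QSIZE)
        ;                        /* spin while full */
    q.buf[t & (QSIZE - 1)] = v;
    atomic_store_explicit(&q.tail, t + 1, memory_order_release);
}

static long dequeue(void) {
    size_t h = atomic_load_explicit(&q.head, memory_order_relaxed);
    while (atomic_load_explicit(&q.tail, memory_order_acquire) == h)
        ;                        /* spin while empty */
    long v = q.buf[h & (QSIZE - 1)];
    atomic_store_explicit(&q.head, h + 1, memory_order_release);
    return v;
}

static void *consumer(void *arg) {
    (void)arg;
    long sum = 0;
    for (int i = 0; i < 1000000; i++)
        sum += dequeue();
    printf("consumer received sum = %ld\n", sum);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, consumer, NULL);
    for (long i = 0; i < 1000000; i++)
        enqueue(i);              /* each send costs several instructions */
    pthread_join(t, NULL);
    return 0;
}
```

Each send here involves index loads, a store into the buffer and a flag update; the six-fold speedup quoted above comes, per the researchers, from collapsing that kind of multi-instruction sequence into a single instruction.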

The same research team was responsible for a parallelization technique that could enable common computer programs to run up to 20 percent faster. The non-traditional approach works on programs that are normally difficult to parallelize, such as word processors and Web browsers, by running memory-management functions on a separate thread. That work has also been written up as a paper (PDF).
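The general idea can be sketched as follows; this is my own minimal illustration of offloading deallocation to a helper thread, not the NC State implementation.

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define BATCH 4096

static void *to_free[BATCH];
static int   n_queued = 0;
static int   done = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;

/* Application thread: hand the block off instead of freeing it inline. */
static void deferred_free(void *p) {
    pthread_mutex_lock(&lock);
    while (n_queued == BATCH)            /* wait if the helper falls behind */
        pthread_cond_wait(&cv, &lock);
    to_free[n_queued++] = p;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&lock);
}

/* Helper thread: drain the queue and do the actual free() calls. */
static void *mm_helper(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (n_queued == 0 && !done)
            pthread_cond_wait(&cv, &lock);
        if (n_queued == 0 && done) {
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        void *p = to_free[--n_queued];
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&lock);
        free(p);                         /* memory management off the
                                            application's critical path */
    }
}

int main(void) {
    pthread_t helper;
    pthread_create(&helper, NULL, mm_helper, NULL);

    for (int i = 0; i < 100000; i++) {   /* simulated application work */
        char *buf = malloc(256);
        memset(buf, 0, 256);
        deferred_free(buf);
    }

    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&lock);
    pthread_join(helper, NULL);
    puts("all allocations released by helper thread");
    return 0;
}
```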

Intel Labs Commits $100 Million to University Research

Intel Corp. announced plans to invest $100 million in US university research over the next five years. Under the new model, the funding reaching researchers could increase five-fold.

Intel Labs will launch multiple Intel Science and Technology Centers over the coming year in an effort to boost innovations in computing and communications. The centers will pursue advances in visual computing, mobility, security and embedded solutions. Stanford University will host the first center, with a focus on creating visualization solutions for both consumers and professionals.

From the release:

This first Intel Science and Technology Center, as well as those that will follow later this year, represents a new model of collaboration for the company. Until now, Intel Labs ran open collaboration centers near research universities and a substantial portion of the company’s funding focused on operating, maintaining and staffing these facilities. The new centers will be Intel-funded and jointly led by Intel and university researchers. They are designed to provide more dollars in the hands of researchers, and to encourage tighter collaboration between academic thought leaders in essential technology areas such as visual computing, security and mobile computing. For maximum flexibility, Intel will be able to tune its research agenda across the research centers over time. Intel plans to invite proposals from the academic community to continue pursuing the creation of additional Intel Science and Technology Centers.

Read more about the Stanford-based center here.
