ISC Beyond the Hans Meuer Era

By Nages Sieslack, ISC

June 19, 2014

Hans Werner Meuer (1936–2014) and his legacy need little introduction within the high-performance computing (HPC) community. In Europe, he is known as the “Father of European supercomputing.” Hans, as he was fondly known in the community, became involved in data processing in 1960 and played various roles in the supercomputing world for the next 54 years of his life.

While serving as director of the computing center and professor of computer science, Hans organized the world’s first supercomputing conference in 1986 – the Mannheim Supercomputer Seminar. In its inaugural year, it drew 81 attendees. For a long time, this modest-sized conference was the only event where supercomputer manufacturers could exhibit their products and users could discuss their applications.

Fifteen years later, Hans changed the name to the International Supercomputing Conference (ISC) and established a more professional tone by moving the conference to modern venues within Germany. He also added an exhibition that brought in key industry players as sponsors.

It was during the 1986 conference, with the help of a young colleague, Erich Strohmaier, that Hans started the “game” of publishing lists of the systems built by the major supercomputer manufacturers of the day. At first, the list consisted only of the systems of vendors who attended the conference, regardless of relative compute speed. But due to the enormous performance difference between low-end and high-end models, the increasing availability of massively parallel processing (MPP) systems, and the sharp rise in computing power of the high-end models from workstation suppliers, Jack Dongarra stepped in to help Hans add more structure to the list. In 1993, they started ranking the world’s most powerful computers according to the Linpack benchmark, a de facto “standard” developed by Dongarra.

Hans’ inclination to continually refine whatever he was working on and to explore new areas did not go unnoticed. Long-time friend and ISC program advisor Horst Gietl was struck by Hans’ intellectual curiosity, recalling that he had little patience with conservative approaches. “He drew pleasure from discovering things for himself,” noted Horst. “He was never a follower of other people’s opinions.”

It was this curiosity that led Hans to constantly inject new topics into the ISC program, attracting an ever-wider array of people. Today the event boasts 2,500 attendees. Over its five days, 300 speakers present their topics of interest across 30 sessions that run in parallel. Regular ISC attendees will surely recall Hans’ perennial opening line: “This is our best conference ever!”

The tale of how Hans and Horst came to work together is an interesting one and exemplifies Hans’ knack for picking friends. Horst was an attendee of the Mannheim conference, but the two lost touch after he went to work on parallel processing systems used for video streaming. In 2006 they ran into each other in Horst’s hometown of Munich, crossing paths for less than 10 seconds. Hans made it a point to reestablish the lost connection, which later gave birth to a fruitful collaboration between them.

Some people claim that ISC will not be the same without Hans. His son Thomas agrees that his father’s winning personality will be sorely missed at ISC, and that, more than that, Hans was the figurehead. At the same time, ISC has evolved into a large, multi-faceted event that takes scores of people to design the program and run the conference.

Recalling the amazing support and condolences that poured in after Hans’ passing, Thomas is optimistic that the community will continue to see ISC as a significant HPC conference. “Many of our customers and attendees have personally conveyed to us that ISC is a must-attend event for them,” he said. “The reason is quite simple. We offer a quality program and also ample networking opportunities.”

That’s not to say the conference will remain static. Horst remarked that this year’s program is more application-oriented than in previous years. A number of sessions will focus on the “real value of HPC” theme, including topics such as visualization, HPC in life sciences, extreme computing challenges, cloud computing, and trends for big data in HPC. There will also be a session in the industry track that discusses support structures for HPC in commercial enterprises.

Nevertheless, Horst noted that the 2014 program will also offer many sessions on more traditional supercomputer topics like programming models, future supercomputer directions, quantum computing, fault tolerance and resilience, performance measurement tools and power challenges.

“I would say the ISC’14 program is an interesting mix of HPC topics, which we hope will motivate the supercomputer community to join us in Leipzig,” said Horst.

The shifting balance between commercial and non-commercial HPC will also be reflected in the conference. Over the next five years, Horst expects industrial HPC to gain more “attention” in the program. ISC introduced the industry track in 2013, and this year the focus is on commercial innovation via HPC technologies. Horst is hoping to see more simulation engineers and independent software vendors attending the two-day program. “It is a known fact that the HPC requirements for the industry will grow for them to stay competitive in a globalized world … life science and finance are some examples,” explained Horst. The same is also true for social networks, which require large HPC systems to extract and analyze valuable information from rapidly growing data volumes.

Another ISC change on the horizon is the conference venue. In 2015, which will mark ISC’s 30th anniversary, the event will be held in Frankfurt. For logistical reasons it will take place July 12–16, breaking from the tradition of hosting it in June. According to Thomas, the city is perfect for the next ISC. It offers a very modern convention facility, a huge range of hotels, perfect transportation (hint: Frankfurt Airport) and a vibrant downtown area within walking distance.

Asked about what will be new in 2015, Thomas was willing to offer this bit of information: “Since Frankfurt is one of the world’s most important financial centers, at least one session will be dedicated to financial services and its use in HPC cloud services. Furthermore we will be extending our scientific program and for the first time we will be offering a workshop day.”

Bernd Mohr, a senior scientist at the Juelich Supercomputing Centre whose work focuses on performance analysis of parallel software, joined the ISC program team a couple of months back as the future ISC Workshop Chair. Questioned on the need for a full day devoted to workshops in 2015, he explained that neither the conference session chairs nor presenters like having interesting workshops competing for their audience. “While they feel that workshops are stealing their audience, workshop organizers feel that they need to compete with the conference program, and the attendees complain that there is too much going on in parallel,” said Bernd. “Workshops will be ideal for those who always thought BoF sessions are too short to present and discuss their proposed topic,” he continued. “They are also ideal for European and international research projects that want to present their research results to a larger audience.”

Bernd is no stranger to the HPC scene or the ISC conference. He has spoken a number of times in the main conference sessions. “The Future of Performance Optimization Tools” was the topic of his first invited talk, which he admits he almost messed up because he was so nervous standing on the stage before the audience. Over the years he gradually became more involved in the conference as Hans noticed Bernd’s enthusiasm as a presenter and his strong interest in the event. Bernd initially advised Hans and Horst as a freelance consultant, and in 2012 Hans offered him a permanent post, wanting him on the program team to improve the quality of the technical program.

Because of the July date for ISC’15, it looks like TOP500 fans will have to wait an extra month for the announcement of the 45th edition of the list in 2015! Regardless of the timeframe, the Linpack benchmark is often a subject of controversy in the community and undoubtedly this will continue to be the case in 2014, 2015, and beyond.

While the Linpack benchmark may not be as relevant in the much more diverse HPC application landscape that exists today, Thomas maintains that the simplicity of the list is its biggest advantage. “My assumption is that the current metric will continue to be the leading benchmark for the TOP500 list,” he said. “Although other benchmarks might be better suited for specific application problems, Linpack provides one single number and is easy to understand.”
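For readers unfamiliar with how Linpack arrives at its “one single number,” the benchmark times the solution of a dense system of linear equations Ax = b and converts the elapsed time into a floating-point rate. The following is a minimal, unofficial sketch in Python using NumPy (not the real HPL code that TOP500 submissions use); the function name and problem size are illustrative assumptions:

```python
import time
import numpy as np

def linpack_style_gflops(n=2000, seed=0):
    """Time the solution of a dense n x n system Ax = b and report GFLOP/s.

    This mirrors the spirit of the Linpack benchmark (LU factorization of a
    dense random matrix), not the official HPL implementation.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)      # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    # Conventional Linpack operation count: 2/3 n^3 + 2 n^2 flops
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    gflops = flops / elapsed / 1e9

    # Scaled residual check, in the spirit of HPL's verification step,
    # to confirm the computed answer is numerically sound
    residual = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
    return gflops, residual
```

The simplicity Thomas points to is visible here: whatever the machine, the result collapses to a single GFLOP/s figure, which is what makes the TOP500 ranking so easy to compare across systems and across decades.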

According to Bernd, there is also the historical value of the list, which has provided a consistent way of measuring computer performance for over 20 years. As a result it has been invaluable in predicting the performance trajectory of supercomputers and analyzing architectural trends. But he also confesses that the Linpack benchmark is too FLOPS-centric and misses other important aspects of current HPC applications: “For me, the real value of HPC are the applications using these systems that solve real-world societal problems and these results and successes are not emphasized enough.”
