HPCwire Debuts Outstanding Leadership Awards at SC15

By John Russell

November 16, 2015

This year HPCwire has established a new category within its Annual Readers and Editors Choice Awards program to recognize Outstanding Leadership in HPC. We realize there is no single preeminent leader in HPC and that’s a good thing. The diversity of opinion and expertise is a major driver of progress. There are many individuals whose accomplishment and influence within HPC (and beyond) represent important leadership and are deserving of recognition.

In that vein, we think the inaugural group of nominees well represents the impressive work and achievement that everyone in the HPC community aspires to. The group encompasses a wide range of disciplines and roles, all of which are necessary to advance HPC and its impact on society. So while HPCwire readers and editors have already selected “winners” – you’ll have to discover them elsewhere – it’s entirely appropriate to shine a spotlight on all of the nominees.

The 2015 HPCwire Outstanding Leadership nominees include:

  • Jack Dongarra, University of Tennessee
  • Patricia K. Falcone, Lawrence Livermore National Laboratory
  • Rajeeb Hazra, Intel
  • Satoshi Matsuoka, Tokyo Institute of Technology
  • Horst Simon, Lawrence Berkeley National Laboratory
  • Thomas Sterling, Indiana University
  • Rick Stevens, Argonne National Laboratory
  • Pete Ungaro, Cray
  • Gil Weigand, Oak Ridge National Laboratory
  • Thomas Zacharia, Oak Ridge National Laboratory

These are of course familiar names within the HPC community. HPCwire asked each of the nominees to submit a short bio and to answer two questions: 1) Within your domain of expertise, what do you see as the biggest technology challenge facing HPC progress and how is that likely to be overcome? 2) Tell us something that few know about you with regard to your interests and what recharges you outside of work.

Their answers, which we present here, are as diverse as the group – who knew Satoshi Matsuoka was a karaoke fan, that Thomas Zacharia has a passion for his vintage Jaguar coupe, or that Horst Simon recently took up surfing? Enjoy.

Jack Dongarra

Short Bio:
Dongarra is a University Distinguished Professor in the Electrical Engineering and Computer Science Department at the University of Tennessee and a researcher at Oak Ridge National Laboratory. He is the author of the LINPACK benchmark and co-author of the TOP500 list and the High Performance Conjugate Gradient benchmark (HPCG). Dongarra has been a champion of the need for algorithms, numerical libraries, and software for HPC, especially at extreme scale, and has contributed to many of the numerical libraries widely used in HPC.

HPC Challenge & Opportunity:
While the problems we face today are similar to those we faced ten years ago, the solutions are more complicated and the consequences greater in terms of performance. For one thing, the size of the community to be served has increased and its composition has changed. The NSCI has, as one of its five objectives, “Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing,” which implies that this “technological base” is not coherent now. This claim is widely agreed to be true, although opinions differ on why it is so and how to improve it. The selection of software for general use requires complete performance evaluation, and good communication with “customers”—a much larger and more varied group than it used to be. Superb software is worthless unless computational scientists are persuaded to use it. Users are reluctant to modify running programs unless they are convinced that the software they are currently using is inferior enough to endanger their work and that the new software will remove that danger.

From the perspective of the computational scientist, numerical libraries are the workhorses of software infrastructure because they encode the underlying mathematical computations that their applications spend most of their time processing. Performance of these libraries tends to be the most critical factor in application performance. In addition to the architectural challenges they must address, their portability across platforms and different levels of scale is also essential to avoid interruptions and obstacles in the work of most research communities. Achieving the required portability means that future numerical libraries will not only need dramatic progress in areas such as autotuning, but also need to be able to build on standards—which do not currently exist—for things like power management, programming in heterogeneous environments, and fault tolerance.
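
Autotuning, in this context, means empirically searching over implementation parameters (tile sizes, loop orderings, unrolling factors, and so on) and keeping whichever variant runs fastest on the machine at hand. A minimal Python sketch of the idea, timing a blocked matrix multiply over a few candidate tile sizes, might look like the following; the matrix size, tile candidates, and function names are illustrative assumptions, not drawn from any particular library.

```python
import random
import time

def blocked_matmul(A, B, n, tile):
    """Multiply two n x n matrices (lists of lists) using square tiles of size `tile`."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    row_c = C[i]
                    for k in range(kk, min(kk + tile, n)):
                        a_ik = A[i][k]
                        row_b = B[k]
                        for j in range(jj, min(jj + tile, n)):
                            row_c[j] += a_ik * row_b[j]
    return C

def autotune_tile(n=128, candidate_tiles=(8, 16, 32, 64)):
    """Time each candidate tile size once and keep the fastest for this machine."""
    A = [[random.random() for _ in range(n)] for _ in range(n)]
    B = [[random.random() for _ in range(n)] for _ in range(n)]
    best_tile, best_time = None, float("inf")
    for tile in candidate_tiles:
        start = time.perf_counter()
        blocked_matmul(A, B, n, tile)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_tile, best_time = tile, elapsed
    return best_tile, best_time

if __name__ == "__main__":
    tile, seconds = autotune_tile()
    print(f"fastest tile size on this machine: {tile} ({seconds:.3f} s)")
```

Production autotuners search far larger parameter spaces and cache results per architecture, but the principle is the same: measure rather than guess.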

Advancing to the next stage of growth for computational science, which will require the convergence of HPC simulation and modeling with data analytics on a coherent technological base, will require us to solve basic research problems in Computer Science, Applied Mathematics, and Statistics. At the same time, going to exascale will clearly require the creation and promulgation of a new paradigm for the development of scientific software. To make progress on both fronts simultaneously will require a level of sustained, interdisciplinary collaboration among the core research communities that, in the past, has only been achieved by forming and supporting research centers dedicated to such a common purpose. A stronger effort is needed by both government and the research community to embrace such a broader vision. We believe that the time has come for the leaders of the Computational Science movement to focus their energies on creating such software research centers to carry out this indispensable part of the mission.

What You Do To Recharge:
Outside of my research into HPC I enjoy photography and watching and interacting with our two grandchildren.

 

Patricia K. Falcone

Short Bio:
Falcone is the Deputy Director for Science and Technology at the Lawrence Livermore National Laboratory (LLNL) in Livermore, California. She is the principal advocate for the Laboratory’s science and technology base and oversees the strategic development of the lab’s capabilities. A member of the senior management team, she is responsible for the lab’s collaborative research with academia and the private sector, as well as its internal investment portfolio, including Laboratory Directed Research and Development.

HPC Challenge & Opportunity:
In my view, the biggest challenge facing HPC progress is unlocking the creativity and innovation of talented folks across multiple domains including industry, academia, and laboratories and research institutes, in an integrated manner. There are both big challenges and big opportunities, but neither the challenges will be met nor the opportunities realized without focused efforts to push boundaries as well as targeted collaborations that yield benefits in myriad application spaces. Also necessary is bringing along talent and interest among young scholars, benefiting from creative research and technology disruptions, and working together to achieve ever increasing performance and enhanced impacts for scientific discovery, national security, and economic security.

What You Do To Recharge:
Personally, outside of work I enjoy family and community activities, as well as reading and the arts.

 

Rajeeb Hazra

Short Bio:
Hazra is Vice President, Data Center Group, and General Manager of the Enterprise and HPC Platforms Group at Intel, responsible for all technical computing across high-performance computing and workstations. Hazra has said that this new world of “HPC Everywhere” will require unprecedented innovation, which ties directly into his driving of Intel’s code modernization efforts and investment in its Intel Parallel Computing Centers Program.

HPC Challenge & Opportunity:
Let me approach this question from a different angle. You’ve asked what I see as the biggest technology challenge facing HPC progress, but the barrier – the hurdle – is much bigger than any one technology.  Traditional HPC is evolving into a new era with computational and data analytics capabilities we’ve never come close to experiencing before. The biggest hurdle we face in driving HPC progress is one of rethinking our approaches to problem solving and educating and training the community at large to better understand how to use the evolving HPC platforms and take full advantage of unprecedented levels of parallel performance.

With that being said, the technical challenge is to re-architect HPC systems in such a way as to achieve significantly reduced latency and orders-of-magnitude improvement in bandwidth, and to deliver balanced systems that can accommodate both compute- and data-intensive workloads on the same platform. HPC elements such as fabric, memory, and storage continue to evolve and get better every year. But architecting future HPC systems to be able to integrate the latest elements as they become available is a new direction. With the right system framework and the latest innovations in processors, memory, fabric, file systems and a new HPC software stack, we are setting the stage for an extended period of rapid scientific discovery and tremendous commercial innovation that will change the landscape of private industry, academia and government research.

We also believe HPC progress will be shaped by the field of machine learning, an area we have been researching at Intel labs for several years.  You will be hearing a lot more about machine learning throughout the rest of this decade and Intel is fully committed to driving leadership in this exciting area.

Most people are starting to recognize that Intel has evolved our HPC business over the years to be so much more than just a processor company. Everything I’ve mentioned in this discussion refers to foundational elements of the Intel® Scalable System Framework, our advanced architectural approach for designing HPC systems with the performance capabilities necessary to serve a wide range of workloads such as traditional HPC, Big Data, visualization and machine learning.

This holistic, architectural approach is indeed the industry-changing technical challenge but one that we are well on the way to solving with our Intel® Scalable System Framework.

What You Do To Recharge:
I think like most people in this industry, I deeply value the time I get to spend with my family. I travel a great amount, so my down time is usually not scripted. I like to explore various interests and it’s often something spontaneous that sparks my passion at any given time. I thoroughly enjoy photography and find it both relaxing and stimulating. One thing most people wouldn’t know about me is my passion for music. The right music can lift your spirits and change your perspective. I enjoy listening to emerging music artists from around the world, and I appreciate all types of music but particularly fusion.

 

Satoshi Matsuoka

Short Bio:
Matsuoka is a professor at Tokyo Institute of Technology (TITech) and a leader of the TSUBAME series of supercomputers. Matsuoka built TITech into an HPC leader, adopting GPGPUs and other energy efficient techniques early on and helping put Japan on a course for productive exascale science. He is a fellow of the ACM and European ISC, and has won many awards, including the JSPS Prize from the Japan Society for Promotion of Science in 2006, the ACM Gordon Bell Prize for 2011, and the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology in 2012.

HPC Challenge & Opportunity:
Over the long term the biggest challenge is the inevitable end of Moore’s law. HPC has been the leader in accelerating computing for the past decades, during which we have seen a 1000x increase in performance every 10 years. Such an exponential trend has allowed innovative, qualitative, and in fact revolutionary changes in the science and engineering enabled by HPC, as well as in everyday computing, where devices such as smartphones, which would once have been science fiction, have become a ubiquitous reality, again revolutionizing society.
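
For perspective, a factor of 1000 every ten years amounts to roughly a doubling of performance each year, since

$$1000^{1/10} = 10^{3/10} \approx 1.995 \approx 2.$$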

However, with Moore’s law ending, we shall no longer be able to rely on lithographic improvements increasing transistor counts exponentially at constant power, potentially nullifying such advances. This would also be a serious crisis for the HPC industry as a whole: without such performance increases there will no longer be strong incentives to replace machines every 3-5 years, significantly compromising vendors’ business and/or resulting in a steep rise in the cost of the infrastructure. We are already observing this in various metrics, such as the Top500 performance increase visibly slowing down over the past several years and the significant delay in the deployment of exascale systems.

As such, as a community, it is essential that we embark on bold research measures to find alternative means of continuing the trend of performance increase, ideally exponentially over the coming years and driven by parameters other than transistor counts. I am in the process of launching several projects in this area along with the best collaborators both inside and outside Japan, and I hope this will become the community research objective.

What You Do To Recharge:
By nature I have a strong affinity for technology advances that meet human bravery and ingenuity to compete and advance the state of the art. One hobby of mine is following and studying the global space programs, from the days of Sputnik and Apollo to recent achievements such as the New Horizons Pluto flyby. I frequent American and Japanese space sites such as the Johnson Space Center, awed by the majestic presence of the Saturn V launch vehicle, but the most memorable recent moment was a visit to the Cosmonaut Museum during my recent trip to Moscow, taking in Russian space history and how they competed with the US at the time, with Luna 3, the N1, and the Buran shuttle.

Similarly, sporting competition of the same nature, such as Formula One racing, is something I have followed for the last 30 years, seeing not only memorable racing moments but also significant advances in the machine technology, especially the recent hybrid 1.6-litre turbo cars that retain the speed but with amazing fuel efficiency. Of course I watch most races on TV, but I sometimes go to the circuit to enjoy the live ambience. Now if only the Grand Prix in Austin were a week closer to SC, I would be able to attend both…

In that respect karaoke is also of the same realm: in Japan every machine now has a sophisticated scoring system to judge your singing, and I often take it as a challenge to see how highly the machine rates my talents 🙂

 

Horst D. Simon

Short Bio:
Simon, an internationally recognized expert in computer science and applied mathematics, was named Berkeley Lab’s Deputy Director on September 13, 2010. Simon joined Berkeley Lab in early 1996 as Director of the newly formed National Energy Research Scientific Computing Center (NERSC), and was one of the key architects in establishing NERSC at its new location in Berkeley. Before becoming Deputy Lab Director, he served as Associate Lab Director for Computing Sciences, where he helped to establish Berkeley Lab as a world leader in providing supercomputing resources to support research across a wide spectrum of scientific disciplines. Simon holds an undergraduate degree in mathematics from the Technische Universität in Berlin, Germany, and a Ph.D. in mathematics from the University of California, Berkeley.

Simon’s research interests are in the development of sparse matrix algorithms, algorithms for large-scale eigenvalue problems, and domain decomposition algorithms for unstructured domains for parallel processing. His algorithm research efforts were honored with the 1988 and 2009 Gordon Bell Prizes for parallel processing research. He was also a member of the NASA team that developed the NAS Parallel Benchmarks, a widely used standard for evaluating the performance of massively parallel systems. He is co-editor of the biannual TOP500 list that tracks the most powerful supercomputers worldwide, as well as related architecture and technology trends.

HPC Challenge & Opportunity:
Let’s imagine 2030 for a moment. The one certain fact about 2030 is that all ten people on your list will be 15 years older and probably all of us will be in retirement. The other certain fact is that growth in computer performance will have slowed down even further. We can see this slowdown already, for example in how the date when we will reach exascale keeps being pushed further into the future. These exascale challenges have been discussed at length, and we all agree on what they are: power consumption, massive parallelism, and so on. But what’s important to keep in mind is that some time between 2025 and 2030 there will be a huge change. Technology will change, because CMOS-based semiconductors will no longer keep up, and people will change, because a new generation born after 2000 will take over HPC leadership.

What You Do To Recharge:
I like to say that being one of the senior managers of a national lab is my job, and doing mathematics is my hobby. What recharges me is an evening in the City: good food and movies, theater, ballet, or opera. For the outdoors, I like bicycling, and I recently started to try surfing, but only the baby waves in Linda Mar. Something that only a few people know: I spent a few weeks in Kenya last summer working as a volunteer in a Christian school, teaching 8th grade math – there you have it, I just can’t get away from mathematics.

 

Thomas Sterling

Short Bio:
Sterling is Chief Scientist and Executive Associate Director of the Center for Research in Extreme Scale Technologies (CREST), and Professor of Informatics and Computing at Indiana University. Dr. Sterling’s current research focuses on the ParalleX advanced execution model to guide the development of future-generation exascale hardware and software computing systems, as well as the HPX runtime system to enable dynamic adaptive resource management and task scheduling for significant improvements in scalability and efficiency. This research has been conducted through multiple projects sponsored separately by DOE, NSF, DARPA, the Army Corps of Engineers, and NASA. Since receiving his PhD from MIT in 1984, Sterling has engaged in applied research in fields associated with parallel computing system structures, semantics, design, and operation in industry, government labs, and higher education. Dr. Sterling has received numerous awards and in 2014 was named a Fellow of the American Association for the Advancement of Science for his efforts to advance science and its applications. Well known as the “father of Beowulf” for his research in commodity/Linux cluster computing, Dr. Sterling has conducted research in parallel computing including superconducting logic, processor in memory, asynchronous models of computing, and programming interfaces and runtime software.

HPC Challenge & Opportunity:
While many would cite power and reliability as the principal challenges facing HPC progress, I think the key factors inhibiting continued advancement are starvation (inadequate parallelism), overhead (work to manage hardware and software parallelism), latency (time for remote actions), and contention (the inverse of bandwidth and throughput). Of course there are certain classes of workload that will scale well, perhaps even to exascale. But I worry about those more tightly coupled, irregular, time-varying, and even strong-scaled applications that are less well served by conventional practices in programming and system structures.

What You Do To Recharge:
Sailing. It’s about the only thing I do during which I do not think of work for extended time. I don’t so much get on a sailboat as put it on, and become one with the wind and waves.

 

Rick Stevens

Short Bio:
Stevens is Associate Laboratory Director for Computing, Environment and Life Sciences at Argonne National Laboratory. He helped build Argonne’s Mathematics and Computer Science division into a leading HPC center and has been co-leading the DOE planning effort for exascale computing.

HPC Challenge & Opportunity:
I think there are two long term trends that offer both opportunity and research challenges.

1. HPC + Data ==> Mechanism + Learning
The first is the need to combine traditional simulation-oriented HPC architectures and software stacks with data-intensive architectures and software stacks. I think this will be possible through the creation of mechanisms (containers and related technologies) that enable users to combine not only their own code from these two areas but entire community-contributed software stacks. A special case of this new kind of integrated application is one that combines mechanistic models with statistical learning models. Machine learning provides a new dimension to exploit for approximate computing and begins to open up alternative models of computation that may provide a means to continue scaling computational capabilities as hardware evolves toward power and reliability constraints more like those of biological systems. Future systems that are hybrids between traditional von Neumann design points and neuromorphic devices would help accelerate this trend.
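
As a toy illustration of coupling a mechanistic model with a statistical learning model, the sketch below fits a cheap surrogate to a handful of runs of an “expensive” simulation and then uses it to screen many candidate parameters. The damped-oscillator model, parameter ranges, and function names are illustrative assumptions, not drawn from any Argonne project.

```python
import random

def mechanistic_model(damping, steps=20000, dt=0.001):
    """'Expensive' mechanistic model: integrate a damped oscillator
    x'' + damping*x' + x = 0 with explicit Euler and return its final energy."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        a = -damping * v - x
        x, v = x + dt * v, v + dt * a
    return 0.5 * (x * x + v * v)

def fit_linear_surrogate(samples):
    """Ordinary least-squares fit: energy ~ slope * damping + intercept."""
    n = len(samples)
    mean_d = sum(d for d, _ in samples) / n
    mean_e = sum(e for _, e in samples) / n
    slope = (sum((d - mean_d) * (e - mean_e) for d, e in samples)
             / sum((d - mean_d) ** 2 for d, _ in samples))
    intercept = mean_e - slope * mean_d
    return lambda damping: slope * damping + intercept

if __name__ == "__main__":
    # Train the cheap statistical surrogate on a handful of expensive runs.
    training_points = [0.1, 0.3, 0.5, 0.7, 0.9]
    samples = [(d, mechanistic_model(d)) for d in training_points]
    surrogate = fit_linear_surrogate(samples)

    # Screen many candidates with the surrogate, then confirm the most
    # promising few (lowest predicted energy) with the full model.
    candidates = [random.uniform(0.1, 0.9) for _ in range(1000)]
    for d in sorted(candidates, key=surrogate)[:3]:
        print(f"damping={d:.3f}  surrogate={surrogate(d):+.4f}  "
              f"full model={mechanistic_model(d):.4f}")
```

In a real coupled workflow the surrogate would be a far richer learned model and the mechanistic code a full simulation, but the division of labor – cheap approximate screening backed by expensive high-fidelity confirmation – is the same.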

2. STEM ==> STEAM
The second is cultural. Today there is a cultural (tools, algorithms, languages, approach, etc.) divide between those who use computers to do math, science and engineering and those who use them for nearly everything else. There is a deep need to re-integrate these worlds: those who program close to the metal with those who have little interest in performance, and those who are driven by the structure of the problem with those who are driven by the elegance of the code. It’s only by reconnecting with the deep “maker” inside of everyone that we will have the diverse skill base needed to truly tackle the HPC and scientific computing challenges of the future, where it’s possible to understand and periodically re-invent the entire computing environment. This is what true co-design for future HPC will require.

What You Do To Recharge:
I like to travel, camp and pursue outdoor adventures. Kayaking, dog sledding, snowshoeing and canoeing are some things that I enjoy but don’t get to do enough of. I also enjoy cooking and entertaining friends with my wife. Sometimes I like to cook with a traditional Dutch oven over an open fire. I also enjoy learning new areas of science… recently I’ve been spending down time studying cancer and neuroscience, and watching Doctor Who with my daughters.

 

Pete Ungaro

Short Bio:
Ungaro is President and Chief Executive Officer of Cray, Inc., and has successfully continued the tradition of this distinctly American supercomputing company with each generation of Cray supercomputer, including the next-generation “Shasta” line, selected to be the flagship “pre-exascale” supercomputer at Argonne National Laboratory.

HPC Challenge & Opportunity:
While it is clear that growing power constraints are a huge technology challenge, I believe the biggest challenge is the difficulty of getting high levels of performance out of the new system architectures this energy challenge is driving. Very large numbers of heterogeneous cores in a single node, unbalanced compute to communications capabilities, expanded and very deep memory and storage hierarchies, and a requirement for huge amounts of application parallelism are making it challenging to get good performance. At Cray, we are busy at work with our R&D team experimenting with different technologies including new many-core processor architectures, ways to control the power usage at a total system and application level, programming tools that help expose parallelism, and techniques and tools to help optimize placement and automate data movement in the hierarchy. The other major focus for us right now is working on ways to integrate the traditional modeling and simulation done on supercomputers with the ever-increasing need to do analytics on all the data that is being generated.  Our vision is to bring these two areas together in a single, adaptive supercomputer that can completely abolish the need to move data between multiple systems to properly handle these two very different but challenging workloads, and handle the entire workflow on a single system. Without a doubt it is a very exciting time in the world of supercomputing!

What You Do To Recharge:
With three kids busy with school and sports, that is where most of my extra time goes. My daughter is a senior in high school and a very good volleyball player, and my twin boys are in 8th grade and between the two of them play a number of sports such as football, lacrosse, basketball and parkour. Keeping up with all of them is a job in itself, but something I really enjoy doing!  My daughter recently talked me into joining CrossFit with her.  I’ve found that it’s an amazing way to recharge and stay in shape, so I can keep up with both my kids and everything we have going on at Cray.

 

Gil Weigand

Short Bio:
Weigand is the Director of Strategic Programs in the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory (ORNL). In this position Weigand develops new initiatives that integrate, consolidate, and focus the significant gains in energy S&T and computational-science capabilities on important global challenges related to energy, the environment, healthcare, emerging urban sustainability, and national security. Weigand has served in several management positions within the Department of Energy (DOE) during the late 1990s and received the Secretary of Energy Gold Medal in 1996.

HPC Challenge & Opportunity:
I will leave discussions of the computer system challenges to my computer engineering friends… Let me speak for a minute, though, about an exciting and critically important application challenge that requires all of the leadership HPC we can muster: healthcare. Today the focus is on treatment; we are a compassionate country and we fund this focus significantly. The key, however, to truly living longer, and I mean substantially longer, is understanding prevention. To this end there are models of lifespan that include significant coupling and feedback among the exposome, the genome, and other omics. This coupling is well known but not well understood. The technical challenge is understanding and modeling it so that lifespan simulations can be created to study the appropriate bio-feedback and delivery systems needed to drive innovation and discovery in behavioral, pharma, clinical process, device, and other healthcare areas. This can only be done with large-scale HPC. The payoff is enormous for our society…not only from a compassionate point of view but also an economic one. A healthier population is a more productive population.

What You Do To Recharge:
I am an avid hiker… I live in the Cherokee National Forest, not far from Knoxville, TN. In my part of the world, it is possible to hike simply by walking out my front door. Or you can drive 15 to 30 minutes and literally be at the start of hundreds of trails in the Pisgah and Cherokee National Forests or Great Smoky Mountains National Park.

 

Thomas Zacharia

Short Bio:
Zacharia is Deputy Director for Science and Technology at Oak Ridge National Laboratory and was a main driver behind Leadership Computing and in building ORNL’s scientific computing program. He recently returned from Qatar, where he initiated a national research program and the construction of world-class scientific infrastructure to support the Qatar National Research Strategy.

HPC Challenge & Opportunity:
The biggest technology challenge that I see is the dual one of creating a balanced and easily programmable machine.

Balance is principally a hardware problem. As “apex machines,” exascale computers will be few in number – but they will need to serve the same applications spectrum as lower-tier machines. Thus, there will be a premium on exascale architectures that can be balanced (perhaps dynamically) for optimal performance on a broad range of applications and their various algorithms.

Programmability is essentially a software problem.  NSCI teams will be developing machines that cope with the increasingly visible limits of CMOS while exhibiting ever higher LINPACK benchmark results. These successes will come at the expense of programmability. The more difficult to program a computer becomes, the less accessible it is to its potential user base. To prevent “apex machines” from becoming uncoupled from the broader spectrum of users and their various (current and future) applications, extraordinary efforts must be made to develop software that will shield users from arcane details of exascale machine architectures while maintaining very high levels of application performance.

The balance part of the technology challenge can be overcome by closely coupling machine architecture development with a deep understanding of application algorithm performance. This need is already understood under the label “codesign.”

The programmability part of the challenge is another matter. If the history of high performance computing is an indicator, progress here is concept limited.

What You Do To Recharge:
I enjoy escaping to our small farm by the Tennessee River to enjoy the peace and quiet and to spend time with family and friends out on the lake. I also enjoy working on my 1976 Jaguar XJ12 coupe. The XJ12 coupes are beautiful yet affordable cars that are also rare. Maintaining one is a whole other matter! The XJ12C also brings back fond memories of Jaguar, the fastest supercomputer in the world back in 2009-2010.

More live coverage of SC15.
