HPCwire Debuts Outstanding Leadership Awards at SC15

By John Russell

November 16, 2015

This year HPCwire has established a new category within its Annual Readers and Editors Choice Awards program to recognize Outstanding Leadership in HPC. We realize there is no single preeminent leader in HPC and that’s a good thing. The diversity of opinion and expertise is a major driver of progress. There are many individuals whose accomplishment and influence within HPC (and beyond) represent important leadership and are deserving of recognition.

In that vein, we think the inaugural group of nominees well represents the impressive work and achievement that everyone in the HPC community aspires to. The group encompasses a wide range of disciplines and roles, all of which are necessary to advance HPC and its impact on society. So while HPCwire readers and editors have already selected “winners” – you’ll have to discover them elsewhere – it’s entirely appropriate to shine a spotlight on all of the nominees.

The 2015 HPCwire Outstanding Leadership nominees include:

  • Jack Dongarra, University of Tennessee
  • Patricia K. Falcone, Lawrence Livermore National Laboratory
  • Rajeeb Hazra, Intel
  • Satoshi Matsuoka, Tokyo Institute of Technology
  • Horst Simon, Lawrence Berkeley National Laboratory
  • Thomas Sterling, Indiana University
  • Rick Stevens, Argonne National Laboratory
  • Pete Ungaro, Cray
  • Gil Weigand, Oak Ridge National Laboratory
  • Thomas Zacharia, Oak Ridge National Laboratory

These are of course familiar names within the HPC community. HPCwire asked each of the nominees to submit a short bio and to answer two questions: 1) Within your domain of expertise, what do you see as the biggest technology challenge facing HPC progress and how is that likely to be overcome? 2) Tell us something that few know about you with regard to your interests and what recharges you outside of work.

Their answers, which we present here, are as diverse as the group – who knew Satoshi Matsuoka was a Karaoke fan or that Thomas Zacharia has a passion for his vintage Jaguar coupe or that Horst Simon recently took up surfing! – Enjoy.

Jack Dongarra

Short Bio:
Dongarra is a University Distinguished Professor in the Electrical Engineering and Computer Science Department at the University of Tennessee and a researcher at Oak Ridge National Laboratory. He is the author of the LINPACK benchmark and a co-author of the TOP500 list and the High Performance Conjugate Gradients benchmark (HPCG). Dongarra has been a champion of the need for algorithms, numerical libraries, and software for HPC, especially at extreme scale, and has contributed to many of the numerical libraries widely used in HPC.

HPC Challenge & Opportunity:
While the problems we face today are similar to those we faced ten years ago, the solutions are more complicated and the consequences greater in terms of performance. For one thing, the size of the community to be served has increased and its composition has changed. The NSCI has, as one of its five objectives, “Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing,” which implies that this “technological base” is not coherent now. This claim is widely agreed to be true, although opinions differ on why it is so and how to improve it. The selection of software for general use requires complete performance evaluation, and good communication with “customers”—a much larger and more varied group than it used to be. Superb software is worthless unless computational scientists are persuaded to use it. Users are reluctant to modify running programs unless they are convinced that the software they are currently using is inferior enough to endanger their work and that the new software will remove that danger.

From the perspective of the computational scientist, numerical libraries are the workhorses of software infrastructure because they encode the underlying mathematical computations that their applications spend most of their time processing. Performance of these libraries tends to be the most critical factor in application performance. In addition to the architectural challenges they must address, their portability across platforms and different levels of scale is also essential to avoid interruptions and obstacles in the work of most research communities. Achieving the required portability means that future numerical libraries will not only need dramatic progress in areas such as autotuning, but also need to be able to build on standards—which do not currently exist—for things like power management, programming in heterogeneous environments, and fault tolerance.
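Dongarra's point that library performance dominates application performance can be illustrated with the operation his LINPACK benchmark measures: a dense solve of Ax = b. The sketch below is only a toy illustration in Python/NumPy (the function name and flop-rate reporting are ours, not part of any official benchmark), but it shows how almost all of the runtime lands in the LAPACK-backed solver call rather than in user code:

```python
import time
import numpy as np

def linpack_like(n, seed=0):
    """Toy dense-solve timing in the spirit of LINPACK: solve Ax = b,
    then report GFLOP/s using the ~(2/3)n^3 flop count of LU factorization."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    x = np.linalg.solve(a, b)            # LAPACK dgesv under the hood
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    # Scaled residual: a sanity check that the solve is numerically sound.
    resid = np.linalg.norm(a @ x - b) / (np.linalg.norm(a) * np.linalg.norm(x))
    return flops / elapsed / 1e9, resid

gflops, resid = linpack_like(1000)
print(f"{gflops:.2f} GFLOP/s, scaled residual {resid:.2e}")
```

Swapping in a better-tuned BLAS/LAPACK changes the reported rate without touching this script at all, which is precisely why portable, high-quality numerical libraries matter so much to application scientists.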

Advancing to the next stage of growth for computational science, which will require the convergence of HPC simulation and modeling with data analytics on a coherent technological base, will require us to solve basic research problems in Computer Science, Applied Mathematics, and Statistics. At the same time, going to exascale will clearly require the creation and promulgation of a new paradigm for the development of scientific software. To make progress on both fronts simultaneously will require a level of sustained, interdisciplinary collaboration among the core research communities that, in the past, has only been achieved by forming and supporting research centers dedicated to such a common purpose. A stronger effort is needed by both government and the research community to embrace such a broader vision. We believe that the time has come for the leaders of the Computational Science movement to focus their energies on creating such software research centers to carry out this indispensable part of the mission.

What You Do To Recharge:
Outside of my research into HPC I enjoy photography and watching and interacting with our two grandchildren.


Patricia K. Falcone

Short Bio:
Falcone is the Deputy Director for Science and Technology at the Lawrence Livermore National Laboratory (LLNL) in Livermore, California. She is the principal advocate for the Laboratory’s science and technology base and oversees the strategic development of the lab’s capabilities. A member of the senior management team, she is responsible for the lab’s collaborative research with academia and the private sector, as well as its internal investment portfolio, including Laboratory Directed Research and Development.

HPC Challenge & Opportunity:
In my view, the biggest challenge facing HPC progress is unlocking the creativity and innovation of talented folks across multiple domains including industry, academia, and laboratories and research institutes, in an integrated manner. There are both big challenges and big opportunities, but neither the challenges will be met nor the opportunities realized without focused efforts to push boundaries as well as targeted collaborations that yield benefits in myriad application spaces. Also necessary is bringing along talent and interest among young scholars, benefiting from creative research and technology disruptions, and working together to achieve ever increasing performance and enhanced impacts for scientific discovery, national security, and economic security.

What You Do To Recharge:
Personally, outside of work I enjoy family and community activities, as well as reading and the arts.


Rajeeb Hazra

Short Bio:
Hazra is Vice President of the Data Center Group and General Manager of the Enterprise and HPC Platforms Group at Intel, responsible for all technical computing across high-performance computing and workstations. Hazra has said that the emerging world of “HPC Everywhere” will require unprecedented innovation, a conviction that ties directly into his driving of Intel’s code modernization efforts and investment in its Intel Parallel Computing Centers Program.

HPC Challenge & Opportunity:
Let me approach this question from a different angle. You’ve asked what I see as the biggest technology challenge facing HPC progress, but the barrier – the hurdle – is much bigger than any one technology.  Traditional HPC is evolving into a new era with computational and data analytics capabilities we’ve never come close to experiencing before. The biggest hurdle we face in driving HPC progress is one of rethinking our approaches to problem solving and educating and training the community at large to better understand how to use the evolving HPC platforms and take full advantage of unprecedented levels of parallel performance.

With that being said, the technical challenge is to re-architect HPC systems in such a way as to achieve significantly reduced latency and orders-of-magnitude improvement in bandwidth, and to deliver balanced systems that can accommodate both compute- and data-intensive workloads on the same platform. HPC elements such as fabric, memory, and storage continue to evolve and get better every year. But architecting future HPC systems to integrate the latest elements as they become available is a new direction. With the right system framework and the latest innovations in processors, memory, fabric, file systems and a new HPC software stack, we are setting the stage for an extended period of rapid scientific discovery and tremendous commercial innovation that will change the landscape of private industry, academia and government research.

We also believe HPC progress will be shaped by the field of machine learning, an area we have been researching at Intel labs for several years.  You will be hearing a lot more about machine learning throughout the rest of this decade and Intel is fully committed to driving leadership in this exciting area.

Most people are starting to recognize that Intel has evolved our HPC business over the years to be so much more than just a processor company. Everything I’ve mentioned in this discussion refers to foundational elements of the Intel® Scalable System Framework, our advanced architectural approach for designing HPC systems with the performance capabilities necessary to serve a wide range of workloads such as traditional HPC, Big Data, visualization and machine learning.

This holistic, architectural approach is indeed the industry-changing technical challenge but one that we are well on the way to solving with our Intel® Scalable System Framework.

What You Do To Recharge:
I think like most people in this industry, I deeply value the time I get to spend with my family. I travel a great amount, so my down time is usually not scripted. I like to explore various interests and it’s often something spontaneous that sparks my passion at any given time. I thoroughly enjoy photography and find it both relaxing and stimulating. One thing most people wouldn’t know about me is my passion for music. The right music can lift your spirits and change your perspective. I enjoy listening to emerging music artists from around the world, and I appreciate all types of music but particularly fusion.


Satoshi Matsuoka

Short Bio:
Matsuoka is a professor at Tokyo Institute of Technology (TITech) and a leader of the TSUBAME series of supercomputers. Matsuoka built TITech into an HPC leader, adopting GPGPUs and other energy efficient techniques early on and helping put Japan on a course for productive exascale science. He is a fellow of the ACM and European ISC, and has won many awards, including the JSPS Prize from the Japan Society for Promotion of Science in 2006, the ACM Gordon Bell Prize for 2011, and the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology in 2012.

HPC Challenge & Opportunity:
Over the long term the biggest challenge is the inevitable end of Moore’s law. HPC has led the acceleration of computing for the past decades, during which we have seen a 1,000x increase in performance every 10 years. Such an exponential trend has enabled innovative, qualitative, and in fact revolutionary changes in the science and engineering driven by HPC, as well as in everyday computing, where devices such as smartphones, once the stuff of science fiction, have become a ubiquitous reality, again revolutionizing society.
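Matsuoka’s 1,000x-per-decade figure implies, as a back-of-the-envelope check, that performance roughly doubled every year (since 2^10 = 1024). A quick sketch of that arithmetic:

```python
# A 1000x increase per decade implies an annual growth factor r with r**10 == 1000.
annual = 1000 ** (1 / 10)   # ~1.995: performance roughly doubled each year
decade = annual ** 10       # compounds back to the 1000x-per-decade trend
print(f"annual growth ~{annual:.3f}x, compounding to ~{decade:.0f}x per decade")
```

It is exactly this yearly doubling that the end of lithographic scaling puts at risk.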

However, with Moore’s law ending, we can no longer rely on lithographic improvements to increase transistor counts exponentially at constant power, potentially nullifying such advances. This would also be a serious crisis for the HPC industry as a whole: without such performance increases there will no longer be strong incentives to replace machines every 3-5 years, which would significantly compromise vendors’ business and/or result in a steep rise in the cost of infrastructure. We are already observing this in various metrics, such as the Top500 performance curve visibly slowing over the past several years and the significant delays in the deployment of exascale systems.

As such, it is essential that we as a community embark on bold research to look for alternative means of continuing the trend of performance increase, ideally by scaling some parameter other than transistor count, and ideally exponentially, in the coming years. I am in the process of launching several projects in this area along with the best collaborators both inside and outside Japan, and I hope this will become a community research objective.

What You Do To Recharge:
By nature I have a strong affinity for technological advances that meet human bravery and ingenuity in competing to advance the state of the art. One hobby of mine is following and studying the world’s space programs, from the days of Sputnik and Apollo to recent achievements such as the New Horizons Pluto flyby. I frequent American and Japanese space sites such as the Johnson Space Center, awed by the majestic presence of the Saturn V launch vehicle, but the most memorable recent moment was a visit to the Cosmonaut Museum on a trip to Moscow, taking in Russian space history and how they competed with the US at the time, with programs such as Luna 3, the N1, and the Buran shuttle.

Similarly, sporting competition of the same nature, such as Formula One racing, is something I have followed for the last 30 years, seeing not only memorable racing moments but also significant advances in the machine technology, especially the recent hybrid 1.6-litre turbo cars that retain the speed but add amazing fuel efficiency. Of course I watch most races on TV, but I sometimes go to the circuit to enjoy the live ambience. Now if only the Grand Prix in Austin were a week closer to SC, I would be able to attend both…

In that respect karaoke is of the same realm: in Japan every machine now has a sophisticated scoring system to judge your singing, and I often take it as a challenge to see how highly the machine rates my talents 🙂


Horst D. Simon

Short Bio:
Simon, an internationally recognized expert in computer science and applied mathematics, was named Berkeley Lab’s Deputy Director on September 13, 2010. Simon joined Berkeley Lab in early 1996 as Director of the newly formed National Energy Research Scientific Computing Center (NERSC), and was one of the key architects in establishing NERSC at its new location in Berkeley. Before becoming Deputy Lab Director, he served as Associate Lab Director for Computing Sciences, where he helped to establish Berkeley Lab as a world leader in providing supercomputing resources to support research across a wide spectrum of scientific disciplines. Simon holds an undergraduate degree in mathematics from the Technische Universität Berlin in Germany and a Ph.D. in mathematics from the University of California, Berkeley.

Simon’s research interests are in the development of sparse matrix algorithms, algorithms for large-scale eigenvalue problems, and domain decomposition algorithms for unstructured domains for parallel processing. His algorithm research efforts were honored with the 1988 and the 2009 Gordon Bell Prize for parallel processing research. He was also a member of the NASA team that developed the NAS Parallel Benchmarks, a widely used standard for evaluating the performance of massively parallel systems. He is co-editor of the biannual TOP500 list that tracks the most powerful supercomputers worldwide, as well as related architecture and technology trends.

HPC Challenge & Opportunity:
Let’s imagine 2030 for a moment. The one certain fact about 2030 is that all ten people on your list will be 15 years older, and probably all of us will be in retirement. The other certain fact is that growth in computer performance will have slowed down even further. We can see this slowdown already, for example in how the date when we will reach exascale keeps being pushed further into the future. The exascale challenges have been discussed at length, and we all agree on what they are: power consumption, massive parallelism, and so on. But what’s important to keep in mind is that some time between 2025 and 2030 there will be a huge change. Technology will change, because CMOS-based semiconductors will no longer keep up, and people will change, because a new generation born after 2000 will take over HPC leadership.

What You Do To Recharge:
I like to say that being one of the senior managers of a national lab is my job, and doing mathematics is my hobby. What recharges me is an evening in the City: good food and movies, theater, ballet, or opera. For outdoors, I like bicycling, and I recently started to try surfing, but only the baby waves in Linda Mar. Something that only a few people know: I spent a few weeks in Kenya last summer working as a volunteer in a Christian school, teaching 8th grade math – there you have it, I just can’t get away from mathematics.


Thomas Sterling

Short Bio:
Sterling is Chief Scientist and Executive Associate Director of the Center for Research in Extreme Scale Technologies (CREST), and Professor of Informatics and Computing at Indiana University. Dr. Sterling’s current research focuses on the ParalleX advanced execution model to guide the development of future-generation exascale hardware and software computing systems, as well as the HPX runtime system to enable dynamic adaptive resource management and task scheduling for significant improvements in scalability and efficiency. This research has been conducted through multiple projects sponsored separately by DOE, NSF, DARPA, the Army Corps of Engineers, and NASA. Since receiving his PhD from MIT in 1984, Sterling has engaged in applied research in fields associated with parallel computing system structures, semantics, design, and operation in industry, government labs, and higher education. Dr. Sterling has received numerous awards and in 2014 was named a Fellow of the American Association for the Advancement of Science for his efforts to advance science and its applications. Well known as the “father of Beowulf” for his research in commodity/Linux cluster computing, Dr. Sterling has conducted research in parallel computing including superconducting logic, processor-in-memory, asynchronous models of computing, and programming interfaces and runtime software.

HPC Challenge & Opportunity:
While many would cite power and reliability as the principal challenges facing HPC progress, I think the key factors inhibiting continued advancement are starvation (inadequate parallelism), overhead (the work to manage hardware and software parallelism), latency (the time for remote actions), and contention (the inverse of bandwidth and throughput). Of course there are certain classes of workload that will scale well, perhaps even to exascale. But I worry about the more tightly coupled, irregular, time-varying, and even strong-scaled applications that are less well served by conventional practices in programming and system structures.

What You Do To Recharge:
Sailing. It’s about the only thing I do during which I do not think of work for extended time. I don’t so much get on a sailboat as put it on, and become one with the wind and waves.


Rick Stevens

Short Bio:
Stevens is Associate Laboratory Director for Computing, Environment and Life Sciences at Argonne National Laboratory. He helped build Argonne’s Mathematics and Computer Science Division into a leading HPC center and has been co-leading the DOE planning effort for exascale computing.

HPC Challenge & Opportunity:
I think there are two long term trends that offer both opportunity and research challenges.

1. HPC + Data ==> Mechanism + Learning
The first is the need to combine traditional simulation-oriented HPC architectures and software stacks with data-intensive architectures and software stacks. I think this will be possible through the creation of mechanisms (containers and related technologies) that enable users to combine not only their own code from these two areas but entire community-contributed software stacks. A special case of this new kind of integrated application is one that combines mechanistic models with statistical learning models. Machine learning provides a new dimension to exploit for approximate computing, and it begins to open up alternative models of computation that may provide a means to continue scaling computational capability as hardware evolves toward power and reliability constraints more like those of biological systems. Future systems that are hybrids of traditional von Neumann design points and neuromorphic devices would help accelerate this trend.

The second is cultural. Today there is a cultural divide (in tools, algorithms, languages, approach, etc.) between those who use computers to do math, science, and engineering and those who use them for nearly everything else. There is a deep need to re-integrate these worlds: those who program close to the metal with those who have little interest in performance, those who are driven by the structure of the problem with those who are driven by the elegance of the code. It’s only by reconnecting with the deep “maker” inside everyone that we will have the diverse skill base needed to truly tackle the future challenges of HPC and scientific computing, where it is possible to understand and periodically re-invent the entire computing environment. This is what true co-design for future HPC will require.

What You Do To Recharge:
I like to travel, camp, and pursue outdoor adventures. Kayaking, dog sledding, snowshoeing, and canoeing are some things that I enjoy but don’t get to do enough of. I also enjoy cooking and entertaining friends with my wife; sometimes I like to cook with a traditional Dutch oven over an open fire. And I enjoy learning new areas of science… recently I’ve been spending down time studying cancer and neuroscience and watching Doctor Who with my daughters.


Pete Ungaro

Short Bio:
Ungaro is President and Chief Executive Officer of Cray, Inc., and has successfully continued the tradition of this distinctly American supercomputing company with each generation of Cray supercomputer, including the next-generation “Shasta” line, selected to be the flagship “pre-exascale” supercomputer at Argonne National Laboratory.

HPC Challenge & Opportunity:
While it is clear that growing power constraints are a huge technology challenge, I believe the biggest challenge is the difficulty of getting high levels of performance out of the new system architectures this energy challenge is driving. Very large numbers of heterogeneous cores in a single node, unbalanced compute to communications capabilities, expanded and very deep memory and storage hierarchies, and a requirement for huge amounts of application parallelism are making it challenging to get good performance. At Cray, we are busy at work with our R&D team experimenting with different technologies including new many-core processor architectures, ways to control the power usage at a total system and application level, programming tools that help expose parallelism, and techniques and tools to help optimize placement and automate data movement in the hierarchy. The other major focus for us right now is working on ways to integrate the traditional modeling and simulation done on supercomputers with the ever-increasing need to do analytics on all the data that is being generated.  Our vision is to bring these two areas together in a single, adaptive supercomputer that can completely abolish the need to move data between multiple systems to properly handle these two very different but challenging workloads, and handle the entire workflow on a single system. Without a doubt it is a very exciting time in the world of supercomputing!

What You Do To Recharge:
With three kids busy with school and sports, that is where most of my extra time goes. My daughter is a senior in high school and a very good volleyball player, and my twin boys are in 8th grade and between the two of them play a number of sports such as football, lacrosse, basketball and parkour. Keeping up with all of them is a job in itself, but something I really enjoy doing!  My daughter recently talked me into joining CrossFit with her.  I’ve found that it’s an amazing way to recharge and stay in shape, so I can keep up with both my kids and everything we have going on at Cray.


Gil Weigand

Short Bio:
Weigand is the Director of Strategic Programs in the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory (ORNL). In this position Weigand develops new initiatives that integrate, consolidate, and focus the significant gains in energy S&T and computational-science capabilities on important global challenges related to energy, the environment, healthcare, emerging urban sustainability, and national security. Weigand has served in several management positions within the Department of Energy (DOE) during the late 1990s and received the Secretary of Energy Gold Medal in 1996.

HPC Challenge & Opportunity:
I will leave discussions of the computer-system challenges to my computer engineering friends… Let me speak for a minute, though, about an exciting and critically important application challenge that requires all of the leadership HPC we can muster: healthcare. Today the focus is treatment; we are a compassionate country and we fund this focus significantly. The key, however, to truly living longer, and I mean substantially longer, is understanding prevention. To this end there are models of lifespan that include a significant coupling and feedback among the exposome, the genome, and other omics. This coupling is well known but not well understood. The technical challenge is to understand and model it so that lifespan simulations can be created to study the appropriate bio-feedback and delivery systems needed to drive innovation and discovery in behavioral, pharma, clinical-process, device, and other healthcare areas. This can only be done with large-scale HPC. The payoff is enormous for our society, not only from a compassionate point of view but also an economic one: a healthier population is a more productive population.

What You Do To Recharge:
I am an avid hiker… I live in the Cherokee National Forest, not far from Knoxville, TN. In my part of the world it is possible to hike simply by walking out my front door. Or you could drive 15 to 30 minutes and literally be at any of hundreds of trailheads in the Pisgah and Cherokee National Forests or Great Smoky Mountains National Park.


Thomas Zacharia

Short Bio:
Zacharia is Deputy Director for Science and Technology at Oak Ridge National Laboratory and was a main driver behind Leadership Computing and the building of ORNL’s scientific computing program. He recently returned from Qatar, where he initiated a national research program and the construction of world-class scientific infrastructure to support the Qatar National Research Strategy.

HPC Challenge & Opportunity:
The biggest technology challenge that I see is the dual one of creating a balanced and easily programmable machine.

Balance is principally a hardware problem. As “apex machines,” exascale computers will be few in number but will need to serve the same applications spectrum as lower-tier machines. Thus there will be a premium on exascale architectures that can be balanced (perhaps dynamically) for optimal performance on a broad range of applications and their various algorithms.

Programmability is essentially a software problem. NSCI teams will be developing machines that cope with the increasingly visible limits of CMOS while exhibiting ever-higher LINPACK benchmark results. These successes will come at the expense of programmability. The more difficult a computer becomes to program, the less accessible it is to its potential user base. To prevent “apex machines” from becoming uncoupled from the broader spectrum of users and their various (current and future) applications, extraordinary efforts must be made to develop software that will shield users from the arcane details of exascale machine architectures while maintaining very high levels of application performance.

The balance part of the technology challenge can be overcome by closely coupling machine-architecture development with a deep understanding of application-algorithm performance. This need is already understood under the label “codesign.”

The programmability part of the challenge is another matter. If the history of high performance computing is an indicator, progress here is concept-limited.

What You Do To Recharge:
I enjoy escaping to our small farm by the Tennessee River to enjoy the peace and quiet and to spend time with family and friends out on the lake. I also enjoy working on my 1976 Jaguar XJ12 Coupe. The XJ12 coupes are beautiful yet affordable cars that are rare; maintaining one is another matter altogether! The XJ12C also brings fond memories of Jaguar, the fastest supercomputer in the world back in 2009-2010.

More live coverage of SC15.

Subscribe to HPCwire's Weekly Update!

Be the most informed person in the room! Stay ahead of the tech trends with industy updates delivered to you every week!

Why HPC Storage Matters More Now Than Ever: Analyst Q&A

September 17, 2021

With soaring data volumes and insatiable computing driving nearly every facet of economic, social and scientific progress, data storage is seizing the spotlight. Hyperion Research analyst and noted storage expert Mark No Read more…

GigaIO Gets $14.7M in Series B Funding to Expand Its Composable Fabric Technology to Customers

September 16, 2021

Just before the COVID-19 pandemic began in March 2020, GigaIO introduced its Universal Composable Fabric technology, which allows enterprises to bring together any HPC and AI resources and integrate them with networking, Read more…

What’s New in HPC Research: Solar Power, ExaWorks, Optane & More

September 16, 2021

In this regular feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here. Read more…

Cerebras Brings Its Wafer-Scale Engine AI System to the Cloud

September 16, 2021

Five months ago, when Cerebras Systems debuted its second-generation wafer-scale silicon system (CS-2), co-founder and CEO Andrew Feldman hinted of the company’s coming cloud plans, and now those plans have come to fruition. Today, Cerebras and Cirrascale Cloud Services are launching... Read more…

AI Hardware Summit: Panel on Memory Looks Forward

September 15, 2021

What will system memory look like in five years? Good question. While Monday's panel, Designing AI Super-Chips at the Speed of Memory, at the AI Hardware Summit, tackled several topics, the panelists also took a brief glimpse into the future. Unlike compute, storage and networking, which... Read more…
