HPCwire Debuts Outstanding Leadership Awards at SC15

By John Russell

November 16, 2015

This year HPCwire has established a new category within its Annual Readers and Editors Choice Awards program to recognize Outstanding Leadership in HPC. We realize there is no single preeminent leader in HPC and that’s a good thing. The diversity of opinion and expertise is a major driver of progress. There are many individuals whose accomplishment and influence within HPC (and beyond) represent important leadership and are deserving of recognition.

In that vein, we think the inaugural group of nominees well represents the impressive work and achievement that everyone in the HPC community aspires to. The group encompasses a wide range of disciplines and roles, all of which are necessary to advance HPC and its impact on society. So while HPCwire readers and editors have already selected “winners” – you’ll have to discover them elsewhere – it’s entirely appropriate to shine a spotlight on all of the nominees.

The 2015 HPCwire Outstanding Leadership nominees include:

  • Jack Dongarra, University of Tennessee
  • Patricia K. Falcone, Lawrence Livermore National Laboratory
  • Rajeeb Hazra, Intel
  • Satoshi Matsuoka, Tokyo Institute of Technology
  • Horst Simon, Lawrence Berkeley National Laboratory
  • Thomas Sterling, Indiana University
  • Rick Stevens, Argonne National Laboratory
  • Pete Ungaro, Cray
  • Gil Weigand, Oak Ridge National Laboratory
  • Thomas Zacharia, Oak Ridge National Laboratory

These are of course familiar names within the HPC community. HPCwire asked each of the nominees to submit a short bio and to answer two questions: 1) Within your domain of expertise, what do you see as the biggest technology challenge facing HPC progress and how is that likely to be overcome? 2) Tell us something that few know about you with regard to your interests and what recharges you outside of work.

Their answers, which we present here, are as diverse as the group. Who knew Satoshi Matsuoka was a karaoke fan, that Thomas Zacharia has a passion for his vintage Jaguar coupe, or that Horst Simon recently took up surfing? Enjoy.

Jack Dongarra

Short Bio:
Dongarra is a University Distinguished Professor in the Electrical Engineering and Computer Science Department at the University of Tennessee and a researcher at Oak Ridge National Laboratory. He is the author of the LINPACK benchmark and co-author of the TOP500 list and the High Performance Conjugate Gradient benchmark (HPCG). Dongarra has been a champion of the need for algorithms, numerical libraries, and software for HPC, especially at extreme scale, and has contributed to many of the numerical libraries widely used in HPC.

HPC Challenge & Opportunity:
While the problems we face today are similar to those we faced ten years ago, the solutions are more complicated and the consequences greater in terms of performance. For one thing, the size of the community to be served has increased and its composition has changed. The NSCI has, as one of its five objectives, “Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing,” which implies that this “technological base” is not coherent now. This claim is widely agreed to be true, although opinions differ on why it is so and how to improve it. The selection of software for general use requires complete performance evaluation, and good communication with “customers”—a much larger and more varied group than it used to be. Superb software is worthless unless computational scientists are persuaded to use it. Users are reluctant to modify running programs unless they are convinced that the software they are currently using is inferior enough to endanger their work and that the new software will remove that danger.

From the perspective of the computational scientist, numerical libraries are the workhorses of software infrastructure because they encode the underlying mathematical computations that their applications spend most of their time processing. Performance of these libraries tends to be the most critical factor in application performance. In addition to the architectural challenges they must address, their portability across platforms and different levels of scale is also essential to avoid interruptions and obstacles in the work of most research communities. Achieving the required portability means that future numerical libraries will not only need dramatic progress in areas such as autotuning, but also need to be able to build on standards—which do not currently exist—for things like power management, programming in heterogeneous environments, and fault tolerance.
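
The autotuning Dongarra calls for is, at its core, an empirical search: benchmark several candidate variants of the same kernel on the target machine and keep the fastest. The toy sketch below illustrates the idea in the spirit of systems such as ATLAS; the kernel, the block sizes, and the timing harness are illustrative assumptions, not any real library's search procedure.

```python
import time
import numpy as np

# Toy empirical autotuner: time a blocked matrix multiply at several
# block sizes on this machine and keep the fastest. The kernel and the
# candidate list are illustrative assumptions, not a real library's
# search space.

def blocked_matmul(a, b, block):
    """Blocked matrix multiply; cache behavior varies with `block`."""
    n = a.shape[0]
    c = np.zeros((n, n))
    for i in range(0, n, block):
        for k in range(0, n, block):
            # accumulate one row panel of C from panels of A and B
            c[i:i+block] += a[i:i+block, k:k+block] @ b[k:k+block]
    return c

def best_time(block, trials=3):
    """Best wall-clock time over a few trials, to damp timing noise."""
    times = []
    for _ in range(trials):
        t0 = time.perf_counter()
        blocked_matmul(a, b, block)
        times.append(time.perf_counter() - t0)
    return min(times)

n = 512
rng = np.random.default_rng(0)
a, b = rng.random((n, n)), rng.random((n, n))

tuned = min((32, 64, 128, 256), key=best_time)
print(f"autotuned block size for this machine: {tuned}")
```

Production autotuners such as ATLAS and FFTW search far larger spaces and cache the winners, which is one way numerical libraries stay performance-portable across machines without being rewritten for each one.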

Advancing to the next stage of growth for computational science, which will require the convergence of HPC simulation and modeling with data analytics on a coherent technological base, will require us to solve basic research problems in Computer Science, Applied Mathematics, and Statistics. At the same time, going to exascale will clearly require the creation and promulgation of a new paradigm for the development of scientific software. To make progress on both fronts simultaneously will require a level of sustained, interdisciplinary collaboration among the core research communities that, in the past, has only been achieved by forming and supporting research centers dedicated to such a common purpose. A stronger effort is needed by both government and the research community to embrace such a broader vision. We believe that the time has come for the leaders of the Computational Science movement to focus their energies on creating such software research centers to carry out this indispensable part of the mission.

What You Do To Recharge:
Outside of my research into HPC I enjoy photography and watching and interacting with our two grandchildren.


Patricia K. Falcone

Short Bio:
Falcone is the Deputy Director for Science and Technology at the Lawrence Livermore National Laboratory (LLNL) in Livermore, California. She is the principal advocate for the Laboratory’s science and technology base and oversees the strategic development of the lab’s capabilities. A member of the senior management team, she is responsible for the lab’s collaborative research with academia and the private sector, as well as its internal investment portfolio, including Laboratory Directed Research and Development.

HPC Challenge & Opportunity:
In my view, the biggest challenge facing HPC progress is unlocking the creativity and innovation of talented folks across multiple domains, including industry, academia, and laboratories and research institutes, in an integrated manner. There are both big challenges and big opportunities, but neither the challenges will be met nor the opportunities realized without focused efforts to push boundaries as well as targeted collaborations that yield benefits in myriad application spaces. Also necessary are bringing along talent and interest among young scholars, benefiting from creative research and technology disruptions, and working together to achieve ever-increasing performance and enhanced impact for scientific discovery, national security, and economic security.

What You Do To Recharge:
Personally, outside of work I enjoy family and community activities, as well as reading and the arts.


Rajeeb Hazra

Short Bio:
Hazra is Vice President, Data Center Group, and General Manager, Enterprise and HPC Platforms Group, at Intel, responsible for all technical computing across high-performance computing and workstations. Hazra has said that this new world of “HPC Everywhere” will require unprecedented innovation, which ties directly into his driving of Intel’s code modernization efforts and investment in the Intel Parallel Computing Centers Program.

HPC Challenge & Opportunity:
Let me approach this question from a different angle. You’ve asked what I see as the biggest technology challenge facing HPC progress, but the barrier – the hurdle – is much bigger than any one technology.  Traditional HPC is evolving into a new era with computational and data analytics capabilities we’ve never come close to experiencing before. The biggest hurdle we face in driving HPC progress is one of rethinking our approaches to problem solving and educating and training the community at large to better understand how to use the evolving HPC platforms and take full advantage of unprecedented levels of parallel performance.

With that being said, the technical challenge is to re-architect HPC systems in such a way as to achieve significantly reduced latency and orders-of-magnitude improvement in bandwidth, and to deliver balanced systems that can accommodate both compute- and data-intensive workloads on the same platform. HPC elements such as fabric, memory, and storage continue to evolve and get better every year. But architecting future HPC systems to integrate the latest elements as they become available is a new direction. With the right system framework and the latest innovations in processors, memory, fabric, file systems and a new HPC software stack, we are setting the stage for an extended period of rapid scientific discovery and tremendous commercial innovation that will change the landscape of private industry, academia and government research.

We also believe HPC progress will be shaped by the field of machine learning, an area we have been researching at Intel labs for several years.  You will be hearing a lot more about machine learning throughout the rest of this decade and Intel is fully committed to driving leadership in this exciting area.

Most people are starting to recognize that Intel has evolved its HPC business over the years to be about so much more than just processors. Everything I’ve mentioned in this discussion refers to foundational elements of the Intel® Scalable System Framework, our advanced architectural approach for designing HPC systems with the performance capabilities necessary to serve a wide range of workloads such as traditional HPC, Big Data, visualization and machine learning.

This holistic, architectural approach is indeed the industry-changing technical challenge but one that we are well on the way to solving with our Intel® Scalable System Framework.

What You Do To Recharge:
I think like most people in this industry, I deeply value the time I get to spend with my family. I travel a great amount, so my down time is usually not scripted. I like to explore various interests and it’s often something spontaneous that sparks my passion at any given time. I thoroughly enjoy photography and find it both relaxing and stimulating. One thing most people wouldn’t know about me is my passion for music. The right music can lift your spirits and change your perspective. I enjoy listening to emerging music artists from around the world, and I appreciate all types of music but particularly fusion.


Satoshi Matsuoka

Short Bio:
Matsuoka is a professor at Tokyo Institute of Technology (TITech) and a leader of the TSUBAME series of supercomputers. Matsuoka built TITech into an HPC leader, adopting GPGPUs and other energy efficient techniques early on and helping put Japan on a course for productive exascale science. He is a fellow of the ACM and European ISC, and has won many awards, including the JSPS Prize from the Japan Society for Promotion of Science in 2006, the ACM Gordon Bell Prize for 2011, and the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology in 2012.

HPC Challenge & Opportunity:
Over the long term the biggest challenge is the inevitable end of Moore’s law. HPC has led the acceleration of computing for the past decades, during which we have seen a 1,000x increase in performance every 10 years. This exponential trend has enabled innovative, qualitative, and in fact revolutionary changes in the science and engineering that HPC makes possible, as well as in everyday computing, where devices such as smartphones, once the stuff of science fiction, have become a ubiquitous reality, again revolutionizing society.

However, with Moore’s law ending, we can no longer rely on lithographic improvements to increase transistor counts exponentially at constant power, potentially nullifying such advances. This would also be a serious crisis for the HPC industry as a whole: without such performance increases there will no longer be strong incentives to replace machines every three to five years, significantly compromising vendors’ business and/or resulting in a steep rise in the cost of the infrastructure. We are already observing this in various metrics, such as the Top500 performance curve visibly slowing over the past several years and the significant delay in the deployment of exascale systems.

As such, it is essential that we as a community embark on bold research measures to look for alternative means of continuing the trend of performance increase, ideally by scaling parameters other than transistor count, and ideally exponentially, for the coming years. I am in the process of launching several projects in this area along with the best collaborators both inside and outside Japan. I hope this will become a community research objective.

What You Do To Recharge:
By nature I have a strong affinity for technological advances that meet human bravery and ingenuity to compete and advance the state of the art. One hobby of mine is following and studying the global space programs, from the days of Sputnik and Apollo to recent achievements such as the New Horizons Pluto flyby. I frequent American and Japanese space sites such as the Johnson Space Center, awed by the majestic presence of the Saturn V launch vehicle, but the most memorable recent moment was a visit to the Cosmonaut Museum during my recent trip to Moscow, taking in Russian space history and how they competed with the US at the time, with Luna 3, the N1, and the Buran shuttle.

Similarly, sporting competition of the same nature, such as Formula One racing, is something I have followed for the last 30 years, seeing not only memorable racing moments but also significant advances in the machine technology, especially the recent hybrid 1.6-litre turbo cars that retain the speed with amazing fuel efficiency. Of course, I watch most races on TV, but I sometimes go to the circuit to enjoy the live ambience. Now if only the Grand Prix in Austin were a week closer to SC, I would be able to attend both…

In that respect, karaoke is of the same realm: in Japan every machine now has a sophisticated scoring system to judge your singing, and I often take it as a challenge to see how highly the machine rates my talents 🙂


Horst D. Simon

Short Bio:
Simon, an internationally recognized expert in computer science and applied mathematics, was named Berkeley Lab’s Deputy Director on September 13, 2010. Simon joined Berkeley Lab in early 1996 as Director of the newly formed National Energy Research Scientific Computing Center (NERSC), and was one of the key architects in establishing NERSC at its new location in Berkeley. Before becoming Deputy Lab Director, he served as Associate Lab Director for Computing Sciences, where he helped to establish Berkeley Lab as a world leader in providing supercomputing resources to support research across a wide spectrum of scientific disciplines. Simon holds an undergraduate degree in mathematics from the Technische Universität Berlin in Germany and a Ph.D. in mathematics from the University of California, Berkeley.

Simon’s research interests are in the development of sparse matrix algorithms, algorithms for large-scale eigenvalue problems, and domain decomposition algorithms for unstructured domains for parallel processing. His algorithm research efforts were honored with the 1988 and the 2009 Gordon Bell Prize for parallel processing research. He was also a member of the NASA team that developed the NAS Parallel Benchmarks, a widely used standard for evaluating the performance of massively parallel systems. He is co-editor of the twice-yearly TOP500 list that tracks the most powerful supercomputers worldwide, as well as related architecture and technology trends.

HPC Challenge & Opportunity:
Let’s imagine 2030 for a moment. The one certain fact about 2030 is that all ten people on your list will be 15 years older and probably all of us will be in retirement. The other certain fact is that growth in computer performance will have slowed down even further. We can see this slowdown already, for example in how the date when we will reach exascale keeps being pushed further into the future. These exascale challenges have been discussed at length, and we all agree on what they are: power consumption, massive parallelism, etc. But what’s important to keep in mind is that some time between 2025 and 2030 there will be a huge change. Technology will change, because CMOS-based semiconductors will no longer keep up, and people will change, because a new generation born after 2000 will take over HPC leadership.

What You Do To Recharge:
I like to say that being one of the senior managers of a national lab is my job, and doing mathematics is my hobby. What recharges me is an evening in the City: good food and movies, theater, ballet, or opera. For outdoors: I like bicycling, and I recently started to try surfing, but only the baby waves in Linda Mar. Something that only a few people know: I spent a few weeks in Kenya last summer working as a volunteer in a Christian school, teaching 8th grade math – there you have it, I just can’t get away from mathematics.


Thomas Sterling

Short Bio:
Sterling is Chief Scientist and Executive Associate Director of the Center for Research in Extreme Scale Technologies (CREST) and Professor of Informatics and Computing at Indiana University. Dr. Sterling’s current research focuses on the ParalleX advanced execution model to guide the development of future-generation exascale hardware and software computing systems, as well as the HPX runtime system to enable dynamic adaptive resource management and task scheduling for significant improvements in scalability and efficiency. This research has been conducted through multiple projects sponsored separately by DOE, NSF, DARPA, the Army Corps of Engineers, and NASA. Since receiving his PhD from MIT in 1984, Sterling has engaged in applied research in fields associated with parallel computing system structures, semantics, design, and operation in industry, government labs, and higher education. Dr. Sterling has received numerous awards and in 2014 was named a Fellow of the American Association for the Advancement of Science for his efforts to advance science and its applications. Well known as the “father of Beowulf” for his research in commodity/Linux cluster computing, Dr. Sterling has conducted research in parallel computing including superconducting logic, processor in memory, asynchronous models of computing, and programming interfaces and runtime software.

HPC Challenge & Opportunity:
While many would cite power and reliability as the principal challenges facing HPC progress, I think the key factors inhibiting continued advancement are starvation (inadequate parallelism), overhead (work to manage hardware and software parallelism), latency (time for remote actions), and contention (the inverse of bandwidth and throughput). Of course there are certain classes of workload that will scale well, perhaps even to exascale. But I worry about those more tightly coupled, irregular, time-varying, and even strong-scaled applications that are less well served by conventional practices in programming and system structures.
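
As a rough illustration of how these four factors interact, the toy analytic model below shows speedup saturating once contention dominates and cores starve for work. The functional form and every constant are assumptions made up for illustration; this is not the ParalleX/HPX performance model.

```python
# Toy scaling model for Sterling's four limiting factors. All constants
# and the functional form below are illustrative assumptions.

def parallel_time(p, work=1e9, grain=1e4, overhead=1e-6,
                  latency=2e-6, rate=1e9, net_msgs_per_sec=1e6):
    tasks = work / grain                    # total units of parallelism
    per_core = max(tasks / p, 1.0)          # starvation: < 1 task per core
    compute = per_core * grain / rate       # useful work on each core
    manage = per_core * overhead            # overhead: managing parallelism
    remote = per_core * latency             # latency: remote actions
    # contention: a shared network serializes once message traffic
    # outpaces the ever-shrinking compute time
    contend = max(0.0, tasks / net_msgs_per_sec - compute)
    return compute + manage + remote + contend

t1 = parallel_time(1)
for p in (1, 64, 4096, 262144):
    tp = parallel_time(p)
    print(f"P={p:7d}  speedup={t1 / tp:8.1f}  efficiency={t1 / (p * tp):.4f}")
```

Even this crude model reproduces the qualitative behavior Sterling describes: scaling flattens long before the core count runs out, because overhead, latency, and contention grow relative to ever-smaller slices of useful work.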

What You Do To Recharge:
Sailing. It’s about the only thing I do during which I do not think of work for extended time. I don’t so much get on a sailboat as put it on, and become one with the wind and waves.


Rick Stevens

Short Bio:
Stevens is Associate Laboratory Director for Computing, Environment and Life Sciences at Argonne National Laboratory. He helped build Argonne’s Mathematics and Computer Science Division into a leading HPC center and has been co-leading the DOE planning effort for exascale computing.

HPC Challenge & Opportunity:
I think there are two long term trends that offer both opportunity and research challenges.

1. HPC + Data ==> Mechanism + Learning
The first is the need to combine traditional simulation-oriented HPC architectures and software stacks with data-intensive architectures and software stacks. I think this will be possible through the creation of mechanisms, such as containers and related technologies, that enable users to combine not only their own code from these two areas but entire community-contributed software stacks. A special case of this new kind of integrated application is one that combines mechanistic models with statistical learning models. Machine learning provides a new dimension to exploit for approximate computing and begins to open up alternative models of computation that may provide a means to continue scaling computational capabilities as hardware evolves toward power and reliability constraints more like those of biological systems. Future systems that are hybrids between traditional von Neumann design points and neuromorphic devices would help accelerate this trend.
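
Here is a minimal sketch of the mechanism-plus-learning pattern Stevens describes: run a mechanistic simulation to generate training data, then fit a cheap statistical surrogate that approximates it. The damped-oscillator model and the polynomial surrogate are illustrative assumptions, not any specific DOE workflow.

```python
import numpy as np

# Mechanism + learning, in miniature: an "expensive" mechanistic model
# generates data; a cheap learned surrogate then stands in for it.

def simulate(damping, steps=2000, dt=0.01):
    """Mechanistic model: explicit-Euler integration of x'' = -x - c x'."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-x - damping * v)
    return x  # final displacement

# Expensive step: sample the simulator across the parameter of interest.
c_train = np.linspace(0.1, 1.0, 20)
x_train = np.array([simulate(c) for c in c_train])

# Cheap step: fit a surrogate and evaluate it instead of re-simulating.
surrogate = np.poly1d(np.polyfit(c_train, x_train, deg=4))

c_new = 0.37
print(f"simulated: {simulate(c_new):+.5f}   surrogate: {surrogate(c_new):+.5f}")
```

At scale, the same pattern lets a learned model answer many queries cheaply, with the mechanistic code reserved for generating training data and spot-checking the surrogate; that is the approximate-computing opportunity Stevens points to.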

2. STEM ==> STEAM
The second is cultural. Today there is a cultural divide (tools, algorithms, languages, approach, etc.) between those that use computers to do math, science and engineering and those that use them for nearly everything else. There is a deep need to re-integrate these worlds: those that program close to the metal with those that have little interest in performance, and those that are driven by the structure of the problem with those that are driven by the elegance of the code. It’s only by reconnecting the deep “maker” inside of everyone that we will have the diverse skill base needed to truly tackle the HPC and scientific computing challenges of the future, where it’s possible to understand and periodically re-invent the entire computing environment. This is what true co-design for future HPC will require.

What You Do To Recharge:
I like to travel, camp and pursue outdoor adventures. Kayaking, dog sledding, snowshoeing and canoeing are some things that I enjoy and don’t get to do enough of. I also enjoy cooking and entertaining friends with my wife. Sometimes I like to cook with a traditional Dutch oven over an open fire. I also enjoy learning new areas of science… recently I’ve been spending down time studying cancer and neuroscience and watching Doctor Who with my daughters.


Pete Ungaro

Short Bio:
Ungaro is President and Chief Executive Officer of Cray, Inc., and has successfully continued the tradition of this distinctly American supercomputing company with each generation of Cray supercomputer, including the next-generation “Shasta” line, selected to be the flagship “pre-exascale” supercomputer at Argonne National Laboratory.

HPC Challenge & Opportunity:
While it is clear that growing power constraints are a huge technology challenge, I believe the biggest challenge is the difficulty of getting high levels of performance out of the new system architectures this energy challenge is driving. Very large numbers of heterogeneous cores in a single node, unbalanced compute to communications capabilities, expanded and very deep memory and storage hierarchies, and a requirement for huge amounts of application parallelism are making it challenging to get good performance. At Cray, we are busy at work with our R&D team experimenting with different technologies including new many-core processor architectures, ways to control the power usage at a total system and application level, programming tools that help expose parallelism, and techniques and tools to help optimize placement and automate data movement in the hierarchy. The other major focus for us right now is working on ways to integrate the traditional modeling and simulation done on supercomputers with the ever-increasing need to do analytics on all the data that is being generated.  Our vision is to bring these two areas together in a single, adaptive supercomputer that can completely abolish the need to move data between multiple systems to properly handle these two very different but challenging workloads, and handle the entire workflow on a single system. Without a doubt it is a very exciting time in the world of supercomputing!

What You Do To Recharge:
With three kids busy with school and sports, that is where most of my extra time goes. My daughter is a senior in high school and a very good volleyball player, and my twin boys are in 8th grade and between the two of them play a number of sports such as football, lacrosse, basketball and parkour. Keeping up with all of them is a job in itself, but something I really enjoy doing!  My daughter recently talked me into joining CrossFit with her.  I’ve found that it’s an amazing way to recharge and stay in shape, so I can keep up with both my kids and everything we have going on at Cray.


Gil Weigand

Short Bio:
Weigand is the Director of Strategic Programs in the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory (ORNL). In this position Weigand develops new initiatives that integrate, consolidate, and focus the significant gains in energy S&T and computational-science capabilities on important global challenges related to energy, the environment, healthcare, emerging urban sustainability, and national security. Weigand has served in several management positions within the Department of Energy (DOE) during the late 1990s and received the Secretary of Energy Gold Medal in 1996.

HPC Challenge & Opportunity:
I will leave discussions of the computer system challenges to my computer engineering friends… Let me speak for a minute though about an exciting and critically important application challenge that requires all of the leadership HPC we can muster: healthcare. Today the focus is treatment; we are a compassionate country and fund that focus significantly. The key, however, to truly living longer, and I mean substantially longer, is understanding prevention. To this end there are models of lifespan that include significant coupling and feedback among the exposome, genomics, and other omics. This coupling is well known but not well understood. The technical challenge is understanding and modeling it so that lifespan simulations can be created to study the appropriate bio-feedback and delivery systems needed to drive innovation and discovery in behavioral, pharma, clinical process, device, and other healthcare areas. This can only be done with large-scale HPC. The payoff for our society is enormous, not only from a compassionate point of view but from an economic one as well. A healthier population is a more productive population.

What You Do To Recharge:
I am an avid hiker… I live in the Cherokee National Forest, not far from Knoxville, TN. In my part of the world, it is possible to hike simply by walking out my front door. Or you could drive 15 to 30 minutes and literally be at the start of hundreds of trailheads in the Pisgah and Cherokee National Forests or Great Smoky Mountains National Park.


Thomas Zacharia

Short Bio:
Zacharia is Deputy Director for Science and Technology at Oak Ridge National Laboratory and a main driver behind leadership computing and the building of ORNL’s scientific computing program. He recently returned from Qatar, where he initiated a national research program and the construction of world-class scientific infrastructure to support the Qatar National Research Strategy.

HPC Challenge & Opportunity:
The biggest technology challenge that I see is the dual one of creating a balanced and easily programmable machine.

Balance is principally a hardware problem. As “apex machines,” exascale computers will be few in number but will need to serve the same applications spectrum as lower-tier machines. Thus, there will be a premium on exascale architectures that can be balanced (perhaps dynamically) for optimal performance on a broad range of applications and their various algorithms.

Programmability is essentially a software problem. NSCI teams will be developing machines that cope with the increasingly visible limits of CMOS while exhibiting ever-higher LINPACK benchmark results. These successes will come at the expense of programmability. The more difficult a computer becomes to program, the less accessible it is to its potential user base. To prevent “apex machines” from becoming uncoupled from the broader spectrum of users and their various (current and future) applications, extraordinary efforts must be made to develop software that will shield users from arcane details of exascale machine architectures while maintaining very high levels of application performance.

The balance part of the technology challenge can be overcome by closely coupling machine architecture development with a deep understanding of application algorithm performance. This need is already understood under the label “codesign”.

The programmability part of the challenge is another matter. If the history of high performance computing is an indicator, progress here is concept-limited.

What You Do To Recharge:
I enjoy escaping to our small farm by the Tennessee River to enjoy the peace and quiet and to spend time with family and friends out on the lake. I also enjoy working on my 1976 Jaguar XJ12 Coupe. The XJ12 coupes are beautiful yet affordable cars, and rare ones at that. Maintaining one is a whole other matter! The XJ12C also brings fond memories of Jaguar, the fastest supercomputer in the world back in 2009-2010.

More live coverage of SC15.
