HPCwire Debuts Outstanding Leadership Awards at SC15

By John Russell

November 16, 2015

This year HPCwire has established a new category within its Annual Readers and Editors Choice Awards program to recognize Outstanding Leadership in HPC. We realize there is no single preeminent leader in HPC and that’s a good thing. The diversity of opinion and expertise is a major driver of progress. There are many individuals whose accomplishment and influence within HPC (and beyond) represent important leadership and are deserving of recognition.

In that vein, we think the inaugural group of nominees well represents the impressive work and achievement that everyone in the HPC community aspires to. The group encompasses a wide range of disciplines and roles, all of which are necessary to advance HPC and its impact on society. So while HPCwire readers and editors have already selected “winners” – you’ll have to discover them elsewhere – it’s entirely appropriate to shine a spotlight on all of the nominees.

The 2015 HPCwire Outstanding Leadership nominees include:

  • Jack Dongarra, University of Tennessee
  • Patricia K. Falcone, Lawrence Livermore National Laboratory
  • Rajeeb Hazra, Intel
  • Satoshi Matsuoka, Tokyo Institute of Technology
  • Horst Simon, Lawrence Berkeley National Laboratory
  • Thomas Sterling, Indiana University
  • Rick Stevens, Argonne National Laboratory
  • Pete Ungaro, Cray
  • Gil Weigand, Oak Ridge National Laboratory
  • Thomas Zacharia, Oak Ridge National Laboratory

These are of course familiar names within the HPC community. HPCwire asked each of the nominees to submit a short bio and to answer two questions: 1) Within your domain of expertise, what do you see as the biggest technology challenge facing HPC progress and how is that likely to be overcome? 2) Tell us something that few know about you with regard to your interests and what recharges you outside of work.

Their answers, which we present here, are as diverse as the group – who knew Satoshi Matsuoka was a Karaoke fan or that Thomas Zacharia has a passion for his vintage Jaguar coupe or that Horst Simon recently took up surfing! – Enjoy.

Jack Dongarra

Short Bio:
Dongarra is a University Distinguished Professor in the Electrical Engineering and Computer Science Department at the University of Tennessee and a researcher at Oak Ridge National Laboratory. He is the author of the LINPACK benchmark and a co-author of the TOP500 list and the High Performance Conjugate Gradient benchmark (HPCG). Dongarra has been a champion of the need for algorithms, numerical libraries, and software for HPC, especially at extreme scale, and has contributed to many of the numerical libraries widely used in HPC.

HPC Challenge & Opportunity:
While the problems we face today are similar to those we faced ten years ago, the solutions are more complicated and the consequences greater in terms of performance. For one thing, the size of the community to be served has increased and its composition has changed. The NSCI has, as one of its five objectives, “Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing,” which implies that this “technological base” is not coherent now. This claim is widely agreed to be true, although opinions differ on why it is so and how to improve it. The selection of software for general use requires complete performance evaluation, and good communication with “customers”—a much larger and more varied group than it used to be. Superb software is worthless unless computational scientists are persuaded to use it. Users are reluctant to modify running programs unless they are convinced that the software they are currently using is inferior enough to endanger their work and that the new software will remove that danger.

From the perspective of the computational scientist, numerical libraries are the workhorses of software infrastructure because they encode the underlying mathematical computations that their applications spend most of their time processing. Performance of these libraries tends to be the most critical factor in application performance. In addition to the architectural challenges they must address, their portability across platforms and different levels of scale is also essential to avoid interruptions and obstacles in the work of most research communities. Achieving the required portability means that future numerical libraries will not only need dramatic progress in areas such as autotuning, but also need to be able to build on standards—which do not currently exist—for things like power management, programming in heterogeneous environments, and fault tolerance.
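
To make the “workhorse” point above concrete, here is a minimal, hedged sketch (ours, not Dongarra’s code): a dense linear solve written at the application level in Python, where NumPy’s linalg routines dispatch to optimized LAPACK/BLAS implementations. The matrix size and random data are purely illustrative choices.

```python
# Minimal illustrative sketch (ours): an application-level computation that
# leans on an optimized numerical library. NumPy's linalg routines dispatch
# to LAPACK/BLAS, the kind of library layer described above as the workhorse
# that dominates application run time.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)      # symmetric positive definite test matrix
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)        # dense solve backed by LAPACK
rel_residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(f"relative residual: {rel_residual:.2e}")
```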

Advancing to the next stage of growth for computational science, which will require the convergence of HPC simulation and modeling with data analytics on a coherent technological base, will require us to solve basic research problems in Computer Science, Applied Mathematics, and Statistics. At the same time, going to exascale will clearly require the creation and promulgation of a new paradigm for the development of scientific software. To make progress on both fronts simultaneously will require a level of sustained, interdisciplinary collaboration among the core research communities that, in the past, has only been achieved by forming and supporting research centers dedicated to such a common purpose. A stronger effort is needed by both government and the research community to embrace such a broader vision. We believe that the time has come for the leaders of the Computational Science movement to focus their energies on creating such software research centers to carry out this indispensable part of the mission.

What You Do To Recharge:
Outside of my research into HPC I enjoy photography and watching and interacting with our two grandchildren.

 

Patricia K. Falcone

Short Bio:
Falcone is the Deputy Director for Science and Technology at the Lawrence Livermore National Laboratory (LLNL) in Livermore, California. She is the principal advocate for the Laboratory’s science and technology base and oversees the strategic development of the lab’s capabilities. A member of the senior management team, she is responsible for the lab’s collaborative research with academia and the private sector, as well as its internal investment portfolio, including Laboratory Directed Research and Development.

HPC Challenge & Opportunity:
In my view, the biggest challenge facing HPC progress is unlocking the creativity and innovation of talented folks across multiple domains including industry, academia, and laboratories and research institutes, in an integrated manner. There are both big challenges and big opportunities, but neither the challenges will be met nor the opportunities realized without focused efforts to push boundaries as well as targeted collaborations that yield benefits in myriad application spaces. Also necessary is bringing along talent and interest among young scholars, benefiting from creative research and technology disruptions, and working together to achieve ever increasing performance and enhanced impacts for scientific discovery, national security, and economic security.

What You Do To Recharge:
Personally, outside of work I enjoy family and community activities, as well as reading and the arts.

 

Rajeeb Hazra

Short Bio:
Hazra is Vice President of the Data Center Group and General Manager of the Enterprise and HPC Platforms Group at Intel, responsible for all technical computing across high-performance computing and workstations. Hazra has said that this new world of “HPC Everywhere” will require unprecedented innovation, which ties directly into his driving of Intel’s code modernization efforts and investment in its Intel Parallel Computing Centers Program.

HPC Challenge & Opportunity:
Let me approach this question from a different angle. You’ve asked what I see as the biggest technology challenge facing HPC progress, but the barrier – the hurdle – is much bigger than any one technology.  Traditional HPC is evolving into a new era with computational and data analytics capabilities we’ve never come close to experiencing before. The biggest hurdle we face in driving HPC progress is one of rethinking our approaches to problem solving and educating and training the community at large to better understand how to use the evolving HPC platforms and take full advantage of unprecedented levels of parallel performance.

With that being said, the technical challenge is to re-architect HPC systems in such a way as to achieve significantly reduced latency and orders-of-magnitude improvement in bandwidth, and to deliver balanced systems that can accommodate both compute- and data-intensive workloads on the same platform. HPC elements such as fabric, memory, and storage continue to evolve and get better every year, but architecting future HPC systems to integrate the latest elements as they become available is a new direction. With the right system framework and the latest innovations in processors, memory, fabric, file systems, and a new HPC software stack, we are setting the stage for an extended period of rapid scientific discovery and tremendous commercial innovation that will change the landscape of private industry, academia, and government research.

We also believe HPC progress will be shaped by the field of machine learning, an area we have been researching at Intel labs for several years.  You will be hearing a lot more about machine learning throughout the rest of this decade and Intel is fully committed to driving leadership in this exciting area.

Most people are starting to recognize that Intel has evolved our HPC business over the years to be so much more than just a processor company. Everything I’ve mentioned in this discussion refers to foundational elements of the Intel® Scalable System Framework, our advanced architectural approach for designing HPC systems with the performance capabilities necessary to serve a wide range of workloads such as traditional HPC, Big Data, visualization and machine learning.

This holistic, architectural approach is indeed the industry-changing technical challenge but one that we are well on the way to solving with our Intel® Scalable System Framework.

What You Do To Recharge:
I think like most people in this industry, I deeply value the time I get to spend with my family. I travel a great amount, so my down time is usually not scripted. I like to explore various interests and it’s often something spontaneous that sparks my passion at any given time. I thoroughly enjoy photography and find it both relaxing and stimulating. One thing most people wouldn’t know about me is my passion for music. The right music can lift your spirits and change your perspective. I enjoy listening to emerging music artists from around the world, and I appreciate all types of music but particularly fusion.

 

Satoshi Matsuoka

Short Bio:
Matsuoka is a professor at Tokyo Institute of Technology (TITech) and a leader of the TSUBAME series of supercomputers. Matsuoka built TITech into an HPC leader, adopting GPGPUs and other energy efficient techniques early on and helping put Japan on a course for productive exascale science. He is a fellow of the ACM and European ISC, and has won many awards, including the JSPS Prize from the Japan Society for Promotion of Science in 2006, the ACM Gordon Bell Prize for 2011, and the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology in 2012.

HPC Challenge & Opportunity:
Over the long term the biggest challenge is the inevitable end of Moore’s law. HPC has been the leader in accelerating computing for the past decades, during which we have seen a 1,000x increase in performance every 10 years. Such an exponential trend has allowed innovative, qualitative, and in fact revolutionary changes in the science and engineering enabled by HPC, as well as in everyday computing, where devices such as smartphones, which would once have been science fiction, have become a ubiquitous reality, again revolutionizing society.

However, with Moore’s law ending, we can no longer rely on lithographic improvements to increase transistor counts exponentially at constant power, potentially nullifying such advances. This would also be a serious crisis for the HPC industry as a whole: without such performance increases there will no longer be strong incentives to replace machines every 3-5 years, significantly compromising vendors’ business and/or resulting in a steep rise in the cost of the infrastructure. We are already observing this in various metrics, such as the TOP500 performance increase clearly slowing down over the past several years and the significant delay in the deployment of exascale systems.
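
As a quick back-of-the-envelope aside (ours, not Matsuoka’s), the cited trend of a 1,000x performance increase every 10 years works out to roughly a doubling of performance every year:

```python
# Back-of-the-envelope arithmetic (ours) for the trend cited above: a 1,000x
# performance increase every 10 years corresponds to roughly a doubling per year.
import math

annual_factor = 1000 ** (1 / 10)                       # ~1.995x per year
doubling_time = math.log(2) / math.log(annual_factor)  # ~1.0 year
print(f"implied annual growth factor: {annual_factor:.3f}")
print(f"implied doubling time: {doubling_time:.2f} years")
```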

As such, it is essential that we as a community embark on bold research to look for alternative means of continuing the trend of performance increase, ideally driven by parameters other than transistor counts, and ideally exponentially, for the coming years. I am in the process of launching several projects in this area along with the best collaborators both inside and outside Japan. I hope this will become a community research objective.

What You Do To Recharge:
By nature I have a strong affinity for technology advances that meet human bravery and ingenuity to compete and advance the state of the art. One hobby I have is to follow and study the global space programs, from the days of Sputnik and Apollo to recent achievements such as the New Horizons Pluto flyby. I frequent American and Japanese space sites such as the Johnson Space Center, awed by the majestic presence of the Saturn V launch vehicle, but the most memorable recent moment was a visit to the Cosmonaut Museum during my recent trip to Moscow, taking in Russian space history and how they competed with the US at the time, with artifacts such as Luna 3, the N1, and the Buran shuttle.

Similarly, sporting competition of the same nature, such as Formula One racing, is something I have followed for the last 30 years, seeing not only memorable racing moments but also significant advances in machine technology, especially the recent hybrid 1.6-litre turbo cars that retain the speed but with amazing fuel efficiency. Of course I watch most races on TV, but I sometimes go to the circuit to enjoy the live ambience. Now if only the Grand Prix in Austin were a week closer to SC I would be able to attend both…

In that respect, karaoke is of the same realm: in Japan every machine now has a sophisticated scoring system to judge your singing, and I often take it as a challenge to see how highly the machine rates my talents 🙂

 

Horst D. Simon

Short Bio:
Simon, an internationally recognized expert in computer science and applied mathematics, was named Berkeley Lab’s Deputy Director on September 13, 2010. Simon joined Berkeley Lab in early 1996 as Director of the newly formed National Energy Research Scientific Computing Center (NERSC), and was one of the key architects in establishing NERSC at its new location in Berkeley. Before becoming Deputy Lab Director, he served as Associate Lab Director for Computing Sciences, where he helped to establish Berkeley Lab as a world leader in providing supercomputing resources to support research across a wide spectrum of scientific disciplines. Simon holds an undergraduate degree in mathematics from the Technische Universität Berlin, Germany, and a Ph.D. in mathematics from the University of California, Berkeley.

Simon’s research interests are in the development of sparse matrix algorithms, algorithms for large-scale eigenvalue problems, and domain decomposition algorithms for unstructured domains for parallel processing. His algorithm research efforts were honored with the 1988 and 2009 Gordon Bell Prizes for parallel processing research. He was also a member of the NASA team that developed the NAS Parallel Benchmarks, a widely used standard for evaluating the performance of massively parallel systems. He is co-editor of the twice-yearly TOP500 list that tracks the most powerful supercomputers worldwide, as well as related architecture and technology trends.

HPC Challenge & Opportunity:
Let’s imagine 2030 for a moment. The one certain fact about 2030 is that all ten people on your list will be 15 years older and probably all of us will be in retirement. The other certain fact is that growth in computer performance will have slowed down even further. We can see this slowdown already, for example in how the date when we will reach exascale has been pushed further into the future. These exascale challenges have been discussed at length, and we all agree on what they are: power consumption, massive parallelism, etc. But what’s important to keep in mind is that some time between 2025 and 2030 there will be a huge change. Technology will change, because CMOS-based semiconductors will no longer keep up, and people will change, because a new generation born after 2000 will take over HPC leadership.

What You Do To Recharge:
I like to say that being one of the senior managers of a national lab is my job, and doing mathematics is my hobby. What recharges me is an evening in the City: good food and movies, theater, ballet, or opera. For outdoors: I like bicycling, and I recently started to try surfing, but only the baby waves in Linda Mar. Something that only a few people know: I spent a few weeks in Kenya last summer working as a volunteer in a Christian school, teaching 8th grade math – there you have it, I just can’t get away from mathematics.

 

Thomas Sterling

Short Bio:
Sterling is Chief Scientist and Executive Associate Director of the Center for Research in Extreme Scale Technologies (CREST), and Professor of Informatics and Computing at Indiana University. Dr. Sterling’s current research focuses on the ParalleX advanced execution model to guide the development of future-generation exascale hardware and software computing systems, as well as the HPX runtime system to enable dynamic adaptive resource management and task scheduling for significant improvements in scalability and efficiency. This research has been conducted through multiple projects sponsored separately by DOE, NSF, DARPA, the Army Corps of Engineers, and NASA. Since receiving his PhD from MIT in 1984, Sterling has engaged in applied research in fields associated with parallel computing system structures, semantics, design, and operation in industry, government labs, and higher education. Dr. Sterling has received numerous awards and in 2014 was named a Fellow of the American Association for the Advancement of Science for his efforts to advance science and its applications. Well known as the “father of Beowulf” for his research in commodity/Linux cluster computing, Dr. Sterling has conducted research in parallel computing including superconducting logic, processor-in-memory, asynchronous models of computing, and programming interfaces and runtime software.

HPC Challenge & Opportunity:
While many would cite power and reliability as the principal challenges facing HPC progress, I think the key factors inhibiting continued advancement are starvation (inadequate parallelism), overhead (work to manage hardware and software parallelism), latency (time for remote actions), and contention (the inverse of bandwidth and throughput). Of course there are certain classes of workload that will scale well, perhaps even to exascale. But I worry about those more tightly coupled, irregular, time-varying, and even strong-scaled applications that are less well served by conventional practices in programming and system structures.
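
As an editorial illustration only (this is our toy sketch, not Sterling’s formal performance model), the four factors he names can be turned into a rough execution-time estimate; all parameter names and values below are hypothetical.

```python
# Illustrative toy model (ours): a rough execution-time estimate expressed in
# terms of the four factors named above -- starvation (too little exposed
# parallelism), overhead (cost of managing parallelism), latency (time for
# remote actions), and contention (limited bandwidth/throughput).
def estimated_time(useful_work, workers, exposed_parallelism,
                   per_task_overhead, num_tasks,
                   remote_latency, remote_ops_on_critical_path,
                   data_volume, bandwidth):
    usable = min(workers, exposed_parallelism)          # starvation limits usable workers
    compute = useful_work / usable                      # core-seconds spread over usable cores
    overhead = per_task_overhead * num_tasks / usable   # management work, also parallelized
    latency = remote_latency * remote_ops_on_critical_path
    contention = data_volume / bandwidth                # serialized by limited bandwidth
    return compute + overhead + latency + contention

# Hypothetical example: ample cores, but limited parallelism and heavy data traffic.
t = estimated_time(useful_work=1e8, workers=100_000, exposed_parallelism=10_000,
                   per_task_overhead=1e-5, num_tasks=1e7,
                   remote_latency=1e-6, remote_ops_on_critical_path=1e6,
                   data_volume=1e12, bandwidth=1e10)
print(f"estimated time: {t:.2f} seconds")
```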

What You Do To Recharge:
Sailing. It’s about the only thing I do during which I do not think of work for extended time. I don’t so much get on a sailboat as put it on, and become one with the wind and waves.

 

Rick Stevens

Short Bio:
Stevens is Associate Laboratory Director for Computing, Environment, and Life Sciences at Argonne National Laboratory. He helped build Argonne’s Mathematics and Computer Science division into a leading HPC center and has been co-leading the DOE planning effort for exascale computing.

HPC Challenge & Opportunity:
I think there are two long term trends that offer both opportunity and research challenges.

1. HPC + Data ==> Mechanism + Learning
The first is the need to combine traditional simulation-oriented HPC architectures and software stacks with data-intensive architectures and software stacks. I think this will be possible through the creation of mechanisms, such as containers and related technologies, that enable users to combine not only their own code from these two areas but entire community-contributed software stacks. A special case of this new kind of integrated application is one that combines mechanistic models with statistical learning models. Machine learning provides a new dimension to exploit for approximate computing and begins to open up alternative models of computation that may provide a means to continue scaling computational capabilities as hardware evolves toward the kinds of power and reliability constraints found in biological systems. Future systems that are hybrids between traditional von Neumann design points and neuromorphic devices would help accelerate this trend.
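
As a hedged illustration of the “mechanism + learning” pattern (our sketch, not an Argonne workflow), the following Python example runs a toy mechanistic simulation at a handful of parameter settings and then fits a simple statistical surrogate to approximate it; the logistic-growth model and all parameters are hypothetical.

```python
# Minimal sketch (ours) of "mechanism + learning": run a cheap mechanistic
# model at a few parameter settings, then fit a statistical surrogate that
# approximates it for fast, approximate predictions.
import numpy as np

def mechanistic_model(growth_rate, steps=200, dt=0.1, x0=0.01):
    """Toy mechanistic simulation: logistic growth integrated with Euler steps."""
    x = x0
    for _ in range(steps):
        x += dt * growth_rate * x * (1.0 - x)
    return x

# "Simulation campaign": sample the parameter and record the model output.
rates = np.linspace(0.1, 1.0, 20)
outputs = np.array([mechanistic_model(r) for r in rates])

# Statistical learning step: a simple polynomial surrogate of the response.
surrogate = np.polynomial.Polynomial.fit(rates, outputs, deg=4)

# The surrogate can stand in for the simulation between sampled points.
print(f"mechanistic model at r=0.55: {mechanistic_model(0.55):.4f}")
print(f"surrogate prediction at r=0.55: {surrogate(0.55):.4f}")
```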

2. STEM ==> STEAM
The second is cultural. Today there is a cultural divide (in tools, algorithms, languages, approach, etc.) between those who use computers to do math, science, and engineering and those who use them for nearly everything else. There is a deep need to re-integrate these worlds: those who program close to the metal with those who have little interest in performance; those who are driven by the structure of the problem with those who are driven by the elegance of the code. It’s only by reconnecting with the deep “maker” inside of everyone that we will have the diverse skill base needed to truly tackle the HPC and scientific computing challenges of the future, where it’s possible to understand and periodically re-invent the entire computing environment. This is what true co-design for future HPC will require.

What You Do To Recharge:
To recharge, I like to travel, camp, and pursue outdoor adventures. Kayaking, dog sledding, snowshoeing, and canoeing are some things I enjoy that I don’t get to do enough of. I also enjoy cooking and entertaining friends with my wife. Sometimes I like to cook with a traditional Dutch oven over an open fire. I also enjoy learning new areas of science… recently I’ve been spending down time studying cancer and neuroscience and watching Doctor Who with my daughters.

 

Pete Ungaro

Short Bio:
Ungaro is President and Chief Executive Officer of Cray, Inc., and has successfully continued the tradition of this distinctly American supercomputing company with each generation of Cray supercomputer, including the next-generation “Shasta” line, selected to be the flagship “pre-exascale” supercomputer at Argonne National Laboratory.

HPC Challenge & Opportunity:
While it is clear that growing power constraints are a huge technology challenge, I believe the biggest challenge is the difficulty of getting high levels of performance out of the new system architectures this energy challenge is driving. Very large numbers of heterogeneous cores in a single node, unbalanced compute to communications capabilities, expanded and very deep memory and storage hierarchies, and a requirement for huge amounts of application parallelism are making it challenging to get good performance. At Cray, we are busy at work with our R&D team experimenting with different technologies including new many-core processor architectures, ways to control the power usage at a total system and application level, programming tools that help expose parallelism, and techniques and tools to help optimize placement and automate data movement in the hierarchy. The other major focus for us right now is working on ways to integrate the traditional modeling and simulation done on supercomputers with the ever-increasing need to do analytics on all the data that is being generated.  Our vision is to bring these two areas together in a single, adaptive supercomputer that can completely abolish the need to move data between multiple systems to properly handle these two very different but challenging workloads, and handle the entire workflow on a single system. Without a doubt it is a very exciting time in the world of supercomputing!

What You Do To Recharge:
With three kids busy with school and sports, that is where most of my extra time goes. My daughter is a senior in high school and a very good volleyball player, and my twin boys are in 8th grade and between the two of them play a number of sports such as football, lacrosse, basketball and parkour. Keeping up with all of them is a job in itself, but something I really enjoy doing!  My daughter recently talked me into joining CrossFit with her.  I’ve found that it’s an amazing way to recharge and stay in shape, so I can keep up with both my kids and everything we have going on at Cray.

 

Gil Weigand

Short Bio:
Weigand is the Director of Strategic Programs in the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory (ORNL). In this position Weigand develops new initiatives that integrate, consolidate, and focus the significant gains in energy S&T and computational-science capabilities on important global challenges related to energy, the environment, healthcare, emerging urban sustainability, and national security. Weigand has served in several management positions within the Department of Energy (DOE) during the late 1990s and received the Secretary of Energy Gold Medal in 1996.

HPC Challenge & Opportunity:
I will leave discussions of the computer system challenges to my computer engineering friends… Let me speak for a minute, though, about an exciting and critically important application challenge that requires all of the leadership HPC we can muster: healthcare. Today the focus is treatment; we are a compassionate country and fund that focus significantly. The key, however, to truly living longer, and I mean substantially longer, is understanding prevention. To this end there are models of lifespan that include a significant coupling and feedback among the exposome, genomics, and other omics. This coupling is well known but not well understood. The technical challenge is understanding and modeling it so that lifespan simulations can be created to study the appropriate bio-feedback and delivery systems needed to drive innovation and discovery in behavioral, pharma, clinical process, device, and other healthcare areas. This can only be done with large-scale HPC. The payoff is enormous for our society… not only from a compassionate point of view but from an economic one as well. A healthier population is a more productive population.

What You Do To Recharge:
I am an avid hiker… I live in the Cherokee National Forest, not far from Knoxville, TN. In my part of the world, it is possible to hike simply by walking out my front door, or to drive 15 to 30 minutes and literally be at the trailheads of hundreds of trails in the Pisgah and Cherokee National Forests or Great Smoky Mountains National Park.

 

Thomas Zacharia

Short Bio:
Zacharia is Deputy Director for Science and Technology at Oak Ridge National Laboratory and was a main driver behind leadership computing and the building of ORNL’s scientific computing program. He recently returned from Qatar, where he initiated a national research program and the construction of world-class scientific infrastructure to support the Qatar National Research Strategy.

HPC Challenge & Opportunity:
The biggest technology challenge that I see is the dual one of creating a balanced and easily programmable machine.

Balance is principally a hardware problem. As “apex machines,” exascale computers will be few in number but will need to serve the same application spectrum as lower-tier machines. Thus, there will be a premium on exascale architectures that can be balanced (perhaps dynamically) for optimal performance on a broad range of applications and their various algorithms.

Programmability is essentially a software problem.  NSCI teams will be developing machines that cope with the increasingly visible limits of CMOS while exhibiting ever higher LINPACK benchmark results. These successes will come at the expense of programmability. The more difficult to program a computer becomes, the less accessible it is to its potential user base. To prevent “apex machines” from becoming uncoupled from the broader spectrum of users and their various (current and future) applications, extraordinary efforts must be made to develop software that will shield users from arcane details of exascale machine architectures while maintaining very high levels of application performance.

The balance part of the technology challenge can be overcome by closely coupling machine architecture development with a deep understanding of application algorithm performance. This need is already understood under the label “codesign.”

The programmability part of the challenge is another matter. If the history of high-performance computing is an indicator, progress here is concept-limited.

What You Do To Recharge:
I enjoy escaping to our small farm by the Tennessee River to enjoy the peace and quiet and to spend time with family and friends out on the lake. I also enjoy working on my 1976 Jaguar XJ12 Coupe. The XJ12 coupes are beautiful, affordable, and rare cars; maintaining one is another matter altogether! The XJ12C also brings fond memories of Jaguar, the fastest supercomputer in the world back in 2009-2010.

More live coverage of SC15.
