HPCwire Debuts Outstanding Leadership Awards at SC15

By John Russell

November 16, 2015

This year HPCwire has established a new category within its Annual Readers and Editors Choice Awards program to recognize Outstanding Leadership in HPC. We realize there is no single preeminent leader in HPC and that’s a good thing. The diversity of opinion and expertise is a major driver of progress. There are many individuals whose accomplishment and influence within HPC (and beyond) represent important leadership and are deserving of recognition.

In that vein, we think the inaugural group of nominees well represents the impressive work and achievement that everyone in the HPC community aspires to. The group encompasses a wide range of disciplines and roles, all of which are necessary to advance HPC and its impact on society. So while HPCwire readers and editors have already selected “winners” – you’ll have to discover them elsewhere – it’s entirely appropriate to shine a spotlight on all of the nominees.

The 2015 HPCwire Outstanding Leadership nominees include:

  • Jack Dongarra, University of Tennessee
  • Patricia K. Falcone, Lawrence Livermore National Laboratory
  • Rajeeb Hazra, Intel
  • Satoshi Matsuoka, Tokyo Institute of Technology
  • Horst Simon, Lawrence Berkeley National Laboratory
  • Thomas Sterling, Indiana University
  • Rick Stevens, Argonne National Laboratory
  • Pete Ungaro, Cray
  • Gil Weigand, Oak Ridge National Laboratory
  • Thomas Zacharia, Oak Ridge National Laboratory

These are of course familiar names within the HPC community. HPCwire asked each of the nominees to submit a short bio and to answer two questions: 1) Within your domain of expertise, what do you see as the biggest technology challenge facing HPC progress and how is that likely to be overcome? 2) Tell us something that few know about you with regard to your interests and what recharges you outside of work.

Their answers, which we present here, are as diverse as the group – who knew Satoshi Matsuoka was a karaoke fan, or that Thomas Zacharia has a passion for his vintage Jaguar coupe, or that Horst Simon recently took up surfing! Enjoy.

Jack Dongarra

Short Bio:
Dongarra is a University Distinguished Professor in the Electrical Engineering and Computer Science Department at the University of Tennessee and a researcher at Oak Ridge National Lab. He is author of the LINPACK benchmark and co-author of the TOP500 list and the High Performance Conjugate Gradient Benchmark (HPCG). Dongarra has been a champion of the need for algorithms, numerical libraries, and software for HPC, especially at extreme scale, and has contributed to many of the numerical libraries widely used in HPC.

HPC Challenge & Opportunity:
While the problems we face today are similar to those we faced ten years ago, the solutions are more complicated and the consequences greater in terms of performance. For one thing, the size of the community to be served has increased and its composition has changed. The NSCI has, as one of its five objectives, “Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing,” which implies that this “technological base” is not coherent now. This claim is widely agreed to be true, although opinions differ on why it is so and how to improve it. The selection of software for general use requires complete performance evaluation, and good communication with “customers”—a much larger and more varied group than it used to be. Superb software is worthless unless computational scientists are persuaded to use it. Users are reluctant to modify running programs unless they are convinced that the software they are currently using is inferior enough to endanger their work and that the new software will remove that danger.

From the perspective of the computational scientist, numerical libraries are the workhorses of software infrastructure because they encode the underlying mathematical computations that their applications spend most of their time processing. Performance of these libraries tends to be the most critical factor in application performance. In addition to the architectural challenges they must address, their portability across platforms and different levels of scale is also essential to avoid interruptions and obstacles in the work of most research communities. Achieving the required portability means that future numerical libraries will not only need dramatic progress in areas such as autotuning, but also need to be able to build on standards—which do not currently exist—for things like power management, programming in heterogeneous environments, and fault tolerance.
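
Dongarra's point that tuned libraries dominate application performance can be seen even on a laptop. The sketch below (purely illustrative; sizes and timings are assumptions) compares a textbook triple-loop matrix multiply against the same operation dispatched to a tuned BLAS through NumPy:

```python
import time
import numpy as np

def naive_matmul(A, B):
    """Textbook triple-loop matrix multiply: no blocking, no vectorization."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i, j] += A[i, p] * B[p, j]
    return C

n = 150  # small enough that the naive version finishes quickly
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

t0 = time.perf_counter()
C_naive = naive_matmul(A, B)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
C_blas = A @ B  # dispatches to whatever tuned BLAS NumPy was built against
t_blas = time.perf_counter() - t0

assert np.allclose(C_naive, C_blas)
print(f"naive: {t_naive:.3f}s  BLAS: {t_blas:.5f}s")
```

The two results agree to rounding, but the library call is typically orders of magnitude faster, which is exactly why portability and autotuning of such libraries matter so much.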

Advancing to the next stage of growth for computational science, which will require the convergence of HPC simulation and modeling with data analytics on a coherent technological base, will require us to solve basic research problems in Computer Science, Applied Mathematics, and Statistics. At the same time, going to exascale will clearly require the creation and promulgation of a new paradigm for the development of scientific software. To make progress on both fronts simultaneously will require a level of sustained, interdisciplinary collaboration among the core research communities that, in the past, has only been achieved by forming and supporting research centers dedicated to such a common purpose. A stronger effort is needed by both government and the research community to embrace such a broader vision. We believe that the time has come for the leaders of the Computational Science movement to focus their energies on creating such software research centers to carry out this indispensable part of the mission.

What You Do To Recharge:
Outside of my research into HPC I enjoy photography and watching and interacting with our two grandchildren.

 

Patricia K. Falcone

Short Bio:
Falcone is the Deputy Director for Science and Technology at the Lawrence Livermore National Laboratory (LLNL) in Livermore, California. She is the principal advocate for the Laboratory’s science and technology base and oversees the strategic development of the lab’s capabilities. A member of the senior management team, she is responsible for the lab’s collaborative research with academia and the private sector, as well as its internal investment portfolio, including Laboratory Directed Research and Development.

HPC Challenge & Opportunity:
In my view, the biggest challenge facing HPC progress is unlocking the creativity and innovation of talented folks across multiple domains including industry, academia, and laboratories and research institutes, in an integrated manner. There are both big challenges and big opportunities, but neither the challenges will be met nor the opportunities realized without focused efforts to push boundaries as well as targeted collaborations that yield benefits in myriad application spaces. Also necessary is bringing along talent and interest among young scholars, benefiting from creative research and technology disruptions, and working together to achieve ever increasing performance and enhanced impacts for scientific discovery, national security, and economic security.

What You Do To Recharge:
Personally, outside of work I enjoy family and community activities, as well as reading and the arts.

 

Rajeeb Hazra

Short Bio:
Hazra is Vice President, Data Center Group, and General Manager of the Enterprise and HPC Platforms Group at Intel, responsible for all technical computing across high-performance computing and workstations. Hazra has said that this new world of “HPC Everywhere” will require unprecedented innovation, which ties directly into his driving of Intel’s code modernization efforts and investment in its Intel Parallel Computing Centers Program.

HPC Challenge & Opportunity:
Let me approach this question from a different angle. You’ve asked what I see as the biggest technology challenge facing HPC progress, but the barrier – the hurdle – is much bigger than any one technology.  Traditional HPC is evolving into a new era with computational and data analytics capabilities we’ve never come close to experiencing before. The biggest hurdle we face in driving HPC progress is one of rethinking our approaches to problem solving and educating and training the community at large to better understand how to use the evolving HPC platforms and take full advantage of unprecedented levels of parallel performance.

With that being said, the technical challenge is to re-architect HPC systems in such a way as to achieve significantly reduced latency and orders-of-magnitude improvement in bandwidth, and to deliver balanced systems that can accommodate both compute- and data-intensive workloads on the same platform.  HPC elements such as fabric, memory, and storage continue to evolve and get better every year.  But architecting future HPC systems to be able to integrate the latest elements as they become available is a new direction.  With the right system framework and the latest innovations in processors, memory, fabric, file systems and a new HPC software stack, we are setting the stage for an extended period of rapid scientific discovery and tremendous commercial innovation that will change the landscape of private industry, academia and government research.

We also believe HPC progress will be shaped by the field of machine learning, an area we have been researching at Intel labs for several years.  You will be hearing a lot more about machine learning throughout the rest of this decade and Intel is fully committed to driving leadership in this exciting area.

Most people are starting to recognize that Intel has evolved our HPC business over the years to be so much more than just a processor company. Everything I’ve mentioned in this discussion refers to foundational elements of the Intel® Scalable System Framework, our advanced architectural approach for designing HPC systems with the performance capabilities necessary to serve a wide range of workloads such as traditional HPC, Big Data, visualization and machine learning.

This holistic, architectural approach is indeed the industry-changing technical challenge but one that we are well on the way to solving with our Intel® Scalable System Framework.

What You Do To Recharge:
I think like most people in this industry, I deeply value the time I get to spend with my family. I travel a great amount, so my down time is usually not scripted. I like to explore various interests and it’s often something spontaneous that sparks my passion at any given time. I thoroughly enjoy photography and find it both relaxing and stimulating. One thing most people wouldn’t know about me is my passion for music. The right music can lift your spirits and change your perspective. I enjoy listening to emerging music artists from around the world, and I appreciate all types of music but particularly fusion.

 

Satoshi Matsuoka

Short Bio:
Matsuoka is a professor at Tokyo Institute of Technology (TITech) and a leader of the TSUBAME series of supercomputers. Matsuoka built TITech into an HPC leader, adopting GPGPUs and other energy efficient techniques early on and helping put Japan on a course for productive exascale science. He is a fellow of the ACM and European ISC, and has won many awards, including the JSPS Prize from the Japan Society for Promotion of Science in 2006, the ACM Gordon Bell Prize for 2011, and the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology in 2012.

HPC Challenge & Opportunity:
Over the long term the biggest challenge is the inevitable end of Moore’s law. HPC has been the leader in accelerating computing for the past decades, during which we have seen a 1,000x increase in performance every 10 years. Such an exponential trend has allowed innovative, qualitative, and in fact revolutionary changes in the science and engineering enabled by HPC, as well as in everyday computing, where devices such as smartphones, which would once have been science fiction, have become a ubiquitous reality, again revolutionizing society.

However, with Moore’s law ending, we can no longer rely on lithographic improvements to increase transistor counts exponentially at constant power, potentially nullifying such advances. This is also a serious crisis for the HPC industry as a whole: without such performance increases there will no longer be strong incentives to replace machines every 3-5 years, significantly compromising vendors’ business and/or resulting in a steep rise in the cost of the infrastructure. We are already observing this in various metrics, such as the Top500 performance increase obviously slowing down in the past several years, and the significant delay in the deployment of exascale systems.

As such, as a community, it is essential that we embark on bold research measures to look for alternative means to continue the trend of performance increase, ideally by scaling some parameter other than transistor count, and ideally exponentially, for the coming years. I am in the process of launching several projects in this area along with the best collaborators both inside and outside Japan. I hope this will become the community research objective.

What You Do To Recharge:
By nature I have a strong affinity for technology advances that meet human bravery and ingenuity to compete and advance the state of the art. One hobby I have is to follow and study the global space programs, from the days of Sputnik and Apollo to recent achievements such as the New Horizons Pluto flyby. I frequent American and Japanese space sites such as the Johnson Space Center, where I am awed by the majestic presence of the Saturn V launch vehicle, but the most memorable recent moment was a visit to the Cosmonaut Museum during my recent trip to Moscow, observing Russian space history and how they competed with the US at the time, with Luna 3, the N1, and the Buran shuttle.

Similarly, sporting competition of the same nature, such as Formula One racing, is something I have followed for the last 30 years, seeing not only memorable racing moments but also significant advances in the machine technology, especially the recent hybrid 1.6-litre turbo cars that retain the speed but with amazing fuel efficiency. Of course I watch most races on TV, but I sometimes go to the circuit to enjoy the live ambience. Now if only the Grand Prix in Austin were a week closer to SC I would be able to attend both…

In that respect karaoke is of the same realm: in Japan every machine now has a sophisticated scoring system to judge your singing, and I often take it as a challenge to see how highly the machine rates my talents 🙂

 

Horst D. Simon

Short Bio:
Simon, an internationally recognized expert in computer science and applied mathematics, was named Berkeley Lab’s Deputy Director on September 13, 2010. Simon joined Berkeley Lab in early 1996 as Director of the newly formed National Energy Research Scientific Computing Center (NERSC), and was one of the key architects in establishing NERSC at its new location in Berkeley. Before becoming Deputy Lab Director, he served as Associate Lab Director for Computing Sciences, where he helped to establish Berkeley Lab as a world leader in providing supercomputing resources to support research across a wide spectrum of scientific disciplines. Simon holds an undergraduate degree in mathematics from the Technische Universität in Berlin, Germany, and a Ph.D. in Mathematics from the University of California, Berkeley.

Simon’s research interests are in the development of sparse matrix algorithms, algorithms for large-scale eigenvalue problems, and domain decomposition algorithms for unstructured domains for parallel processing. His algorithm research efforts were honored with the 1988 and 2009 Gordon Bell Prizes for parallel processing research. He was also a member of the NASA team that developed the NAS Parallel Benchmarks, a widely used standard for evaluating the performance of massively parallel systems. He is co-editor of the twice-yearly TOP500 list that tracks the most powerful supercomputers worldwide, as well as related architecture and technology trends.

HPC Challenge & Opportunity:
Let’s imagine 2030 for a moment. The one certain fact about 2030 is that all ten people on your list will be 15 years older and probably all of us will be in retirement. The other certain fact is that growth in computer performance will have slowed down even further. We can see this slowdown already now, for example in how the date when we will reach exascale has been pushed further into the future. These exascale challenges have been discussed at length, and we all agree what they are: power consumption, massive parallelism, etc.  But what’s important to keep in mind is that some time between 2025 and 2030 there will be a huge change. Technology will change, because CMOS-based semiconductors will no longer keep up, and people will change, because a new generation born after 2000 will take over HPC leadership.

What You Do To Recharge:
I like to say that being one of the senior managers of a national lab is my job, and doing mathematics is my hobby. What recharges me is an evening in the City: good food and movies, theater, ballet, or opera. For the outdoors, I like bicycling, and I recently started to try surfing, but only the baby waves at Linda Mar. Something that only a few people know: I spent a few weeks in Kenya last summer working as a volunteer in a Christian school, teaching 8th grade math – there you have it, I just can’t get away from mathematics.

 

Thomas Sterling

Short Bio:
Sterling is Chief Scientist and Executive Associate Director of the Center for Research in Extreme Scale Technologies (CREST), and Professor of Informatics and Computing at Indiana University. Dr. Sterling’s current research focuses on the ParalleX advanced execution model to guide the development of future-generation exascale hardware and software computing systems, as well as the HPX runtime system to enable dynamic adaptive resource management and task scheduling for significant improvements in scalability and efficiency. This research has been conducted through multiple projects sponsored separately by DOE, NSF, DARPA, the Army Corps of Engineers, and NASA. Since receiving his PhD from MIT in 1984, Sterling has engaged in applied research in fields associated with parallel computing system structures, semantics, design, and operation in industry, government labs, and higher education. Dr. Sterling has received numerous awards and in 2014 was named a Fellow of the American Association for the Advancement of Science for his efforts to advance science and its applications. Well known as the “father of Beowulf” for his research in commodity/Linux cluster computing, Dr. Sterling has conducted research in parallel computing including superconducting logic, processor in memory, asynchronous models of computing, and programming interfaces and runtime software.

HPC Challenge & Opportunity:
While many would cite power and reliability as the principal challenges facing HPC progress, I think the key factors inhibiting continued advancement are starvation (inadequate parallelism), overhead (work to manage hardware and software parallelism), latency (time for remote actions), and contention (the inverse of bandwidth and throughput). Of course there are certain classes of workload that will scale well, perhaps even to exascale. But I worry about those more tightly coupled, irregular, time-varying, and even strong-scaled applications that are less well served by conventional practices in programming and system structures.
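
The four factors Sterling names can be put into a back-of-the-envelope performance model. The sketch below is purely illustrative (the parameter values are invented for the example, and this is not Sterling's own formulation), but it shows how starvation, overhead, latency, and contention together erode the ideal speedup:

```python
# Toy model: run time as serial work (starvation) + divided parallel work
# + parallelism-management cost (overhead) + remote-action cost
# (latency, inflated by contention). All parameters are assumed values.

def run_time(work, workers, parallel_frac, overhead_per_task,
             tasks, latency, remote_ops, contention_factor):
    serial = work * (1.0 - parallel_frac)            # starvation: undividable work
    parallel = work * parallel_frac / workers        # ideally divided work
    overhead = tasks * overhead_per_task             # managing the parallelism
    comm = remote_ops * latency * contention_factor  # remote actions under contention
    return serial + parallel + overhead + comm

# 1000 s of work, 95% parallelizable
t_1 = run_time(1000, 1, 0.95, 1e-4, 1, 1e-6, 0, 1.0)
t_1024 = run_time(1000, 1024, 0.95, 1e-4, 1024 * 64, 1e-6, 1e7, 4.0)
print(f"speedup on 1024 workers: {t_1 / t_1024:.1f}x (ideal: 1024x)")
```

With these invented numbers the model yields roughly a 10x speedup on 1024 workers, far below the ideal, which is the kind of gap Sterling worries about for tightly coupled, irregular applications.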

What You Do To Recharge:
Sailing. It’s about the only thing I do during which I do not think of work for extended time. I don’t so much get on a sailboat as put it on, and become one with the wind and waves.

 

Rick Stevens

Short Bio:
Stevens is Associate Laboratory Director for Computing, Environment, and Life Sciences at Argonne National Laboratory. He helped build ANL’s Mathematics and Computer Science division into a leading HPC center and has been co-leading the DOE planning effort for exascale computing.

HPC Challenge & Opportunity
I think there are two long term trends that offer both opportunity and research challenges.

1. HPC + Data ==> Mechanism + Learning
The first is the need to combine traditional simulation-oriented HPC architectures and software stacks with data-intensive architectures and software stacks. I think this will be possible through the creation of mechanisms that enable users to combine not only their own code from these two areas but entire community-contributed software stacks; containers and related technologies will likely play a key role here. A special case of this new kind of integrated application is one that combines mechanistic models with statistical learning models. Machine learning provides a new dimension to exploit for approximate computing and begins to open up alternative models of computation that may provide means to continue scaling computational capabilities as hardware evolves toward power and reliability constraints more like those in biological systems. Future systems that are hybrids of traditional von Neumann design points and neuromorphic devices would help accelerate this trend.
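
A minimal sketch of the mechanism-plus-learning pattern Stevens describes: a mechanistic model captures the known physics, and a small learned correction absorbs what the mechanism misses. Everything here (the toy signal, the feature basis standing in for a trained ML component) is an illustrative assumption, not Stevens' method:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)

# "Truth": exponential decay plus an unmodeled oscillation, observed with noise
truth = np.exp(-t) + 0.2 * np.sin(3 * t)
obs = truth + 0.01 * rng.standard_normal(t.size)

mechanistic = np.exp(-t)          # the known physics only
residual = obs - mechanistic      # what the mechanism misses

# "Learning": fit the residual with a small linear model over a
# sine/cosine feature basis (standing in for a trained ML component)
X = np.column_stack([np.sin(3 * t), np.cos(3 * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, residual, rcond=None)
hybrid = mechanistic + X @ coef   # mechanism + learned correction

err_mech = np.sqrt(np.mean((obs - mechanistic) ** 2))
err_hybrid = np.sqrt(np.mean((obs - hybrid) ** 2))
print(f"RMSE mechanistic: {err_mech:.3f}  hybrid: {err_hybrid:.3f}")
```

The hybrid's error drops well below the mechanism-only error, illustrating why coupling simulation stacks with data-analytic stacks on one platform is attractive.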

2. STEM ==> STEAM
The second is cultural. Today there is a cultural (tools, algorithms, languages, approach, etc.) divide between those who use computers to do math, science, and engineering and those who use them for nearly everything else. There is a deep need to re-integrate these worlds: those who program close to the metal with those who have little interest in performance, and those driven by the structure of the problem with those driven by the elegance of the code. It’s only by reconnecting the deep “maker” inside everyone that we will have the diverse skill base needed to truly tackle the HPC and scientific computing challenges of the future, where it’s possible to understand and periodically re-invent the entire computing environment. This is what true co-design for future HPC will require.

What You Do To Recharge:
I like to travel, camp and pursue outdoor adventures. Kayaking, dog sledding, snowshoeing and canoeing are some things that I enjoy that I don’t get to do enough of. I also enjoy cooking and entertaining friends with my wife. Sometimes I like to cook with a traditional Dutch oven over an open fire. I also enjoy learning new areas of science… recently I’ve been spending down time studying cancer and neuroscience, and watching Doctor Who with my daughters.

 

Pete Ungaro

Short Bio:
Ungaro is President and Chief Executive Officer of Cray, Inc., and has successfully continued the tradition of this distinctly American supercomputing company with each generation of Cray supercomputer, including the next-generation “Shasta” line, selected to be the flagship “pre-exascale” supercomputer at Argonne National Laboratory.

HPC Challenge & Opportunity:
While it is clear that growing power constraints are a huge technology challenge, I believe the biggest challenge is the difficulty of getting high levels of performance out of the new system architectures this energy challenge is driving. Very large numbers of heterogeneous cores in a single node, unbalanced compute to communications capabilities, expanded and very deep memory and storage hierarchies, and a requirement for huge amounts of application parallelism are making it challenging to get good performance. At Cray, we are busy at work with our R&D team experimenting with different technologies including new many-core processor architectures, ways to control the power usage at a total system and application level, programming tools that help expose parallelism, and techniques and tools to help optimize placement and automate data movement in the hierarchy. The other major focus for us right now is working on ways to integrate the traditional modeling and simulation done on supercomputers with the ever-increasing need to do analytics on all the data that is being generated.  Our vision is to bring these two areas together in a single, adaptive supercomputer that can completely abolish the need to move data between multiple systems to properly handle these two very different but challenging workloads, and handle the entire workflow on a single system. Without a doubt it is a very exciting time in the world of supercomputing!

What You Do To Recharge:
With three kids busy with school and sports, that is where most of my extra time goes. My daughter is a senior in high school and a very good volleyball player, and my twin boys are in 8th grade and between the two of them play a number of sports such as football, lacrosse, basketball and parkour. Keeping up with all of them is a job in itself, but something I really enjoy doing!  My daughter recently talked me into joining CrossFit with her.  I’ve found that it’s an amazing way to recharge and stay in shape, so I can keep up with both my kids and everything we have going on at Cray.

 

Gil Weigand

Short Bio:
Weigand is the Director of Strategic Programs in the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory (ORNL). In this position Weigand develops new initiatives that integrate, consolidate, and focus the significant gains in energy S&T and computational-science capabilities on important global challenges related to energy, the environment, healthcare, emerging urban sustainability, and national security. Weigand has served in several management positions within the Department of Energy (DOE) during the late 1990s and received the Secretary of Energy Gold Medal in 1996.

HPC Challenge & Opportunity:
I will leave discussions of the computer system challenges to my computer engineering friends… Let me speak for a minute though about an exciting and critically important application challenge that requires all of the leadership HPC we can muster: healthcare. Today the focus is treatment; we are a compassionate country and we fund this focus significantly. The key, however, to truly living longer, and I mean substantially longer, is understanding prevention. To this end there are models of lifespan that include significant coupling and feedback among the exposome, the genome, and other omics. This coupling is well known but not well understood. The technical challenge is to understand and model it so that lifespan simulations can be created to study the appropriate bio-feedback and delivery systems needed to drive innovation and discovery in behavioral, pharma, clinical process, device, and other healthcare areas. This can only be done with large-scale HPC. The payoff for our society is enormous, not only from a compassionate point of view but also an economic one:  a healthier population is a more productive population.

What You Do To Recharge:
I am an avid hiker… I live in the Cherokee National Forest, not far from Knoxville, TN. In my part of the world, it is possible to hike simply by walking out my front door.  Or you could drive 15 to 30 minutes and literally be at the beginnings of hundreds of trailheads in the Pisgah and Cherokee National Forests or Great Smoky Mountains National Park.

 

Thomas Zacharia

Short Bio:
Zacharia is Deputy Director for Science and Technology at Oak Ridge National Laboratory and a main driver behind leadership computing and the building of ORNL’s scientific computing program. He recently returned from Qatar, where he initiated a national research program and the construction of world-class scientific infrastructure to support the Qatar National Research Strategy.

HPC Challenge & Opportunity:
The biggest technology challenge that I see is the dual one of creating a balanced and easily programmable machine.

Balance is principally a hardware problem.  As “apex machines,” exascale computers will be few in number – but will need to serve the same applications spectrum as lower-tier machines. Thus, there will be a premium on exascale architectures that can be balanced (perhaps dynamically) for optimal performance on a broad range of applications and their various algorithms.

Programmability is essentially a software problem.  NSCI teams will be developing machines that cope with the increasingly visible limits of CMOS while exhibiting ever higher LINPACK benchmark results. These successes will come at the expense of programmability. The more difficult to program a computer becomes, the less accessible it is to its potential user base. To prevent “apex machines” from becoming uncoupled from the broader spectrum of users and their various (current and future) applications, extraordinary efforts must be made to develop software that will shield users from arcane details of exascale machine architectures while maintaining very high levels of application performance.

The balance part of the technology challenge can be overcome by closely coupling machine-architecture development with a deep understanding of application algorithm performance. This need is already understood under the label “codesign.”

The programmability part of the challenge is another matter. If the history of high performance computing is an indicator, progress here is concept-limited.

What You Do To Recharge:
I enjoy escaping to our small farm by the Tennessee River to enjoy the peace and quiet and to spend time with family and friends out on the lake.  I also enjoy working on my 1976 Jaguar XJ12 Coupe. The XJ12 coupes are beautiful but affordable cars that are rare.  Maintaining one is a whole other matter!  The XJ12C also brings fond memories of Jaguar, the fastest supercomputer in the world back in 2009-2010.

More live coverage of SC15.


By Tiffany Trader and John Russell

Cray Wins NNSA-Livermore ‘El Capitan’ Exascale Contract

August 13, 2019

Cray has won the bid to build the first exascale supercomputer for the National Nuclear Security Administration (NNSA) and Lawrence Livermore National Laborator Read more…

By Tiffany Trader

AMD Launches Epyc Rome, First 7nm CPU

August 8, 2019

From a gala event at the Palace of Fine Arts in San Francisco yesterday (Aug. 7), AMD launched its second-generation Epyc Rome x86 chips, based on its 7nm proce Read more…

By Tiffany Trader

Lenovo Drives Single-Socket Servers with AMD Epyc Rome CPUs

August 7, 2019

No summer doldrums here. As part of the AMD Epyc Rome launch event in San Francisco today, Lenovo announced two new single-socket servers, the ThinkSystem SR635 Read more…

By Doug Black

High Performance (Potato) Chips

May 5, 2006

In this article, we focus on how Procter & Gamble is using high performance computing to create some common, everyday supermarket products. Tom Lange, a 27-year veteran of the company, tells us how P&G models products, processes and production systems for the betterment of consumer package goods. Read more…

By Michael Feldman

Supercomputer-Powered AI Tackles a Key Fusion Energy Challenge

August 7, 2019

Fusion energy is the Holy Grail of the energy world: low-radioactivity, low-waste, zero-carbon, high-output nuclear power that can run on hydrogen or lithium. T Read more…

By Oliver Peckham

Cray, AMD to Extend DOE’s Exascale Frontier

May 7, 2019

Cray and AMD are coming back to Oak Ridge National Laboratory to partner on the world’s largest and most expensive supercomputer. The Department of Energy’s Read more…

By Tiffany Trader

Graphene Surprises Again, This Time for Quantum Computing

May 8, 2019

Graphene is fascinating stuff with promise for use in a seeming endless number of applications. This month researchers from the University of Vienna and Institu Read more…

By John Russell

AMD Verifies Its Largest 7nm Chip Design in Ten Hours

June 5, 2019

AMD announced last week that its engineers had successfully executed the first physical verification of its largest 7nm chip design – in just ten hours. The AMD Radeon Instinct Vega20 – which boasts 13.2 billion transistors – was tested using a TSMC-certified Calibre nmDRC software platform from Mentor. Read more…

By Oliver Peckham

TSMC and Samsung Moving to 5nm; Whither Moore’s Law?

June 12, 2019

With reports that Taiwan Semiconductor Manufacturing Co. (TMSC) and Samsung are moving quickly to 5nm manufacturing, it’s a good time to again ponder whither goes the venerable Moore’s law. Shrinking feature size has of course been the primary hallmark of achieving Moore’s law... Read more…

By John Russell

Cray Wins NNSA-Livermore ‘El Capitan’ Exascale Contract

August 13, 2019

Cray has won the bid to build the first exascale supercomputer for the National Nuclear Security Administration (NNSA) and Lawrence Livermore National Laborator Read more…

By Tiffany Trader

Deep Learning Competitors Stalk Nvidia

May 14, 2019

There is no shortage of processing architectures emerging to accelerate deep learning workloads, with two more options emerging this week to challenge GPU leader Nvidia. First, Intel researchers claimed a new deep learning record for image classification on the ResNet-50 convolutional neural network. Separately, Israeli AI chip startup Hailo.ai... Read more…

By George Leopold

Leading Solution Providers

ISC 2019 Virtual Booth Video Tour

CRAY
CRAY
DDN
DDN
DELL EMC
DELL EMC
GOOGLE
GOOGLE
ONE STOP SYSTEMS
ONE STOP SYSTEMS
PANASAS
PANASAS
VERNE GLOBAL
VERNE GLOBAL

Nvidia Embraces Arm, Declares Intent to Accelerate All CPU Architectures

June 17, 2019

As the Top500 list was being announced at ISC in Frankfurt today with an upgraded petascale Arm supercomputer in the top third of the list, Nvidia announced its Read more…

By Tiffany Trader

Top500 Purely Petaflops; US Maintains Performance Lead

June 17, 2019

With the kick-off of the International Supercomputing Conference (ISC) in Frankfurt this morning, the 53rd Top500 list made its debut, and this one's for petafl Read more…

By Tiffany Trader

AMD Launches Epyc Rome, First 7nm CPU

August 8, 2019

From a gala event at the Palace of Fine Arts in San Francisco yesterday (Aug. 7), AMD launched its second-generation Epyc Rome x86 chips, based on its 7nm proce Read more…

By Tiffany Trader

A Behind-the-Scenes Look at the Hardware That Powered the Black Hole Image

June 24, 2019

Two months ago, the first-ever image of a black hole took the internet by storm. A team of scientists took years to produce and verify the striking image – an Read more…

By Oliver Peckham

Cray – and the Cray Brand – to Be Positioned at Tip of HPE’s HPC Spear

May 22, 2019

More so than with most acquisitions of this kind, HPE’s purchase of Cray for $1.3 billion, announced last week, seems to have elements of that overused, often Read more…

By Doug Black and Tiffany Trader

Chinese Company Sugon Placed on US ‘Entity List’ After Strong Showing at International Supercomputing Conference

June 26, 2019

After more than a decade of advancing its supercomputing prowess, operating the world’s most powerful supercomputer from June 2013 to June 2018, China is keep Read more…

By Tiffany Trader

Ayar Labs to Demo Photonics Chiplet in FPGA Package at Hot Chips

August 19, 2019

Silicon startup Ayar Labs continues to gain momentum with its DARPA-backed optical chiplet technology that puts advanced electronics and optics on the same chip Read more…

By Tiffany Trader

Qualcomm Invests in RISC-V Startup SiFive

June 7, 2019

Investors are zeroing in on the open standard RISC-V instruction set architecture and the processor intellectual property being developed by a batch of high-flying chip startups. Last fall, Esperanto Technologies announced a $58 million funding round. Read more…

By George Leopold

  • arrow
  • Click Here for More Headlines
  • arrow
Do NOT follow this link or you will be banned from the site!
Share This