Expanding the HPC Ecosystem

By Steve Conway

July 28, 2006

Over the past three years, the Council on Competitiveness has sponsored pioneering studies and conferences on the relationship between HPC and business competitiveness, under the direction of Council Vice President Suzy Tichenor. In January 2006, Bob Graybill, former DARPA HPCS program manager and current division director of USC's Information Sciences Institute, became a senior advisor to the Council. He is helping to guide the Council's HPC Initiative as it works to link together government, academic and business organizations in a “national ecosystem” aimed at advancing innovation and competitiveness through greater use of HPC.

In this exclusive HPCwire interview, Tichenor and Graybill discuss the importance of HPC for businesses and preview information that will be disclosed in more detail at the Council's annual HPC Users Conference on September 7.

HPCwire: When and why did the Council begin looking at the importance of HPC in the private sector?

Tichenor: We've been looking at this for several years, as an outgrowth of our work in innovation. The Council believes that for the U.S. to remain preeminent in global markets, to increase productivity and raise our standard of living, we as a nation must become more innovation-based. If work is routine and rule-based, if it can be digitized and reliably codified, there will be a low-cost source of labor somewhere in the world to compete for that work and for those jobs.

Our competitive strength is in our ability to be more innovative. So the question becomes, how do you promote, finance and educate for innovation, and what kind of infrastructure is needed to support this? That's where HPC comes in. We believe there's a need for pervasive access to and use of supercomputing. Three years ago, we launched an initiative to identify how HPC is really being used by businesses, how this is linked to innovation and what challenges prevent wider use of HPC in the business sector.

HPCwire: In a nutshell, what did you find?

Tichenor: First and foremost, we found that for companies that rely on it today, HPC is absolutely essential to business survival. It's not just a “nice to have” tool. We also identified some challenges to more widespread adoption of HPC by business organizations. There's a need for more production-quality application software, for better interfaces, and for more people who know how to use HPC as a production tool. Businesses are telling us that access to talent is a pacing item. This brought us to the issue of education and how to make people more comfortable using HPC.

HPCwire: Was any of this surprising?

Tichenor: Some people who've been immersed in HPC for years have understood the situation, but our studies were the first in-depth, market-based research findings on this topic, and they surprised many people. A lot of thinking in HPC has been focused on how to build better computer systems. The Council is more interested in how these systems can be used most effectively to drive business success and competitiveness.

Unfortunately, HPC is still a niche market within the overall computing market. Our research looked at the full spectrum of HPC users and discovered a bimodal pattern. There is a small group of high-end users and a much larger group of entry-level users, but not many in the middle. We call this gap in the spectrum the “missing middle.” There is another large group of people who are doing technical computing on the desktop, but haven't used HPC and don't understand its benefits. We call this group the “never evers.” The Council is not only trying to address the important needs of high-end HPC users, we are also focusing on how to fill the “missing middle” and how to encourage the “never evers” to adopt HPC for greater competitiveness.

HPCwire: So, how can you fill in the “missing middle” of the market and convince the “never evers” to use HPC?

Tichenor: That's what we're exploring now. We have a tremendous HPC Advisory Committee that has been meeting for a few years now. This is a brain trust of senior executives from the government, academia, private industry, vendors and other key constituencies. Collectively, we see a need to develop mechanisms for reaching out to the “never evers” and entry-level HPC users, so we can expose them more to HPC and grow the market. This will require interesting partnerships that connect the business community with universities, national labs and other parties, not just for access to cycles, but also for access to expertise. The Council and our HPC Advisory Committee want to figure out how to leverage this expertise to provide greater ROI for the country. We will launch pilot programs to help introduce companies to HPC and do many other things that Bob can talk more about.

One reason we're so excited is that we believe HPC is undervalued in many regions of the U.S. Many businesses and other organizations are not aware that we have these HPC assets, including the on-demand services being developed by some vendors, that could help stimulate regional economic development. The Council has a significant program on regional economic development, and we want to see HPC integrated into these regional development plans. HPC can also be a tool for attracting companies to a region.

There are already some interesting partnership models out there. Exploring these models is one of the themes of the Council's September 7 HPC Users Conference in Washington. We recently did two surveys for the NSF and DOE's NNSA, to look at where their HPC-related partnerships with industry have been successful and where the stumbling blocks are. Overall, it turns out that these public-private partnerships have been tremendously successful, yet many of the participating businesses said they were unaware of these valuable HPC resources before starting the program. We need to change that, because these partnerships are a win-win for everyone. The businesses become more competitive through their interactions with the universities and labs and their access to more advanced HPC systems and expertise, and the sponsors advance their own problem solving through techniques they learn from industry.

HPCwire: Are there other areas for expanding HPC usage that you are exploring?

Tichenor: We see strong potential for extending HPC usage through the supply chain, wherever appropriate. This began happening years ago in some more mature HPC markets, such as the automotive sector. In other sectors today, however, large companies use HPC but their suppliers don't.

In our September conference, we'll have a number of companies speaking about this, including Wal-Mart, whose requirements drive product development for many of its suppliers. For example, Wal-Mart might require a large consumer products company to rethink its packaging, and the consumer products company might then have to meet with its suppliers. Some consumer products companies already use HPC. You've probably heard the example of Procter & Gamble using HPC to redo the manufacturing process for Pringles. We want to explore how these firms can extend that expertise to their suppliers to make the entire supply chain more competitive. There is also an important need for HPC in optimizing the supply chain process itself. This will also be discussed at our conference.

HPCwire: Bob, you had a major impact in shaping the HPC industry at DARPA and then became division director of USC's Information Sciences Institute. How did your connection with the Council happen?

Graybill: Based on my prior experiences, especially at DARPA, I saw an opportunity to help the U.S. private sector exploit HPC more fully for greater competitiveness. As you know, an important goal of DARPA's HPCS program has been to develop a commercially viable HPC system that can deliver breakthrough sustained performance and productivity across a spectrum of national security and other applications, including applications important to industry.

I approached various organizations to gauge their interest in helping me explore this opportunity. The Council was very interested, which wasn't really surprising, given the strong groundwork they had laid through research and discussions during the past three years. During that time the Council did a tremendous job of fact-finding, and this investigative work will be an ongoing effort. The natural progression, however, was to ask, “Now what do we do with all this information?” Based on the Council's findings and the bird's-eye view of the HPC industry I acquired at DARPA, it was clear that, at least where industry was concerned, HPC might remain a niche market forever. If that happens, our companies and the country will lose out on a real opportunity to accelerate innovation and competitiveness. We need to actively work to create a national HPC ecosystem that businesses can use.

The Council is an ideal starting point for this initiative. Through their HPC Advisory Committee and their staff, the Council is working with the highest levels of industry, government decision-makers and labs, academia, vendors, and other HPC stakeholders. Equally important, the Council understands the need to make a business case for an HPC ecosystem. Recent studies the Council has undertaken, including the new NSF and NNSA studies that will be discussed for the first time at the Council's HPC Users Conference on September 7, show that there are already some successful models that the business ecosystem could expand on.

HPCwire: What is your role at the Council, Bob?

Graybill: My role is as a senior advisor and my objective is to work with the Council to drive the formation of this ecosystem, which we're calling the National Innovation Collaboration Ecosystem, or NICE for short. NICE will link together the key HPC constituencies to share expertise and thinking, including organizations from government, academia and industry, vendors and others. It will also serve as an information exchange to help businesses gain access to HPC hardware, software, networking resources, and expertise.

The aim of NICE is to boost the global competitiveness of U.S. businesses by creating a collaborative HPC infrastructure that will help our firms transform ideas into usable products. U.S. businesses can't compete globally based on hourly labor rates, as Suzy said; we have to compete through ideas and innovation. Leading U.S. corporations in a variety of industries are already doing this by using HPC for virtual prototyping, “what-if” analyses and other forms of modeling and simulation. We need to make sure all companies, regardless of size, have the kind of ISV software, expertise and other HPC resources they need to remain competitive, and we need to encourage more pervasive use of HPC throughout the supply chain. As part of the NICE initiative, we also need to do more to promote HPC as an important, exciting career path in our high schools and universities. We need to renew that talent stream for the future.

HPCwire: How will you go about organizing the National Innovation Collaboration Ecosystem?

Graybill: Again, the Council provides a strong starting point. They work with many government agencies, universities, and private sector industries. The Council's HPC Advisory Committee will serve as a brain trust and provide valuable oversight. Through my time at DARPA, I have gained considerable experience working with a diverse community in support of a common goal. It's essential to bring together many organizations and to have all the major constituencies involved.

There are six key organizing areas. First, we need to understand and incorporate the dynamics of the market and the users. Second, we need to do industry pilot studies. Third, for the purposes of this ecosystem we need to converge on a backbone infrastructure for high performance computing and communication that's standards-based to the extent possible. Fourth, we need to create an HPC Innovation Service Portal where businesses, especially the “never evers” and entry-level HPC users, can access HPC expertise, cycles and other resources. Some of these companies don't need to use HPC every day and can't justify creating their own infrastructures.

The fifth key element is robust applications software that's scalable and ready to use. This will be challenging to accomplish, but the Council's studies have shown that improving applications software is extremely important for industry, and a large majority of the ISVs and businesses that were surveyed are willing to partner with outside organizations to improve the software. We will be a catalyst to help make this happen.

Last and definitely not least, the NICE ecosystem needs to focus on training and education. We need to reach out to universities and high schools, to help create new generations of students who are excited about careers in HPC and are prepared to help advance the HPC industry and U.S. competitiveness.

HPCwire: How would the ecosystem be funded and managed for the longer term?

Graybill: Various players have strong interest in different parts of the ecosystem. The idea is to get them to lead in their areas of interest. Where long-term management of the ecosystem is concerned, we'll need to see along the way where that would best reside. The Council's job, and my job, is to get the ball rolling and stay involved. For that to happen, all six of the areas I described need to move forward in parallel. All of us who've been connected to the HPC industry for a while realize that moving hardware forward faster than software is problematic, and vice versa. All of the elements must work and evolve together.

Again, we're not starting from scratch. The Council has done a lot of related work through its conferences and studies, and we're aware of some public-private sector programs in the U.S. that could serve as effective models for what we're aiming to do on a larger scale.

HPCwire: How unusual is it to have a formal initiative to help meet the private sector's HPC needs? Are other countries doing this?

Graybill: To our knowledge, no other country is taking an approach to helping drive private-sector innovation and competitiveness that's as holistic and comprehensive as ours. Piecemeal approaches don't work. You can't focus just on innovation or software or cycles. You have to move all the elements forward together. As the Council's HPC Advisory Committee said, it's a hard problem but we know of no other way of solving it.

HPCwire: Can you tell me more about the HPC Advisory Committee's role vis-à-vis the NICE initiative?

Graybill: We view the HPC Advisory Committee as a brain trust from both the business and technical perspective. We want them engaged at the business level, and we would like the key technical people from their organizations to be involved actively in our workshops and other activities.

HPCwire: In his State of the Union address, President Bush proposed substantially increasing funding for supercomputing and basic science. Work done by the Council and by groups like HECRTF that you, Bob, were heavily involved with, helped make HPC a higher priority for Congress and the Administration. To what extent would this increased funding help the private sector?

Graybill: The increased investment proposed in the President's American Competitiveness Initiative for basic research in the physical sciences and engineering will help enhance U.S. innovation capacity and stimulate the breakthroughs that drive new product development, economic growth and competitiveness. We need these investments because our competition is not standing still. Other countries have also recognized the linkages between increased innovation and competitive gain, and are making their own investments. If we stand still, we will fall behind. The HPCS program has done a great job of focusing on productivity and on sustained performance. In the future, it is critical that the U.S. focus less on peak flops and more on the whole HPC ecosystem in order to accelerate private and public sector innovation.

Tichenor: The government investment in HPC helps the business sector in several ways. First, programs like HPCS provide critical cost-sharing opportunities to advance HPC R&D. When you have a market as small as HPC, it's hard for vendors to garner enough revenue to make these major R&D investments alone. And these investments are very important because high-end users have unsolved problems that require us to keep pushing the technology envelope. Petascale problems aren't limited to government scientific research. We've published a number of case studies that confirm that industry also has problems needing petascale computing. They exist in the oil industry, the automotive and aerospace industries, and elsewhere.

Second, the government's investment in purchasing HPC systems is also crucial. It not only helps the government meet its mission-critical requirements, it also provides an important revenue stream to the HPC vendors so they can invest more in R&D for future-generation HPC systems. Additionally, the government is usually the first to purchase the most advanced systems. As aggressive users with highly complex problems to solve, they push and prove out the technology, providing valuable information back to the hardware and software developers. The developers use this information to make more usable and affordable products, enabling wider adoption of this technology across the private sector. This in turn helps to grow the market and increase our competitiveness. And the healthy cycle continues.

HPCwire: To what extent do businesses have access to the really big government systems?

Tichenor: More and more. Last year, for example, DOE's INCITE program was opened up for the first time to participation by the business community. This program is extremely valuable, because it provides access to some of the most powerful supercomputers in the country, systems that industry cannot afford to purchase at this time. Four companies, in addition to a number of universities, passed the rigorous DOE selection process and received large allocations of time on these advanced DOE systems. INCITE really helps the U.S. to leverage some of its largest HPC assets for an additional competitive lift to the country.

HPCwire: This may sound like a softball question, but why is it important for the government and academic community to learn about the HPC needs of businesses?

Graybill: They have more in common than they sometimes realize. It takes much too long for an initial idea to get developed by a university, picked up by a lab and eventually enter industry. To reduce the cycle time, we need to get all these parties engaged in a collaborative environment.

Tichenor: The HPC needs of government, academia and business are interrelated and interdependent. All of these parties are ultimately in this together, so they need to share perspectives and progress. Businesses are often solving problems that are similar to those that government researchers are tackling, so there are opportunities to cross-pollinate. Government and university researchers have lessons they can share with business about doing high-end work in a research environment, and businesses can share some things with government and academic people about applying research in real-world production environments. As Bob mentioned earlier, our national security is inextricably linked to our economic strength. Increasingly, our economic strength will be tied to our ability to out-innovate and out-compute, and HPC is critical for this.

HPCwire: There is a general perception that the U.S. is globally dominant in HPC technology and applications. What is your assessment of U.S. HPC competitiveness today? Are we in danger of falling behind other countries?

Graybill: The U.S. is the overall leader today, but the recent example of the Earth Simulator, and ambitious petascale plans by several nations, remind us that we need to remain vigilant and committed to HPC leadership. It's very encouraging to see U.S. petascale initiatives proceeding at DARPA HPCS, DOE and the NSF. None of these initiatives is aimed at advancing HPC technology for its own sake, however. The goal is to improve our ability to solve problems for government, science and industry. In that context, as Suzy mentioned, we as a nation still face serious challenges in HPC applications software, renewing the talent stream, etc. The NICE ecosystem will help keep HPC technology and human resources aligned with the problems, so the U.S. can remain a leader in both HPC technology and problem solving.

Tichenor: As Bob mentioned earlier, standing still is falling behind. And Japan not only has aggressive development plans, it also recognizes the importance of using HPC across industry. Supercomputing sits at the top of that country's list of top 10 science goals, and government documents link these goals to the international competitiveness of its industries.

This is an important indicator that the competitiveness game is changing. It's not enough to just make the most powerful computers. Competitive advantage comes from using them to solve complex industrial problems that permit companies to achieve and maintain leadership in the highly competitive global marketplace.

HPCwire: On September 7, the Council will hold its third annual HPC Users Conference in Washington. What will you focus on this year?

Tichenor: This year's conference will be particularly interesting because it will set the stage for the next phase of the Council's HPC work, moving that work from a research-and-planning mode to an implementation mode and launching the NICE initiative. The dialogue won't be as much about what the problems are as about the path forward: the pilot programs, the investment programs and the best business models that are needed to keep HPC healthy in the U.S. We'll talk about the roles of the public and private sectors, and how to make HPC a win-win for the country.

We're going to look at the collaborative models that have worked well and at where adjustments need to be made. We need to leverage the country's tremendous investment in HPC facilities and expertise, and look at how to drive HPC more broadly across the private sector.

As we've tried to do in the past, we'll bring people to the conference whom you don't typically hear speak at HPC conferences. These people have important insights into how we can best leverage this technology for competitive advantage.
