The New Frontier: HPC in Enterprise Applications

By Ed Turkel

July 21, 2006

To the once-rarefied list of human activities to which HPC has been frequently applied — predicting the weather, simulating chemical reactions, sequencing genomes and so on — one can now add business applications. This is particularly true in industries like aerospace and finance, which are heavily dependent on science, math and engineering, but HPC is turning up in less-expected places, like retail and insurance. As time goes on and HPC hardware gets less expensive, more powerful and easier to implement, the list of commercial uses for cluster-based supercomputing will expand as fast as business decision makers can find ways to translate it into a business advantage.

While off-the-shelf HPC has been widely available for several years, pundits have energetically debated the merits of the various technologies for just as long. Now it appears that just as monolithic, custom-built supercomputing towers have been largely outpaced by parallel cluster systems in universities and government labs, Itanium 2-based servers are an increasingly competitive choice for many of today’s enterprise HPC needs.

Some of the reasons for Itanium 2-based systems’ growing share of the enterprise HPC market include scalability, low TCO and greater flexibility in operating systems and software providers compared to RISC-based systems. As Itanium 2-based systems continue to make inroads into uncharted business territories, the very definition of what constitutes HPC will shift.

On the Ground

Engineering-driven companies are the first place one might look for cutting-edge uses of 64-bit HPC to meet enterprise challenges, as these industries represent a bridge between “traditional” realms of supercomputing and more contemporary commercial applications. One such area where Itanium 2-based solutions are leading the pack is auto manufacturing, where the ability to accurately simulate real-world conditions saves time and money and can result in a palpable competitive advantage.

Computer-aided engineering (CAE) has revolutionized the auto industry. Historically, the manufacture of a new car model could require 60 or more physical prototypes for design, development and crash tests, and each prototype could cost up to $500,000 — adding up to tens of millions of dollars in manufacturing costs before a single car was sold.
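A quick back-of-the-envelope check, using only the figures cited above, shows how rapidly those costs compound (a minimal Python sketch; the prototype count and per-unit cost are the article’s own upper estimates):

    # Rough prototype-cost arithmetic using the upper estimates cited above.
    prototypes = 60                  # physical prototypes for one new model
    cost_per_prototype = 500_000     # dollars per prototype (upper estimate)
    total = prototypes * cost_per_prototype
    print(f"Estimated prototype spend: ${total:,}")   # -> $30,000,000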

In the past decade, the increasing sophistication of mathematical simulations made possible by HPC has rapidly reduced the number of prototypes required before rolling out a new model to almost zero, resulting in substantially shortened production cycles for those manufacturers able to implement the technology most effectively.

Simulations are by their very nature memory intensive; the more data points that can be included, the more accurately the simulation maps to the physical world and the more precise the result will be. This is why 64-bit systems are ideal for CAE. With vastly superior on-board memory caching and I/O systems designed to deal with larger data volumes, servers based on the Intel Itanium 2 processor can provide faster, more accurate calculations at a lower price point than comparable RISC-based systems.
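To make the 64-bit argument concrete, consider the memory footprint of a large crash-simulation mesh. The sketch below uses illustrative numbers — the mesh size and per-node data are assumptions, not figures from any particular CAE package:

    # Why large CAE models outgrow a 32-bit address space.
    # All model sizes here are illustrative assumptions.
    nodes = 50_000_000                 # finite-element nodes in a big crash model
    doubles_per_node = 12              # e.g., position, velocity, stress terms
    bytes_needed = nodes * doubles_per_node * 8   # 8 bytes per double

    limit_32bit = 2**32                # 4 GiB: the ceiling for a 32-bit process
    print(f"Model needs {bytes_needed / 2**30:.1f} GiB; "
          f"32-bit limit is {limit_32bit / 2**30:.0f} GiB")
    # A 64-bit system can keep the entire model in memory as a single job.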

HPC also delivers a greater ability to innovate, which is particularly important in the luxury auto sector. For high-end car buyers, the smallest feature gains can make the difference between one brand and another. More accurate simulations result in a better-functioning final product: less vibration, better performance in diverse conditions, greater safety — even improvements in climate control and windshield de-icing are made possible by HPC.

In addition to the performance successes of Itanium 2-based systems for CAE, the approach computing vendors are taking to the overall HPC marketplace makes a big difference for these customers. Instead of trying to maintain a monopoly on its HPC solutions, Intel has partnered with as many different vendors as possible to provide the widest range of solutions to users of Itanium 2-based systems, ensuring that the most sophisticated simulation tools will always be available on Itanium 2-based servers and clusters. The Itanium Solutions Alliance is one resource available to users and developers of Itanium-based systems, offering programs, tools, workshops and directories that lay out the options available with Itanium 2-based solutions and help customers implement them.

Users of Itanium 2-based solutions also have greater freedom in choosing operating systems — including Microsoft Windows, HP-UX and other Unix variants, and Linux — and can run multiple operating systems simultaneously. This makes it easy for carmakers to upgrade underperforming legacy systems with less downtime. In fact, eight of the top 10 car manufacturers have deployed Itanium 2-based systems, demonstrating that cost-effective HPC is critical for success.

In the Air

In the field of aeronautics, the absolute need for physical testing puts the dream of zero-prototype production much further from reality than in auto manufacturing. On the whole, however, the power of HPC to run accurate simulations may be even more critical to success for aerospace companies, as the cost of building multiple prototypes has always been prohibitively high.

Aerospace companies also have particular needs that have not always been fully met by past HPC solutions. Because a given aircraft can remain in use for decades, the ability to duplicate analyses with absolute precision years removed from the initial design process is required for maintenance over the life of the product. The need to reproduce mathematical results and to use hardware and software systems that will continue to be supported is key not only to meeting FAA and JAA standards but also to keeping customers happy over the long term.

While RISC-based systems can be effectively tuned to meet the computing challenges of a given aerospace company, on a RISC-based platform the need for longevity could, in theory, bind such a company to one HPC solution provider for decades regardless of the cost or quality of the relationship.

Itanium 2-based servers are an appealing choice in this regard because they are based on industry standards, meaning they are supported by a wider community of users and developers, each with a stake in the continued viability of the architecture as well as a commitment to expanding its capabilities. For example, along with Intel, the other founders of the Itanium Solutions Alliance — Bull, Fujitsu, Fujitsu Siemens Computers, Hitachi, HP, NEC, SGI and Unisys — all provide Itanium-based solutions. This advantage is enhanced by the fact that Itanium 2-based systems support a wider variety of operating systems than any other HPC solution, providing the ability to keep legacy applications running intact for the long term without necessarily having to re-port at every turn.

These facts point to yet another reason for aerospace companies to make use of Itanium 2-based HPC clusters: because the industry is highly concentrated in a few key players yet has a high need for specialized software, makers of aircraft tend to write an extraordinary number of applications in-house.

While it would seem that customized or RISC-based hardware would offer the greatest opportunity for such customization, in many cases the opposite is true. With Itanium 2-based servers, standards-based hardware and true open-source OS options, IT staff are able to write, deploy, optimize and modify custom software solutions more easily than on systems designed and controlled by RISC-based computing vendors with a stake in restricting access to their technology.

Under the Earth

Petroleum geology is yet another industry benefiting from advances in cluster computing, as it relies on the analysis of large seismic datasets to image underground reserves and calculate drilling locations for maximum output. It is also another business in which supercomputers have been in use for decades but which is now being transformed by cluster-based HPC.

For one thing, the outsourcing of seismic data processing to specialized geophysical service companies is becoming a thing of the past. Energy companies are reaping the cost benefits and efficiencies of bringing data processing in-house with massively-scaled cluster computing solutions. In fact, 44 of the top 500 most powerful supercomputers now belong to energy companies.

With the massive onboard memory and easy scalability of Itanium 2-based systems, geological analyses that used to take months are now performed in weeks or days, allowing petroleum companies to be more nimble and efficient in their drilling practices.

In the past, less powerful processors and the necessity of utilizing tape storage required the data to be sliced into manageable chunks. The efficiency gained from employing Itanium 2-based server clusters lies in the ability of Intel Itanium processors to handle thousands of gigabytes of data as a single job, obviating the need to cut the work into smaller segments and shuffle those segments back and forth between hard disks and CPUs. The decreasing cost of hard disk storage has also played a strong role in enabling rapid, high-precision earth imaging by reducing reliance on tape-based storage systems.
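The difference between the two workflows can be sketched in a few lines of Python. The tiny synthetic volume below stands in for a multi-terabyte survey; the shapes, values and the RMS-amplitude computation are illustrative assumptions, not an actual seismic-processing code:

    # Contrast: hand-sliced chunked processing vs. one pass over a whole volume.
    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for a full survey: (inline, crossline, depth-sample) amplitudes.
    volume = rng.standard_normal((64, 64, 128)).astype(np.float32)

    # Tape-era workflow: slice into memory-sized chunks, process, then merge.
    chunk_sums = [np.square(volume[i:i + 16], dtype=np.float64).sum()
                  for i in range(0, 64, 16)]
    rms_chunked = np.sqrt(sum(chunk_sums) / volume.size)

    # Large-memory workflow: one pass over the whole volume, no bookkeeping.
    rms_whole = np.sqrt(np.mean(np.square(volume, dtype=np.float64)))

    assert np.isclose(rms_chunked, rms_whole)   # same answer, far less plumbing
    print(f"RMS amplitude: {rms_whole:.4f}")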

In addition to overall efficiency and price/performance gains, Itanium 2-based clusters are setting new HPC benchmarks for earth imaging. It is in the nature of seismic petroleum exploration that the more data one can process, the further one can “see” underground. Geophysical images may contain ten billion or more data points. Due to the complex interactions of seismic waves, each of these data points must be derived from iterative calculations involving the entire dataset.
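A toy relaxation loop illustrates why such iterative schemes keep the entire dataset in play. Production migration algorithms are far more elaborate, so the sketch below is purely illustrative:

    # Illustrative only: a Jacobi-style smoothing sweep on a 1-D "trace".
    # After repeated iterations, every output point carries the influence
    # of distant points, which is why the whole dataset must stay in memory.
    import numpy as np

    image = np.zeros(100)
    image[50] = 1.0                    # a single impulse in the volume

    for _ in range(200):               # each sweep mixes neighbouring points
        image[1:-1] = (0.25 * image[:-2] + 0.5 * image[1:-1]
                       + 0.25 * image[2:])

    # The impulse has diffused 40 points away from its origin.
    print(f"Influence at point 10: {image[10]:.3e}")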

The ability of highly scalable, parallel Itanium 2-based servers to process seismic signals reaching out 20 kilometers underground allows petroleum geologists to see massive “earth volumes,” enabling ever-more-precise placement of wells — critically important when drilling and maintenance can cost tens of millions of dollars.

Such vast pictures of subsurface matter were not even possible a decade ago; now they can be produced in a matter of weeks. The competitive advantage — the very necessity — of HPC in the energy sector is clear.

Everywhere Else

EPIC-based 64-bit architectures have also found a natural place in other industries with compute-intensive requirements. In the financial sector, HPC is being applied to economic forecasting and portfolio strategy. Cable operators are using it to deliver rich video-on-demand systems to consumers while media companies utilize the fast memory access and reliability of Itanium 2-based systems to realize the potential of virtualizing their content libraries.

Perhaps the most exciting trend, however, is the application of HPC to industries that lie outside the traditional inner circle of supercomputing. Of course, it is number-crunching that distinguishes HPC from standard memory-intensive business applications. With the ability to store and access vast quantities of data now a given, businesses have to look elsewhere for a sustainable competitive advantage, and many are blazing the trail with new applications of cluster-based supercomputing to analyze customer behavior, identify business trends and manage global supply chains.

Sophisticated data mining solutions such as neural networks employ statistical calculations more akin to iterative physical modeling techniques than to traditional database programming, and they make excellent use of the features unique to Itanium 2-based systems.
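The parallel with physical modeling is easy to see in code. Here is a minimal sketch of the iterative pattern involved; the toy dataset, logistic model and learning rate are all assumptions for illustration, not a production data-mining system:

    # Minimal sketch: a single-layer network fit by gradient descent.
    # Note the repeated floating-point sweeps over the full dataset --
    # the same pattern as an iterative physical simulation.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 4))          # 500 customers, 4 features
    true_w = np.array([0.5, -1.0, 2.0, 0.0])
    y = (X @ true_w > 0).astype(float)         # a behavior to predict

    w = np.zeros(4)
    learning_rate = 0.1
    for _ in range(100):                       # iterative sweeps over all data
        p = 1.0 / (1.0 + np.exp(-(X @ w)))     # logistic activation
        grad = X.T @ (p - y) / len(y)          # gradient over the full dataset
        w -= learning_rate * grad

    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    accuracy = np.mean((p > 0.5) == (y > 0.5))
    print(f"Training accuracy after 100 sweeps: {accuracy:.2%}")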

The number of sophisticated analytical tools available to businesses grows every year: enterprise resource planning, complex business intelligence systems that must be seamless and accessible to ordinary users, and ever-better forecasting of changing market conditions are all applications of HPC for the enterprise that will become more prominent in coming years.

HPC itself is being redefined as more and more enterprise users find ways to use hyper-powered computational resources in the marketplace. While government and institutional users continue to push the envelope of HPC performance, the list of the world’s most powerful supercomputers may one day be dominated by users such as global retailers, marketing companies and manufacturers of consumer packaged goods. Raw power alone will no longer provide a competitive edge when it is so widely available. The leaders will be those who choose the right technology for their industry and apply it in ways their rivals cannot imagine.

-----

The Itanium Solutions Alliance was formed by leading enterprise and technical solutions providers to work together towards a common objective of transitioning the world of RISC-based computing platforms to open, industry standard solutions based on Intel Itanium 2 architecture. Together with leading enterprise software and hardware providers, the Alliance is dedicated to accelerating the adoption and ongoing development of Itanium 2-based solutions. Its membership comprises some of the most influential companies in the computing industry. Visit www.itaniumsolutionsalliance.org for more information.
