Appro CEO Shares HPC Vision

By Tiffany Trader (HPC)

November 6, 2008

In this interview, Appro CEO Daniel Kim describes how Appro has been addressing the needs of high-performance computing customers worldwide to do more with less. He also provides a glimpse into Appro’s vision and opportunities for its supercomputer and high-performance cluster solutions.

Appro has announced a number of high-profile customer wins, particularly with the three U.S. National Labs. Can you recap some of these wins and explain their significance?

Daniel Kim: Appro has been working with Lawrence Livermore National Laboratory (LLNL) for about four years. LLNL is one of Appro’s top customers for supercomputing projects. Appro has installed several cluster solutions at their site, reaching an approximate total peak performance of 480 Teraflops of supercomputing power. These Appro clusters represent the largest Linux clusters installed at LLNL today.

More recently, we have been working with all three National Nuclear Security Administration (NNSA) National Labs — Lawrence Livermore plus Sandia and Los Alamos — as part of the Tri-Lab Linux Capacity Cluster (TLCC) program. Under this TLCC subcontract, Appro has provided nine clusters of various sizes ranging from 144 nodes to 1,152 nodes, with an aggregate peak performance of 620 Teraflops and nearly 97 aggregate Terabytes of memory. These clusters are being used in the Advanced Simulation and Computing (ASC) program and NNSA’s Stockpile Stewardship program. This Appro contract is significant because it marks the first time that the three laboratories have teamed up to purchase and deploy a number of systems sharing a single architecture design.

Also this year, Appro delivered a 95 Teraflop Appro Xtreme-X™ Supercomputer to the Center for Computational Sciences at the University of Tsukuba, one of Japan’s leading academic research institutions. By the way, the Xtreme-X supercomputer is now listed as the second-fastest system in Japan.

Finally, Appro has also completed the delivery of a 38 Teraflop Xtreme-X Supercomputer to upgrade the compute infrastructure of Renault’s Formula One team. The powerful Xtreme-X2 system is installed at the brand-new ING Renault Formula One (F1) Computation Fluid Dynamics (CFD) Centre in the U.K. The supercomputer is being used by the Renault F1 Team to run full-car simulations for their 2009 racing cars.

These multimillion-dollar agreements show that Appro can design, integrate and deliver supercomputing solutions based upon customers’ demanding individual requirements.

Why do you think Appro is finding such success with these and other new customers?

DK: Appro is applying its experience and expertise to design industry-leading supercomputing solutions that enable customers to do more with less, with a reliable and scalable infrastructure that offers a good return on investment.

Appro offers performance, reliability, flexibility and choice, so customers can select high-value supercomputing solutions that meet their technical requirements. We deliver open product architectures that improve bandwidth, performance scalability, reliability and availability while reducing latency and bottlenecks, to ensure the performance of customers’ applications.

Appro’s engineers are focused on making sure that our supercomputing solutions deliver the processing, memory and networking performance needed to accomplish even the most demanding HPC tasks quickly, reliably and affordably. Our optimized infrastructure provides a tremendous amount of added value for customers looking to accelerate their business growth.

What are the benefits of Appro’s Scalable Units design for installations (such as the three U.S. National Labs) with extremely high-capacity computing needs?

DK: Appro’s Scalable Units (SUs) are single hardware design points that enable multiple clusters to be built, based on the same architecture. One of the benefits of Appro’s SU design is that volume purchases of the SUs can achieve economies of scale that rival purchases of a single large system, but with all the flexibility and other advantages of HPC clusters.

Each SU can be used as a highly replicated unit to build clusters of different sizes, depending upon programmatic requirements. Systems ranging in size from 1 SU to 16 SU are possible, although the largest one delivered at LLNL was an 8 SU cluster with 162 Teraflops. Because all the SU-based systems use the same architecture, Appro’s system integration and deployment of these clusters are also simplified.
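
As a back-of-envelope illustration of this linear replication model, the per-unit figure below is derived from the 8 SU / 162 Teraflop cluster mentioned above; the function itself is hypothetical, not part of any Appro tooling:

```python
# Sketch of how aggregate peak performance scales with replicated
# Scalable Units (SUs). The 8 SU = 162 TF data point comes from the
# interview; the ~20.25 TF/SU figure is derived from it.
TFLOPS_PER_SU = 162 / 8  # approximate peak per unit

def cluster_peak_tflops(num_su: int) -> float:
    """Peak performance of a cluster built from num_su identical units."""
    if not 1 <= num_su <= 16:
        raise ValueError("configurations from 1 SU to 16 SU are possible")
    return num_su * TFLOPS_PER_SU

print(cluster_peak_tflops(8))   # 162.0 (the LLNL cluster)
print(cluster_peak_tflops(16))  # 324.0 (largest supported configuration)
```

Because every cluster is assembled from the same design point, estimating a configuration reduces to multiplying by the unit count, which is also what enables the volume-purchase economies of scale described above.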

The Tri-Labs ASC program set an aggressive cost-reduction goal for its program, and Appro’s SU design permitted the economies of scale — in both components and systems — to achieve ASC’s objective. Overall, the ASC program estimated a significant reduction of its total cost of ownership (TCO), thanks to Appro’s flexible, scalable and reliable Linux clusters in conjunction with outstanding project management and customer installation services. In the end, their overall TCO story was very compelling.

Appro has now leveraged the idea of the scalable unit for all current and new supercomputing cluster solutions, including the Xtreme-X supercomputer and its scalable software management package. We refer to the entire SU-based architecture as the Scalable Supercomputing Cluster Architecture.

What kind of customers need or want Appro’s Scalable Supercomputing Cluster Architecture?

DK: Our customers are scientists, engineers and financial analysts who perform highly computational or data-intensive tasks and who need supercomputing resources to power their scientific research and discovery, data modeling or seismic research. Therefore, our customers require high performance, high reliability and excellent system management for a wide range of HPC applications. Specific applications include computational fluid dynamics, computer-aided engineering simulations, petroleum exploration and production, scientific visualization for oil discovery and recovery, and research in seismic, defense and classified projects.

The goal of greater performance at a lower cost has thrust clusters to the forefront of HPC. Still, HPC users are demanding ever-better reliability, availability, manageability, compatibility and power efficiency. Responding to these demands, Appro’s Xtreme-X Supercomputer heralded a brand new architecture for HPC: the Scalable Supercomputing Cluster Architecture. This architecture groups clusters together using the scalable unit design to make a unified, fully integrated system that can be provisioned and managed as a stand-alone supercomputer. We feel that the Appro Xtreme-X Supercomputer and our Scalable Supercomputing Cluster Architecture address our customers’ needs for highly computational and data-intensive supercomputing.

Explain some more about the Appro Xtreme-X Supercomputer series. Why is this product line important?

DK: The Xtreme-X Supercomputer Series shows that Appro has paid close attention to current HPC user “pain points” and the evolving requirements of the technical server market. The Appro Xtreme-X series is ideal for scaling out datacenters from 2.7 Teraflops to more than 1,000 Teraflops of computing power. It delivers significantly reduced TCO, an energy-efficient (green) architecture and a complete lights-out management system to meet demands for scalable performance and high availability.

The Xtreme-X Supercomputer product line is important because it is aligned with the demands of the HPC market today, and it will scale to continue meeting evolving needs in the future. HPC has now moved well beyond its origins in large government and university research sites. Today, HPC is indispensable for large commercial firms and is quickly moving into small and medium-sized enterprises (SMEs) across a broad range of vertical markets. The rise of open, standards-based HPC clusters has made this transformational technology accessible to even small firms and workgroups. Also, for the first time, we are seeing market trend data and examples of how HPC is used today, at sites ranging from leading government and university centers to business and industrial firms, to produce faster, superior innovation and solutions. HPC is a growing market that businesses of all sizes will adopt so they can compete and survive.

Today, the Appro Xtreme-X Supercomputer Series offers the Xtreme-X1 model, based on dual-socket, quad-core Intel® Xeon® processors, and the Xtreme-X2 model, which supports quad-core AMD Opteron™ processors. Both models are based on Appro’s Scalable Supercomputing Cluster Architecture.

Appro intends to continue providing the highly reliable supercomputing platform and added-value features required by HPC customers, while accelerating businesses’ competitive advantage and reducing their TCO.

Appro recently announced the first demonstration of 40 Gb/s InfiniBand supercomputing clusters. What does this mean for the HPC market, and what kinds of customers will benefit?

DK: 40 Gb/s InfiniBand supercomputing clusters address a critical need for higher bandwidth and lower latency in large-scale deployments, with the ability to use any standard PCI Express adapter available today.

Today, server and storage systems are deploying multiple multi-core processors. In these systems, overall platform efficiency and CPU and memory utilization depend increasingly on interconnect bandwidth and latency. For optimal performance, platforms with several multi-core processors can require interconnect bandwidths of more than 10 Gb/s or even 20 Gb/s. Supercomputers that can deliver 40 Gb/s bandwidth and lower latency (helping to ensure that no CPU cycles are wasted due to interconnect bottlenecks) will deliver unparalleled performance for the most demanding applications. They will take HPC to an even higher level of performance.

All HPC applications will benefit from this technology, including bioscience and drug research, data mining, digital rendering, electronic design automation, fluid dynamics and weather analysis. These applications require the highest throughput, to support the I/O requirements of the multiple processes they use for accessing large datasets to compute and store results. All these HPC applications are ideal for 40 Gb/s supercomputing clusters.

What tools and solutions does Appro provide to aid in the management of these complex HPC systems?

DK: We offer the Appro Cluster Engine™ (ACE) management software, which features a complete lights-out management solution. This software suite provides a Web-based management interface that is easy to use, making it possible to control the Appro Xtreme-X supercomputer from any location.

The management modules include Network Management, Server Management, Cluster Management and Storage Management. In addition, the ACE software supports diskless configuration and network failover to achieve maximum reliability, performance and high availability. It supports root file systems with instant provisioning for rapid, standard Linux installs on large diskless systems, allowing 64 to 6,400 blades to boot at the same time. The Appro Cluster Engine management software offers reliability, availability and serviceability (RAS) features in a total software management package.

Appro recently decided to partner with NEC. What does this partnership mean for the two companies and for HPC customers worldwide?

DK: Appro and NEC have been negotiating a partnership agreement for a while. NEC addresses the needs of a wide variety of HPC organizations with its dedicated Europe, Middle East and Africa (EMEA) HPC channel division. For our part, Appro would like to expand our channel geographically. With NEC’s strong technology base in EMEA and Appro’s cluster deployment successes in the HPC market, this partnership provides a sustainable competitive advantage enabling both companies to take a greater share of this growing market segment.

Appro’s Xtreme-X Supercomputer and Appro Cluster Engine management software are now being added and branded as part of NEC’s HPC solution offering in EMEA. Appro will continue to focus its sales efforts in the U.S., but having NEC as a strategic partner marks a breakthrough for Appro’s supercomputers entry into the EMEA HPC market. The partnership will enable Appro and NEC to work together toward a common goal, focusing on reducing complexity of technology integration when deploying and managing integrated solutions – and lowering customers’ TCO.

Are you concerned that the NEC partnership, in which Appro’s core products are marketed as part of NEC’s HPC offerings in the EMEA region, will dilute Appro’s brand awareness or strength?

DK: Instead of viewing this relationship as a dilution of Appro’s brand — which is very strong in the U.S., where we’ll continue to market products under the Appro name — I see this partnership as an extension of Appro’s reach into EMEA HPC markets. From NEC’s point of view, they now can offer flexibility and choice for customers to select high-value supercomputing solutions with reduced TCO.

As a side note, we are happy to announce that Appro has provided a benchmark cluster that is now up and running at the NEC facility in Houston, where most of the benchmarks will take place for EMEA customer opportunities. The arrangement is very convenient for both parties, since Appro also has a sales and service office in Houston. The teams are taking advantage of their proximity to each other to work closely together.

What are some new HPC technology or product directions we should be anticipating from Appro?

DK: The benefits of HPC solutions include higher performance and scalability — and to a lesser extent, investment protection and simplified administration. Appro will continue to deliver cutting-edge HPC technology and solutions to our customers.

We believe that the Xtreme-X Supercomputer will have a huge impact on HPC markets because it addresses most of our customers’ key pain points. Appro is also looking to offer a new line of server solutions early next year that will support the latest x86 processors as well as GPU computing. We are proud of our up-to-date Xtreme-X Supercomputer deliverables and accomplishments this year, and we look forward to further enhancing our overall high-performance computing product offerings and extending our value proposition.
