Dell Unveils Advancements to HPC Portfolio

November 16, 2015

AUSTIN, Tex., Nov. 16 — Dell today unveiled sweeping advancements to its industry-leading high performance computing (HPC) portfolio. These advances include innovative new systems designed to simplify mainstream adoption of HPC and data analytics in research, manufacturing and genomics. Dell also unveiled expansions to its HPC Innovation Lab and showcased next-generation technologies including the Intel Omni-Path Fabric.

HPC is becoming increasingly critical to how organizations of all sizes innovate and compete. Many organizations lack the in-house expertise to configure, build and deploy an HPC system without losing focus on their core science, engineering and analytic missions. As an example, according to the National Center for Manufacturing Sciences, 98 percent of all products will be designed digitally by 2020, yet 95 percent of the center’s 300,000 manufacturing companies have little or no HPC expertise.

“HPC is no longer a tool only for the most sophisticated researchers. We’re taking what we’ve learned from working with some of the most advanced, sophisticated universities and research institutions and customizing that for delivery to mainstream enterprises,” said Jim Ganthier, vice president and general manager, Engineered Solutions and Cloud, Dell. “As the leading provider of systems in this space, Dell continues to break down barriers and democratize HPC. We’re seeing customers in even more industry verticals embrace its power.”

Dell Accelerating Mainstream Adoption of HPC

Dell announced the new Dell HPC System Portfolio, a family of HPC and data analytics solutions that combines the flexibility of custom systems with the simplicity, reliability and value of a preconfigured, factory-built system. The portfolio offers:

  • Simplified design, configuration, and ordering in a matter of hours instead of weeks;
  • Domain-specific designs, tuned by Dell engineers and domain experts for specific science, engineering and analytics workloads using flexible industry-standard building blocks; and
  • Systems fully tested and validated by Dell engineering, with a single point of hardware support and a wide range of additional service options.

New application-specific Dell HPC System Portfolio offerings include:

  • Dell HPC System for Genomic Data Analysis is designed to meet the needs of genomic research organizations, enabling cost-effective bioinformatics centers that deliver results and identify treatments in clinically relevant timeframes while maintaining compliance and protecting confidential data. The platform draws on lessons from Dell’s relationship with the Translational Genomics Research Institute (TGen), which is helping clinical researchers and doctors expand the reach and impact of the world’s first Food and Drug Administration-approved precision medicine trial for pediatric cancer. TGen has been able to improve outcomes for more patients by creating targeted treatments at least one week faster than was previously possible.
  • Dell HPC System for Manufacturing is designed for customers running complex manufacturing design simulations on workstations, clusters or both. Applicable use cases include Finite Element Analysis for structural analysis using ANSYS Mechanical, and Computational Fluid Dynamics for predicting fluid behavior in designs using ANSYS Fluent or CD-adapco STAR-CCM+.
  • Dell HPC System for Research is designed as a foundation, or reference architecture, for baseline research systems and numerous applications involving complex scientific analysis. This standard cluster configuration gives Dell’s customers and systems engineers a starting point from which to quickly develop systems matched to the unique needs of a wide variety of research agendas.

Dell Accelerating HPC Technology Innovation and Partnerships

Dell also showcased new investments in capabilities, partnerships, programs and technologies designed to chart a course for advancing innovation from the desktop to petaflops with future-ready systems.

Dell announced a new expansion of its Dell HPC Innovation Lab in cooperation with Intel specifically for support of its Intel Scalable System Framework. This multi-million dollar expansion to the Austin, Texas, facility includes additional domain expertise, infrastructure and technologists. The lab is designed to unlock the capabilities and commercialize the benefits of advanced processing, network and storage technologies as well as enable open standards across the industry.

Dell and Intel Partnership

Beyond becoming the first major original equipment manufacturer (OEM) to join the Intel Fabric Builders program, Dell is working closely with Intel to support its Intel Scalable System Framework, which includes Intel Omni-Path Fabric technology, next-generation Intel Xeon processors, the Intel Xeon Phi processor family, and the Intel Enterprise Edition for Lustre. Announcements include:

  • New Dell Networking H-Series switches and adapters for PowerEdge servers featuring the Intel Omni-Path Architecture. These provide a next-generation fabric technology designed for HPC deployments. The architecture includes advanced features such as traffic flow optimization, packet integrity protection and dynamic lane scaling, allowing finer-grained control at the fabric level to enable high resiliency, high performance and optimized traffic movement.
  • Dell and Intel support for the Linux Foundation’s OpenHPC community. The community is designed to provide a common platform on which end users can collaborate and innovate, reducing the complexity of installing, configuring and maintaining a custom HPC software stack and easing the path to exascale.
  • Dell will showcase many components of the Intel Scalable System Framework, including the Intel Omni-Path Architecture, Intel Enterprise Edition for Lustre and the Intel Xeon Phi processor family. In addition, Dell is hosting numerous confidential advisory sessions with customers at the show, gathering insights to help optimize its implementation of systems using next-generation Intel Xeon Phi.

“We’re excited to collaborate with Dell to bring advanced systems to market early next year using the Intel® Scalable System Framework,” said Charles Wuischpard, vice president and general manager of HPC Platform Group at Intel. “Dell’s position as our largest and fastest-growing customer for Intel Enterprise Edition for Lustre, their work on Omni-Path Architecture and next-generation Intel® Xeon Phi™, and their initiatives to expand the Dell Innovation Lab demonstrate their commitment to rapidly expanding the ecosystem for HPC.”

Dell and Mellanox Partnership

Dell and Mellanox Technologies have a long history of collaboration and leadership in the HPC community. Together they have published numerous industry best practices and application case studies with the HPC Advisory Council demonstrating superior application scalability and performance. Dell and Mellanox have been contributing HPC clusters to the HPC Advisory Council for several years, providing the HPC community with best-in-class systems for application optimization and overall HPC outreach and education.

Dell and Mellanox announced an additional investment in Dell’s existing HPC Innovation Lab to provide an end-to-end EDR 100Gb/s InfiniBand supercomputer system. The system is designed to showcase extreme scalability by leveraging the offloading capabilities and advanced acceleration engines of the Mellanox interconnect, as well as to provide application-specific benchmarking and characterization for customers and partners.

“With this new investment, Dell’s HPC Innovation Lab will now enable new levels of applications efficiency and innovative research capabilities. Together we will help build the solutions of the future,” said Gilad Shainer, vice president of marketing, Mellanox Technologies.

Customer and Community Momentum

Dell announced today that it is continuing to deploy HPC solutions across the globe to help organizations drive scientific advancement as well as economic and global competitiveness. HPC is high on many countries’ national agendas because it is vital to interests such as science, industrial productivity, climate change, energy and security, among other areas. Additionally, these HPC initiatives are increasingly converging with big data and cloud initiatives.

The San Diego Supercomputer Center (SDSC) at the University of California, San Diego recently launched its new Comet petascale supercomputer powered by Dell PowerEdge C6320 servers. Comet is designed for modest-sized projects in fields such as economics, genomics and the social sciences, which together account for a large share of research activity and potential scientific impact.

“Comet provides ‘HPC for the 99 percent’—serving as a gateway to discovery for a much larger research community so we needed a solid hardware foundation,” said Michael Norman, SDSC director and principal investigator for the Comet project. “We chose the Dell PowerEdge C6320s because of Dell’s reputation in the HPC space, its leading hardware design and innovations, and its ease of deployment. We’re excited to be working with Dell to help accelerate discovery by expanding access to researchers who have not traditionally relied on supercomputers.”

The Texas Advanced Computing Center at The University of Texas at Austin provides comprehensive advanced computing resources and support services to researchers in Texas and across the U.S. The center specializes in high-performance computing, scientific visualization, data analysis and storage systems, software, research and development and portal interfaces.

“The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies,” said Tommy Minyard, director of Advanced Computing Systems at TACC. “Dell technology and support is integral to the core of several of our supercomputing clusters, including Stampede and Wrangler. With Dell, we can push the envelope of computational capabilities, enabling breakthroughs never before imagined.”

Two years after the HiPerGator supercomputer was introduced at the University of Florida, it is now being expanded to add capacity and capabilities with 30,000 cores in approximately 1,000 nodes made by Dell. HiPerGator performs complex calculations and data analyses for researchers to find life-saving drugs, make decades-long weather forecasts and improve armor for troops.

“The adoption of HiPerGator by the university community has been rapid and across all disciplines, making it clear that an expansion of capacity would be needed to meet current demand and expected growth,” said Dr. Erik Deumens, Director of UF Research Computing, University of Florida. “This expansion is a huge undertaking and we are working with Dell to give our researchers’ high-impact research projects a competitive edge with faster processing.”

The Centre for High Performance Computing (CHPC) in Cape Town, South Africa is upgrading its system to provide more simulation and data-centric science capabilities. For this upgrade, the Centre had four constraints: physical data center space, power, cooling and budget.

“The key aims of the petascale system are to support and enable CHPC to remain globally competitive while accelerating Africa’s socio-economic uplift,” said Dr. Happy Sithole, Director, Centre for High Performance Computing. “The Dell solution was able to meet the performance requirement within our constraints as well as provide a roadmap for further scale – scalability and flexibility are key tenets of the system’s design.”

Jetstream, scheduled to enter production in January 2016 at Indiana University, is a new and creative approach to delivering computational resources to an increasingly diverse community of researchers and educators. A user-friendly cloud environment, Jetstream is designed to give researchers access to interactive computing and data analysis resources on demand, whenever and wherever they want to analyze their data.

“Jetstream is a first-of-its-kind cloud environment aimed for everyday use by practicing scientists. Jetstream will bring to XSEDE and the national research community a user-friendly cloud environment that allows researchers to analyze their data now – whenever now is for that researcher,” said Craig Stewart, Executive Director, Indiana University Pervasive Technology Institute and Associate Dean, Research Technologies, Indiana University. “With Dell hardware at the core, we can provide interactive computing and data analysis resources for science and engineering research across all areas of National Science Foundation-supported activity.”

Availability

  • The Dell HPC System for Genomic Data Analysis is available today.
  • The Dell HPC Systems for Manufacturing and Research will be available in early 2016.
  • The Dell Networking H-Series switches, adapters and software based on the Intel Omni-Path Architecture will be available in the first half of 2016.

About Dell

Dell Inc. listens to customers and delivers innovative technology and services that give them the power to do more. For more information, visit www.dell.com.

Source: Dell
