June 16, 2009
French-owned computer maker Bull has unveiled a new family of HPC servers based on a novel blade architecture. Branded as "bullx," for extreme computing, the blades are designed for speed, density, energy efficiency and ease of management. The new offering will take over the HPC mantle from Bull's NovaScale servers, which will continue to be sold to enterprise customers for more standard computing workloads.
Bull has been building its HPC capabilities for the last few years, and in the past 18 months has acquired two companies, Serviware and science + computing ag (s+c), to add to its portfolio. Paris-based Serviware brought its integration expertise in deploying complex cluster systems, while Stuttgart-based s+c contributes its ability to help customers manage complex HPC infrastructure. Now, with bullx, the company has a purpose-built HPC architecture to distinguish itself from commodity cluster vendors.
Unlike many other blade-based architectures, which are designed to handle both enterprise and HPC workloads, Bull built the new servers specifically with high performance computing in mind. "This system was designed from the start to support HPC applications...with no compromise on performance," said Fabio Gallo, vice president of the extreme computing business unit at Bull. The architecture is meant to scale from single-chassis systems all the way up to top-of-the-line supercomputers, where just 100 bullx racks can deliver a petaflop of computing horsepower.
The bullx blades come in two flavors: CPU-only and GPU-accelerated. Both versions are based on dual-socket Nehalem EP (Xeon 5500) nodes, but the accelerator blades include up to two NVIDIA Tesla M1060 GPUs on board. CPU-only and CPU-GPU blades can be mixed within a system, but only the CPU blade is currently available. Bull plans to launch the GPU-equipped version in November.
The basic building block for a bullx system is a 7U 18-blade chassis. A CPU-only configuration delivers up to 1.7 teraflops and an entire 42U rack will yield 10 teraflops (108 nodes of dual-socket, quad-core). The interconnects are managed through the backplane, so there are no external cables save for the power supplies. An optional 36-port InfiniBand switch can be slotted into the chassis for cluster connectivity.
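The quoted figures hold together as a back-of-the-envelope calculation. The sketch below assumes (these are not stated in the article) roughly 2.93 GHz Nehalem EP parts and 4 double-precision flops per core per cycle:

```python
# Peak-performance check of Bull's bullx figures.
# Assumed (not from the article): ~2.93 GHz clock, 4 DP flops/cycle/core
# on Nehalem (one SSE add + one SSE multiply, 2 doubles each, per cycle).

GHZ = 2.93               # assumed clock speed
FLOPS_PER_CYCLE = 4      # DP flops per core per cycle on Nehalem
CORES_PER_NODE = 2 * 4   # dual-socket, quad-core

node_tf = GHZ * FLOPS_PER_CYCLE * CORES_PER_NODE / 1000  # teraflops per node
chassis_tf = 18 * node_tf    # 18 blades per 7U chassis
rack_tf = 108 * node_tf      # 108 nodes per 42U rack (six chassis)

print(f"chassis:   {chassis_tf:.2f} TF")              # ~1.69, matching the quoted 1.7
print(f"rack:      {rack_tf:.1f} TF")                 # ~10.1, matching the quoted 10
print(f"100 racks: {rack_tf * 100 / 1000:.2f} PF")    # ~1 petaflop
```

With those assumptions, a chassis lands at about 1.69 teraflops and a rack at about 10.1, consistent with the round numbers Bull quotes.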
The Bull engineers maxed out just about every system component for a dual-socket set-up, especially I/O. A single node incorporates two Intel Tylersburg chipsets, each of which provides one PCIe x16 and one PCIe x8 interface. This enables each node to drive up to two on-board QDR InfiniBand ConnectX chips as well as two GPU accelerators. The ability to support dual on-board QDR InfiniBand and dual on-board GPUs is probably the most distinguishing hardware feature of the bullx design, and makes the servers among the most advanced being sold into the HPC market.
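The lane split makes sense as a bandwidth budget. A rough sketch, assuming PCIe 2.0 rates (~500 MB/s per lane per direction) and standard QDR InfiniBand encoding; the slot-to-device pairing shown is an inference from the article, not a Bull block diagram:

```python
# Rough per-node PCIe bandwidth budget for a bullx accelerator blade.
# Assumptions: PCIe 2.0 (~0.5 GB/s per lane per direction); each of the
# two Tylersburg IOHs exposes one x16 link (GPU) and one x8 link (IB).

GB_PER_LANE = 0.5  # PCIe 2.0, per direction

slots = {"gpu0": 16, "gpu1": 16, "ib0": 8, "ib1": 8}  # inferred pairing

for dev, lanes in slots.items():
    print(f"{dev}: x{lanes} -> {lanes * GB_PER_LANE:.0f} GB/s per direction")

# A QDR InfiniBand port carries ~4 GB/s of data (40 Gb/s signalling less
# 8b/10b encoding overhead), so an x8 Gen2 link is a sensible match for
# each ConnectX chip, leaving a full x16 link for each Tesla M1060.
```

In other words, each InfiniBand port gets a host link that matches its wire speed, and neither GPU has to share its x16 connection.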
The memory subsystem is also high-end. Each socket supports three channels of DDR3 memory for a total of six, and up to 12 DIMMs can be loaded on each node, for a maximum memory capacity of 96 GB (using 8GB DIMMs).
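The capacity figure follows directly, and a peak bandwidth estimate can be added if one assumes DDR3-1333 parts (an assumption; the article does not quote a memory speed):

```python
# Per-node memory arithmetic for a bullx blade.
# Capacity comes straight from the article; bandwidth assumes DDR3-1333,
# which Bull does not confirm.

DIMMS_PER_NODE = 12
capacity_gb = DIMMS_PER_NODE * 8           # 8 GB DIMMs -> 96 GB

channels = 2 * 3                            # two sockets x three channels
bw_per_channel = 1333e6 * 8 / 1e9           # 64-bit channel, ~10.7 GB/s

print(f"capacity: {capacity_gb} GB")                          # 96 GB
print(f"peak bandwidth: ~{channels * bw_per_channel:.0f} GB/s per node")
```

Under that assumption a node tops out around 64 GB/s of aggregate memory bandwidth alongside its 96 GB capacity.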
A Bull-engineered ultra capacitor module (UCM) can also be included with each chassis to ride out power brown-outs of up to 250 ms. The UCM also eliminates the need for a UPS on each individual node, which Bull says saves about 15 percent in power costs. A water-cooled rack door can be installed to save even more. Bull estimates water cooling cuts cooling costs by about 75 percent, and cooling can account for a third to a half of a typical system's total power consumption. Racks can be air cooled as well, but for denser configurations (and especially when GPU accelerators are present) water cooling is going to be the way to go.
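Combining Bull's two cooling figures gives the overall saving range, which is worth making explicit:

```python
# Illustrative arithmetic for Bull's cooling claim: if cooling is a third
# to a half of total power draw, and a water-cooled door removes ~75% of
# that, the saving works out to 25-37.5% of total power.

WATER_COOLING_CUT = 0.75  # fraction of cooling cost removed, per Bull

for cooling_share in (1/3, 1/2):
    saving = WATER_COOLING_CUT * cooling_share
    print(f"cooling at {cooling_share:.0%} of total -> "
          f"saves {saving:.1%} of total power")
```

So for a system where cooling is half the power bill, the water-cooled door could plausibly trim total consumption by more than a third.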
On the system software side, bullx comes with a Linux-based cluster suite, derived mainly from open source components (although Microsoft Windows HPC Server is also an option). The bullx cluster suite provides the usual job scheduling and resource management, libraries, Lustre file system support, and interconnect access. It also offers installation/configuration support as well as cluster diagnostics, monitoring and control. The cluster suite was designed for fast installation and updates, and provides seamless management of systems using a mixture of CPU-only and GPU-accelerated nodes.
Bull is positioning the new HPC blades at the high end of the HPC market, covering the top third of the departmental HPC segment, and all of the divisional and supercomputing segments. At this point, the company is confining most of its efforts to Western Europe, where it's already made inroads selling HPC systems to companies like Airbus, Dassault Aviation, Total, CEA, and the Jülich Supercomputing Center, among others. Bull estimates revenue in the segments it is going after at EUR 1.2 billion in 2009, growing to EUR 1.6 billion in 2013. The company's goal is to grab 10 percent of that market. "We definitely think we're on track in reaching that objective," said Bruno Pinna, Bull's group marketing director.
Bull is also targeting HPC opportunities in South America and Africa, and will be looking to expand its presence there and in other emerging markets. But for the time being, the company appears reluctant to expand into North America and tackle the likes of IBM, HP, and Dell on their home turf.
Two customers have already signed up for bullx machines: CEA, the French authority for nuclear energy, and the University of Cologne in Germany. Both are CPU-only deployments, although CEA recently completed installation of a 300-teraflop Bull supercomputer based on NovaScale servers and NVIDIA GPUs.
Pricing on the new gear is not publicly available, although Bull says the new bullx systems will generally cost from a few tens of thousands of Euros for departmental, single chassis configurations, to several tens of millions of Euros for petascale-class systems.