RAID Selected as North American Gold Partner for BeeGFS

November 3, 2016

ANDOVER, Mass., Nov. 3 — RAID Inc., a custom technical computing solutions company, today announced it has been selected as a North American Gold Partner for the high performance parallel file system BeeGFS by Fraunhofer HPC spin-off ThinkParQ GmbH. The team behind BeeGFS (formerly FhGFS) sees this strategic partnership as an opportunity for RAID Inc. to broaden its portfolio of high performance computing and Big Data infrastructure solutions across diverse markets such as Genomics, Drug Discovery, Research, Semiconductors, and Financial Services. RAID Inc. will be exhibiting the parallel cluster file system’s storage performance at the Supercomputing Conference (SC16), the HPC community’s flagship event, in Salt Lake City Nov. 13-18, 2016, at booth 809.

BeeGFS, available as open source, was designed to address I/O intensive workloads with a focus on performance, ease of use, and simplified manageability for high performance computing without presenting a TCO (total cost of ownership) burden. BeeGFS transparently spreads user data across multiple servers; therefore, by increasing the number of servers and disks in the system, IT admins can scale performance and capacity seamlessly from small clusters up to enterprise-class systems with thousands of nodes.

“BeeGFS is a file system dedicated to delivering maximum I/O performance to customers,” said Sven Breuner, CEO of ThinkParQ. “Leveraging their twenty plus years of technical computing experience in solution design, RAID Inc. is positioned to impact the HPC market by deploying all-flash systems with BeeGFS.”

With demand for performance-centric, highly available data on the rise, a flexible architecture like BeeGFS eliminates data silos and storage complexity with a solution built specifically for multi-tenancy and cloud. BeeGFS is based on a lightweight architecture, and file system instances can be created on a per-job basis with the BeeOND (BeeGFS On Demand) tool. BeeOND is designed to spin up new instances on the fly across all compute nodes assigned to a particular job, aggregating the performance and capacity of the internal SSDs or hard disks in those nodes for the duration of the job. This provides an elegant form of burst buffering that is particularly useful in cloud environments and temporary scratch-data scenarios.
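As a rough illustration of the per-job workflow described above (the nodefile, local SSD path, and mount point below are hypothetical placeholders; the command names and flags follow the BeeOND documentation), a temporary BeeGFS instance is started before a job and torn down afterward:

```shell
# Start an on-demand BeeGFS instance across the nodes allocated to this job.
# $NODEFILE lists one hostname per line (typically provided by the scheduler);
# /local/ssd is each node's internal SSD scratch area (hypothetical path);
# /mnt/beeond is where the temporary parallel file system is mounted.
beeond start -n "$NODEFILE" -d /local/ssd -c /mnt/beeond

# ... run the compute job against /mnt/beeond ...

# Tear the instance down when the job finishes:
# -L deletes any remaining data, -d unmounts clients and stops the daemons.
beeond stop -n "$NODEFILE" -L -d
```

Because the instance lives only as long as the job, the internal drives of the compute nodes act as an aggregated burst buffer without any changes to the permanent capacity storage.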

“Partnering with ThinkParQ, we are able to leverage one of the highest performing file systems for write-dominated workloads,” said Robert Picardi, CEO of RAID Inc. “At RAID Inc. we continue to build on our HPC heritage by introducing all-flash technical computing solutions and innovative performance-tuned parallel file systems. BeeGFS allows us to achieve fierce storage performance metrics.”

Incorporating the performance-tuned BeeGFS parallel storage platform into its line of all-flash Fusion servers with dual NVMe drives per node, RAID Inc. can increase performance, IOPS, and data bandwidth efficiency. NVMe flash storage helps accelerate the fully scalable metadata architecture and improves responsiveness in environments with a shared single namespace for capacity storage. The RAID Inc. Ability EBOD Series offers one of the most cost-effective and storage-dense options, with 84 drive bays enabling massive capacities of up to 840TB per unit (210TB per rack U of height).

RAID Inc. builds upon its HPC legacy with the introduction of the BeeGFS parallel cluster file system platform for distributed applications that need fast access to large amounts of data, achieving industry-leading price/IOPS ratios. RAID Inc. HPC and Big Data solutions bring a vendor-agnostic approach to disaggregated scale-out platforms and hyperconverged appliances, with an architecture that couples compute, networking, and storage designs to remove data storage and data motion barriers.

Performance-centric organizations that seek a holistic approach can leverage a RAID Inc. technical computing solution built on the BeeGFS platform today, empowering data center managers to manage a scale-out data center infrastructure with ease. The solution is currently available with engineer-driven guidance and 24×7 concierge support from RAID Inc. for organizations looking to maximize data center efficiency and deliver a lower TCO in technical computing environments.

About BeeGFS

The BeeGFS parallel file system was developed specifically for performance-critical environments, with a strong focus on easy installation and high flexibility, including converged setups where storage servers are also used for compute jobs. By increasing the number of servers and disks in the system, the performance and capacity of the file system can simply be scaled out to the desired level, seamlessly from small clusters up to enterprise-class systems with thousands of nodes. BeeGFS is available as a free download from www.beegfs.com; professional support is available from ThinkParQ.

About RAID Incorporated

RAID Inc. was founded in 1994 to deliver end-to-end performance-driven technical computing and storage solutions. The company has earned industry praise for providing platform-agnostic technical guidance in high performance computing (HPC), big data, cloud, and software-defined data centers in the most efficient, reliable, and cost-effective manner. The world’s leading research facilities, government, life science, financial, healthcare, energy, and cloud service providers can leverage the RAID Inc. team of engineers’ extensive academic, research lab, and commercial expertise, which makes RAID Inc. a trusted industry leader. More information can be found at www.RAIDinc.com, by calling +1 (800) 330-7335, or via @RAIDinc.


Source: RAID
