Open Compute Project Celebrates 10th Anniversary with Release of White Paper at OCP China Day

August 5, 2021

SAN JOSE, Calif., Aug. 5, 2021 — The Open Compute Project (OCP) Foundation recently celebrated its first decade at the third OCP China Day 2021, hosted by Inspur Information, with nearly a thousand engineers and data center professionals in attendance. As one of the major innovation drivers of the data centers of tomorrow, open compute uses global collaboration to solve some of the biggest challenges in building sustainable data center infrastructure – energy consumption, high-speed network communication, intelligent operation and maintenance, and circular utilization.

Using the theme of “Open Compute for a New Decade: Decarbonization, Efficiency, Application”, OCP China Day 2021 hosted technology experts from 23 leading technology companies, such as Alibaba, Baidu, Enflame, Intel, Inspur, Seagate and Western Digital. The companies delivered over 50 presentations, sharing their innovations in data center infrastructure over the past decade, their exploration of artificial intelligence (AI), edge computing and other emerging technologies, and the application of these technologies in the telecommunications and financial sectors.

The Open Computing White Paper Released

Commemorating the 10th anniversary of the Open Compute Project, Inspur Information joined with Omdia, a world-renowned market research firm, to release the Open Computing White Paper at the summit. According to the white paper, open compute is a new mode of industrial collaboration, conducted chiefly through three major open compute organizations – OCP, the Open Data Center Committee (ODCC) and Open19 – in which participants share products, specifications and intellectual property for IT infrastructure, speeding the adoption of innovative data center technologies.

As the ecosystem evolves, open infrastructure is expected to capture an increasingly large market share, with open compute serving as a core driver of IT infrastructure innovation. Omdia predicts that 40% of servers worldwide will be based on open standards by 2025. Meanwhile, the rack server, as the core project of open compute, is expected to become the mainstream form of data center computing infrastructure in the near term.

The white paper also shows that open compute builds consensus through open collaboration, promoting standardization, ecosystem development and other effective measures that accelerate the adoption of innovative technologies. For instance, in edge computing, OCP and ODCC set up the Open Edge and Open Telecom IT Infrastructure (OTII) project teams, respectively, to facilitate the convergence of server and telecommunications specifications.

Similarly with AI, the OCP Accelerator Module (OAM) project launched by OCP helps standardize accelerator modules, simplifies AI infrastructure design, shortens the R&D cycle for AI co-processors and accelerates the industrialization of designs and products by innovative chip companies. Liu Jun, Vice President of Inspur Information, said: “The golden age of system architectures driven by AI is almost upon us. Multiple AI chips provide diversified computing capabilities for different requirements and opportunities for chips to be used for multiple purposes. Thanks to the efforts of the wider community, open compute bolsters diversified computing integration with open standards, tapping into the potential of AI innovation.”

OCP in the Past Decade: Achieve Double Success in Ecosystem and Technology, Break the Boundary with Collaboration

Founded in 2011, the OCP Foundation was the first open compute organization, and over the past decade it has ushered in successes in both ecosystem and technology.

Over the past ten years, the OCP Foundation has grown from an organization created by a few enterprises into the world’s largest open compute community, boasting over 250 members, around 5,000 engineers and more than 16,000 participants. From the early days of global open compute, the OCP has developed continuously while attracting leading member companies from around the world, including Alibaba, ARM, Baidu, Facebook, Google, HPE, Intel, Inspur, Microsoft, NVIDIA, Tencent, and others. The OCP has gradually evolved into an industrial ecosystem that supports the standardization of data centers and assists with product development.

Innovative technologies empowered by the OCP can be seen everywhere in data centers, consistently advancing the construction of green, efficient facilities. For instance, the upcoming Open Rack 3.0 brings major improvements in storage space, load capacity, power supply and liquid cooling, helping data centers run diverse AI workloads at larger scale. In high-speed network communication, OCP Mezz (NIC) has become the standard for I/O, and the latest NIC 3.0 technical specification adds hot-swap support and PCIe Gen5, meeting the high-density deployment requirements of compute-intensive applications. Meanwhile, OCP is breaking the boundaries of data center infrastructure by extending into heterogeneous computing and edge computing. Currently, OCP runs 23 technology projects across nine categories, including server, network, storage, hardware management, rack and power, AI, and edge computing.

Steve Helvie, Vice President of Channel Development at OCP, stated: “OCP has achieved incredible success in the past decade, making remarkable progress in community size, member diversity and supply chain improvement, and serving as one of the most influential open-source communities in the world. But its most surprising achievement is the changing mindset of suppliers, moving from development behind closed doors to a collaborative, open source approach. The move has eliminated technological barriers to create a new global collaboration mode that addresses global issues like carbon emissions and the circular economy.”

Open Compute for a New Decade: Decarbonization, Efficiency, Adoption

Environmental issues tied to carbon emissions – peaking carbon dioxide emissions, carbon neutrality and the “dual carbon” goals – have become top priorities for governments and societies worldwide. How the next generation of data centers can become more environmentally friendly and efficient topped the discussion at the summit.

Rebecca Weekly, Chairperson of OCP and Vice President, General Manager, and Senior Principal Engineer of Hyperscale Strategy and Execution at Intel Corporation, said: “The future of computing is going to be very interesting, as our world is becoming more heterogeneous and disaggregated. A large share of computing is consumed by ICT, and cloud services and telecommunications companies consume over 10% of the world’s electricity for their computing needs. Therefore, we need to cooperate to address the increasingly complicated and large-scale global issues of computing consumption and the environment.”

Dr. Zhang Weifeng, Chief Scientist of Heterogeneous Computing at Alibaba Cloud Intelligence Infrastructure, said: “In the past decade, we witnessed exponential growth in computing requirements from data centers. The future challenge for open compute lies in constructing scalable, sustainable infrastructure to satisfy ever-growing computing demands, so low-power, high-efficiency computing will become a new driver. In the end, energy conservation will help reduce carbon emissions, in line with the goal of carbon neutrality that most countries have set for the next two or three decades.”

Compared with traditional designs, open compute offers great advantages in lowering power consumption and operating costs. For example, Facebook’s rack servers reduce capital expenditure by 45% and operating costs by 24% while improving energy efficiency by 38%. Within ODCC, Baidu saw an enormous increase in energy efficiency and lowered its total cost of ownership (TCO) by 10% by adopting its Scorpio rack servers. The power usage effectiveness (PUE) of all data centers built and operated by Baidu is no more than 1.3, and the annual average PUE of its most recently completed data centers is around 1.2.
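For readers unfamiliar with the metric, PUE is defined as total facility energy divided by the energy delivered to IT equipment, so a value of 1.0 would mean zero overhead. A minimal sketch of the calculation in Python; the figures below are illustrative only, not drawn from the white paper:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 means every kilowatt-hour reaches IT gear; values above 1.0
    reflect overhead such as cooling, power conversion and lighting.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative example: a facility drawing 13 GWh/year to deliver
# 10 GWh/year to IT equipment has a PUE of 1.3.
print(round(pue(13_000_000, 10_000_000), 2))  # prints 1.3
```

Under this definition, the sub-1.3 figures cited above mean facility overhead (cooling, power distribution and the like) adds less than 30% on top of the IT load.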

According to Liu Jun, VP and GM of AI & HPC at Inspur Information, the goal of OCP is in line with China’s development goals for future data centers. At present, China requires a PUE below 1.3 for new large data centers and imposes higher technical requirements for green energy conservation, such as high-efficiency, high-density integrated IT equipment, efficient cooling systems (including liquid cooling) and efficient power supply systems (including high-voltage direct current). These requirements align closely with OCP innovation projects, mapping a clear path for open compute in the next decade.

Meanwhile, as global telecommunications operators rapidly adopt open hardware, open software and disaggregation, open compute will make further headway into other sectors – financial services, the public sector, traditional medical institutions and more – over the next decade. As users in traditional sectors change their attitudes and embrace open-source hardware, major transformations in open compute can be expected.

Jean-Marie Verdun, Senior Strategist for Open Platform at HPE, said: “The next decade will be as innovative as the last one. OCP continuously iterates to meet the requirements of users at ultra-large-scale data centers, and high-end users have adopted these computing technologies widely. However, those high-end users are not the only players in the market. Industry users and small- and medium-sized enterprises should also be able to access deep innovation and achieve sustainable technology development. Although these users may not have the scale of larger companies, they can also enjoy the benefits of open compute infrastructure.”

Already, the telecommunications, finance, gaming, e-commerce, medical, automotive manufacturing and other industries are looking to deploy IT infrastructure in line with open compute standards. Omdia predicts that non-internet industries will make up 21.9% of the market by 2025, up from 10.5% in 2020.

Gong Huiqin, Senior Manager of the Basic Technology Laboratory at the Industrial and Commercial Bank of China (ICBC), noted that, under the goals of carbon neutrality and peak carbon dioxide emissions and the pressures of day-to-day operation and maintenance, future banking data centers will be able to achieve safe, reliable, green, energy-efficient and convenient operation and maintenance through open compute combined with liquid cooling, separation of storage and computing, and automated operation and maintenance.

The past decade has seen a growing number of companies embracing open compute, which can be attributed not only to its unique technical advantages but also to its ingenious design philosophy. As the open compute ecosystem continues to improve and develop, innovation boundaries will be pushed further and technological integration will exceed expectations. Technology reform driven by open compute may yield next-generation data centers beyond our imagination.

To download the Open Computing White Paper, please click here.

About Inspur Information

Inspur Electronic Information Industry Co., LTD is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world’s top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to https://www.inspursystems.com/.


Source: Inspur Electronic Information Industry Co., LTD
