Grid Initiatives Part 2

By Wolfgang Gentzsch, D-Grid, Duke University, and RENCI

February 5, 2007

In the first part of this article, we focused mainly on the major results of our study of large community grid initiatives: the lessons learned and the recommendations for those who want to design, build and run similar grid infrastructures. Here we present additional general information about the six grid initiatives: ChinaGrid, D-Grid, EGEE, NAREGI, TeraGrid, and the UK e-Science Initiative. This article is a summary of the full report, which can be downloaded via the weblink provided at the end.

The ChinaGrid

In 2002, the Chinese Ministry of Education (MoE) launched the largest grid project in China, called ChinaGrid, aiming to provide a nationwide grid computing platform and services for research and education among 100 key universities. The vision for the ChinaGrid project is to deploy the largest, most advanced and most practical grid computing environment in the country. The first phase of ChinaGrid ran from 2003 to 2005, with 12 key universities involved initially (20 by the end of 2004). At that time, the systems in the grid had an aggregate performance of about 6 Tflops, with 60 TB of storage.

The underlying infrastructure for ChinaGrid is CERNET, the China Education and Research Network, which began operation in 1994 and covers more than 800 universities, colleges and institutes in China. Currently, it is the second largest nationwide network in China. The CERNET backbone currently runs at 2.5 Gbps and connects seven cities that serve as local network centers; the local backbones run at 155 Mbps.

The focus of the first stage of ChinaGrid is on the compute grid platform and on applications (e-science). These applications come from a variety of scientific disciplines, from life science to computational physics. The second stage of the ChinaGrid project runs from 2007 to 2010, covering 30 to 40 key universities in China. The focus will extend from computational grid applications to an information service grid (e-information), including applications such as a distance learning grid and a digital Olympic grid. The third stage will run from 2011 to 2015, extending the coverage of the ChinaGrid project to all 100 key universities. The focus of the third-stage grid applications will be even more diverse, including instrument sharing (e-instrument).

The underlying common grid computing middleware platform for the ChinaGrid project is called the ChinaGrid Supporting Platform (CGSP), which supports all three of the above-mentioned stages: e-science, e-information, and e-instrument. CGSP integrates all kinds of resources in education and research environments, making the heterogeneous and dynamic nature of the resources transparent to the users, and providing high-performance, highly reliable, secure, convenient and transparent grid services to the scientific computing and engineering research communities. CGSP provides both a ChinaGrid service portal and a set of development environments for deploying various grid applications.

The current version, CGSP 2.0, is based on Globus Toolkit 4.0, and is WSRF [4] and OGSA [3] compatible. The previous version, CGSP 1.0, was released in October 2004 with five main building blocks: Grid Portal, Grid Development Toolkits, Information Service, Grid Management (consisting of Service container, Data manager, Job manager, and Domain manager), and Grid security.
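
As an illustration of what "based on Globus Toolkit 4.0" means for users, the following is a minimal sketch of submitting a simple job to a GT4 WS-GRAM service, the kind of back end a CGSP job manager builds on. It assumes a GT4 client installation and a valid grid proxy; the factory host name is a hypothetical placeholder, not an actual ChinaGrid endpoint.

    # Minimal sketch: submit a command to a Globus Toolkit 4.0 WS-GRAM factory
    # using the standard globusrun-ws client. Assumes GT4 client tools are on the
    # PATH and that a valid grid proxy already exists.
    import subprocess

    def submit_gt4_job(factory_host: str, command: str) -> str:
        """Submit a simple command to a GT4 WS-GRAM factory and stream its output."""
        result = subprocess.run(
            ["globusrun-ws", "-submit", "-streaming",  # -streaming returns stdout/stderr
             "-F", factory_host, "-c", command],
            capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        # "grid.example.edu" is a placeholder; a real CGSP portal would resolve
        # the target resource through its Information Service and Domain manager.
        print(submit_gt4_job("grid.example.edu", "/bin/hostname"))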

EGEE and EGEE-II, Enabling Grids for E-sciencE

EGEE-II is the second phase of a four-year program. The aim of the first phase, EGEE, was to build on recent advances in Grid technology and develop a service Grid infrastructure, providing researchers in academia and industry with access to major computing resources, independent of their geographic location. The EGEE project also focuses on attracting a wide range of new users to the Grid. The project concentrates primarily on three core areas:

  • The first area is to build a consistent, robust and secure Grid network that will attract and incorporate additional computing resources on demand.
  • The second area is to continuously improve and maintain the middleware in order to deliver reliable services to users.
  • The third area is to attract new users from industry as well as science and ensure they receive the high standard of training and support they need.

The EGEE Grid is built on the EU Research Network GÉANT and exploits Grid expertise generated by many EU, national and international Grid projects to date. In its first phase, EGEE comprised over 70 contractors and over 30 non-contracting participants, and was divided into 12 partner federations, covering a wide range of both scientific and industrial applications. With funding of over 30 million Euro from the European Commission (EC), the project was one of the largest of its kind. The initial focus of the project was on two application areas, namely High Energy Physics (HEP) and Biomedicine. The rationale behind this was that these fields were already grid-aware and would serve well as pilot areas for the development of the various EGEE Grid services.

The first phase provided the basis for assessing subsequent objectives and funding needs, and gave way to a second phase which started on 1 April 2006. In this phase the consortium grew to over 90 contractors and a further 48 non-contracting participants in 32 countries, and its funding increased to over 36 million Euro from the EC. It maintains its organizational structure of geographical federations. The EGEE Grid consists of over 20,000 CPUs, in addition to about 10 Petabytes (10 million Gigabytes) of storage, and maintains on average 20,000 concurrent jobs. More than two thousand scientists from all over the world submitted over 17 million jobs during 2006, a three-fold increase compared to 2005.

At present there are more than 20 applications from 9 domains on the EGEE Grid infrastructure: Astrophysics, Computational Chemistry, Earth Sciences, Financial Simulation, Fusion, Geophysics, High Energy Physics, Life Sciences, and Multimedia. In addition, there are several applications from the industrial sector running on the EGEE Grid, such as applications from geophysics and the plastics industry.

The EGEE project now provides a stable and reliable Grid infrastructure with its own middleware stack, gLite. EGEE began work using the LCG-2 middleware provided by the LCG project (which is itself based on the middleware from the EU DataGrid project, EGEE's predecessor). In parallel it produced the gLite middleware, using reengineered components from a number of sources to produce lightweight middleware that provides a full range of basic Grid services, part of which is based on Globus version 2.4. As of September 2006, gLite is at version 3.0 and comprises some 220 packages arranged in 34 logical deployment modules.
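
To give a concrete flavor of how users interact with gLite, here is a minimal sketch of a job submission: the job is described in JDL (Job Description Language) and handed to the workload management client. Command names changed between LCG-2 and gLite releases (edg-job-submit, glite-job-submit, glite-wms-job-submit), so the command below should be read as illustrative of gLite 3.0 rather than definitive; a gLite user interface machine and a valid VOMS proxy are assumed.

    # Minimal sketch: write a JDL file and submit it through the gLite workload
    # management client. Assumes a gLite 3.0 user interface and a valid proxy.
    import subprocess
    import textwrap

    JDL = textwrap.dedent("""\
        Executable    = "/bin/hostname";
        StdOutput     = "std.out";
        StdError      = "std.err";
        OutputSandbox = {"std.out", "std.err"};
    """)

    def submit(jdl_text: str, jdl_path: str = "hello.jdl") -> str:
        """Write the JDL description to disk and submit it, returning the job ID text."""
        with open(jdl_path, "w") as f:
            f.write(jdl_text)
        result = subprocess.run(
            ["glite-wms-job-submit", "-a", jdl_path],  # -a: delegate a proxy automatically
            capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        print(submit(JDL))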

The German D-Grid Initiative

In 2003, the German scientific community published a strategic paper examining the status and consequences of grid technology for scientific research in Germany and recommending a long-term strategic grid research and development initiative. This resulted in the German e-Science Initiative, founded by the German Ministry for Research and Education (BMBF). In 2004, BMBF presented the vision of a new quality of digital scientific infrastructure which will enable globally connected scientists to collaborate on an international basis; exchange information, documents and publications about their research work in real time; and guarantee efficiency and stability even with huge amounts of data from measurements, laboratories and computational results.

The e-Science Initiative and the first phase of D-Grid started in September 2005. BMBF is funding over 100 German research organizations with 100 Million Euro over the next 5 years. For the first 3-year phase of D-Grid, financial support is approximately 25 Million Euro. The goal is to design, build and operate a network of distributed, integrated and virtualized high-performance resources and related services to enable the processing of large amounts of scientific data and information. The Ministry for Research and Education is funding the assembling, set-up and operation of D-Grid in several overlapping stages:

  1. D-Grid, 2005-2008: IT services for scientists. The global services infrastructure will be tested and used by Community Grids in the areas of high-energy physics, astrophysics, medicine and life sciences, earth sciences (e.g. climate), engineering sciences, energy, and scientific libraries.
  2. D-Grid 2, 2007-2009: IT services for scientists, industry, and business, including new applications in chemistry, biology, drug design, economics, data visualization, and so on. Grid service providers will offer basic IT services to these users.
  3. D-Grid 3, 2008-2010: intended to extend the grid infrastructure with an SLA and a knowledge-management layer, add several virtual competence centres, encourage global service-oriented architectures in industry, and put the grid infrastructure to use for the benefit of society as a whole.

D-Grid consists of the DGI infrastructure project and (currently) the following seven Community Grid projects: AstroGrid-D (Astronomy), C3-Grid (Earth Sciences), HEP Grid (High-Energy Physics), InGrid (Engineering), MediGrid (Medical Research), TextGrid (Scientific Libraries, Humanities), and WISENT (Knowledge Network Energy Meteorology).

The short-term goal of D-Grid is to build a core grid infrastructure for the German scientific community by the end of 2006. First test and benchmark computations will then be performed by the Community Grids to provide technology feedback to DGI. Climate researchers of the C3-Grid, for example, will then be able to predict climate changes faster and more accurately than before, to inform governments about potential environmental measures. Similarly, astrophysicists will be able to access and use radio telescopes and supercomputers remotely via the grid, which they would not be able to access otherwise, resulting in a new quality of research and research data.

The D-Grid Infrastructure project (DGI) provides a set of basic grid middleware services to the Community Grids. So far, a core grid infrastructure has been built for the community grids for testing, experimentation, and production. High-level services will be developed which guarantee security, reliable data access and transfer, and fair-use policies for computing resources. This core grid infrastructure will then be further developed into a reliable, generic, long-term production platform which can be enhanced in a scalable and seamless way, for example by adding new resources and services, distributed applications and data, and automated “on demand” provisioning of a support infrastructure.

DGI offers several grid middleware packages (gLite, Globus and UNICORE) and data management systems (SRB, dCache and OGSA-DAI). A support infrastructure helps new communities and Virtual Organizations (VOs) with the installation and integration of new grid resources via a central Information Portal. In addition, software tools for managing VOs are offered, based on VOMS and Shibboleth. Monitoring and accounting prototypes for distributed grid resources exist, as well as an early concept for billing in D-Grid. DGI offers consulting for new Grid Communities in all technical aspects of networking and security, e.g. firewalls in grid environments, alternative network protocols, and CERT (Computer Emergency Response Team) services. DGI partners operate Registration Authorities to simplify the application for internationally accepted Grid Certificates from DFN (the German Research Network organization) and GridKa (Grid Computing Centre Karlsruhe). DGI partners also support new members in building their own Registration Authorities. The GridSphere portal framework serves as the user interface. Within the D-Grid environment, SRM/dCache takes care of the administration of large amounts of scientific data.
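
As a small illustration of the data-management side of this stack, the sketch below moves a result file to a dCache-backed storage element over GridFTP using the standard Globus client tools mentioned above. It assumes the Globus clients are installed and a grid proxy has been created with grid-proxy-init; the gsiftp:// endpoint is a hypothetical placeholder, not a real D-Grid storage element.

    # Minimal sketch: check for a valid grid proxy, then copy a local file to a
    # GridFTP storage element with globus-url-copy. Host and path are placeholders.
    import subprocess

    def proxy_is_valid() -> bool:
        """Return True if a usable grid proxy exists (grid-proxy-info -exists)."""
        return subprocess.run(["grid-proxy-info", "-exists"]).returncode == 0

    def copy_to_storage_element(local_path: str, remote_url: str) -> None:
        """Copy a local file to a GridFTP URL using globus-url-copy."""
        subprocess.run(["globus-url-copy", "file://" + local_path, remote_url],
                       check=True)

    if __name__ == "__main__":
        if not proxy_is_valid():
            raise SystemExit("No valid grid proxy; run grid-proxy-init first.")
        copy_to_storage_element("/tmp/results.dat",
                                "gsiftp://se.d-grid.example.de/data/results.dat")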

The Japanese NAREGI Project

The National Research Grid Initiative (NAREGI) was created in 2003 by the Ministry of Education, Culture, Sports, Science and Technology (MEXT). From 2006, under the “Science Grid NAREGI” program of the “Development and Application of Advanced High-Performance Supercomputer” project promoted by MEXT, research and development is continuing to build on current results, while expanding in scope to include application environments for next-generation, peta-scale supercomputers.

The main objective of NAREGI is to research and develop grid middleware according to global standards, to a level that can support practical operation, in order to implement a large-scale computing environment (the Science Grid) for widely distributed, advanced research and education. NAREGI is carrying out R&D from two directions: grid middleware development at the National Institute of Informatics (NII), and an applied experimental study using nano-applications at the Institute for Molecular Science (IMS). These two organizations advance the project in cooperation with industry, universities and public research facilities. NII is also promoting the construction of the Cyber Science Infrastructure (CSI), the base for next-generation academic research. A core technology of CSI is the science grid environment, which will be built on academic data networks such as SuperSINET.

A large number of research bodies from academia and industry are participating in this program, with research and development of grid middleware centered at the National Institute of Informatics (NII), and empirical research into grid applications being promoted by the Institute for Molecular Science (IMS). Also, in order to promote use of grid technology in industry, the Industrial Committee for Super Computing Promotion gathers research topics from industry and promotes collaborative work between academic and industrial research bodies. The results of this research will support construction of the Cyber Science Infrastructure (CSI), which is the academic research base being promoted by NII, as well as construction of the peta-scale computing environment for scientific research. Through this, NAREGI will accelerate research and development in scientific fields, improve international cooperation, and strengthen competitiveness in an economically effective way.

The middleware being developed by NAREGI will present heterogeneous computation resources, including supercomputers and high-end servers connected by network, to users as a single, large, virtual computing resource. In order to build a global grid, the middleware is being developed according to the Science Grid environment standards specifications from the Open Grid Forum. The infrastructure provides a user-friendly environment to the user, who can then focus on his/her computational science research without concern for the scale of computing resources or environment required. High-throughput processing and meta-computing can be applied to large-scale analysis using the grid, allowing the supercomputers to be used to their maximum capabilities.

This environment allows multi-scale/multi-physics coupled simulations, which are becoming very important in the computational sciences, to run across heterogeneous resources. Resource allocation is suited to each application, so that coupled analysis can be done easily. Virtual Organizations (VOs), separate from the real organizations to which researchers and research bodies belong, can be formed dynamically on the Grid according to the needs of the research community.

In 2003, NAREGI developed a component technology based on UNICORE, and in 2004, released an alpha-version prototype of middleware based on UNICORE to test integrated middleware functions. In 2005, research and development was advanced on beta-version grid middleware, based on newly-established OGSA specifications, to align with global activity. This beta version was released as open-source software in May 2006, and included enhanced functions supporting virtual organizations. In 2007, NAREGI Version 1.0, based on this beta version, will be released. From 2008, the scope of research and development will be expanded to include application environments for next-generation supercomputers, and the results of this will be released as NAREGI Version 2.0 in 2010.

The UK e-Science Program

The UK e-Science program was proposed in November 2000 and launched the following year. The total funding for the first phase was $240M, with $30M allocated to a Core e-Science Program. This was an activity across all the UK's Research Councils to develop generic technology solutions and generic middleware to enable e-Science and to form the basis for new commercial e-business software. The $30M funding was enhanced by an allocation of a further $40M from the Department of Trade and Industry, which was required to be matched by equivalent funding from industry. The Core e-Science Program, managed by the UK Engineering and Physical Sciences Research Council (EPSRC) on behalf of all the Research Councils, is therefore the generic part of e-Science activities within the UK, ensuring a viable infrastructure and coordination of the national effort.

The first phase of the Core e-Science Program (2001-2004) was structured around six key elements: a National e-Science Center linked to a network of Regional e-Science Grid Centers; Generic Grid Middleware and Demonstrator Projects; Grid Computer Science-based Research Projects; Support for e-Science Application Pilot Projects; Participation in International Grid Projects and Activities; and Establishment of a Grid Network Support Team.

To ensure that researchers developing e-Science applications are properly supported, especially in the initial stages, the Grid Support Center was established. The UK Grid Support Center supports all aspects of the deployment, operation and maintenance of grid middleware and distributed resource management for the UK grid test-beds. The Grid Network Team (GNT) works with application developers to help identify network requirements and map them onto existing technology. It also considers the long-term networking research issues raised by the grid.

The second phase of the Core e-Science Program (2004-2006) is based around six key activities: a National e-Science Center linked to a network of Regional e-Science Centers; Support activities for the UK e-Science Community; an Open Middleware Infrastructure Institute (OMII); a Digital Curation Center (DCC); New Exemplars for e-Science; and Participation in International Grid Projects and Activities.

Of particular significance in the second phase are the OMII and DCC. The Open Middleware Infrastructure Institute (OMII) is an institute based at the University of Southampton, located in the School of Electronics and Computer Science. The vision for the OMII is to become the source for reliable, interoperable and open-source grid middleware, ensuring the continued success of grid-enabled e-Science in the UK.

The Digital Curation Center (DCC) supports UK institutions with the problems involved in storing, managing and preserving vast amounts of digital data to ensure their enhancement and continuing long-term use. The purpose of the DCC is to provide a national focus for research into curation issues and to promote expertise and good practice, both nationally and internationally, for the management of all research outputs in digital format. The DCC is based at the University of Edinburgh.

In addition to the UK e-Science program, there have been UK initiatives in the social sciences and the arts and humanities: the National Centre for e-Social Science has embarked on an ambitious programme of developing e-Social Science tools and evaluating their social implications. Further, there is now an Arts and Humanities e-Science Support Centre, which is creating a community around the uses of e-Science in, for example, history and linguistics.

As a result of this initiative the UK e-Science program has enjoyed a number of strengths including:

  • An advanced national grid infrastructure, built specifically for grid computing. The National Grid Service (NGS) is one of the facilities available to UK researchers, providing access to over 2000 processors and over 36 TB of “data-grid” capacity.
  • Availability of Funding: new research and industrially related funding from the UK government and different funding bodies. Over $500M have been invested in the e-Science program over the last five years. This has been followed by smaller-scale funding more recently for e-social science and e-research in arts and humanities.
  • Industrial involvement: Over 100 companies are involved in UK e-Science projects, including IBM, Intel, Oracle, and Sun, as well as a large number of other national and international companies in domains ranging from finance to pharmacy.
  • The UK has extended its e-science capability to include not only the sciences and engineering, but also social sciences and arts and humanities, which will provide benefits across the academic community.
  • New research advances: Large-scale multidisciplinary teams of scientists have worked together and made advances in a wide range of disciplines.

The US TeraGrid

TeraGrid is an open scientific discovery infrastructure combining leadership class resources at nine partner sites to create an integrated, persistent computational resource. Using high-performance network connections, the TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities around the country.

TeraGrid is coordinated through the Grid Infrastructure Group (GIG) at the University of Chicago, working in partnership with the Resource Provider sites: Indiana University, Oak Ridge National Laboratory, National Center for Supercomputing Applications, Pittsburgh Supercomputing Center, Purdue University, San Diego Supercomputer Center, Texas Advanced Computing Center, University of Chicago/Argonne National Laboratory, and the National Center for Atmospheric Research.

Terascale Initiatives 2000-2004: In response to the 1999 report by the President's Information Technology Advisory Committee (PITAC), NSF embarked on a series of “Terascale” initiatives to acquire computers capable of trillions of operations per second (teraflops), disk-based storage systems with terabyte capacities, and gigabits-per-second networks. In 2000, the $36 million Terascale Computing System award to PSC supported the deployment of a computer (named LeMieux) capable of 6 trillion operations per second. When LeMieux went online in 2001, it was the most powerful U.S. system committed to general academic research.

In 2001, NSF awarded $45 million to NCSA, SDSC, Argonne National Laboratory, and the Center for Advanced Computing Research (CACR) at the California Institute of Technology to establish a Distributed Terascale Facility (DTF). Aptly named the TeraGrid, this multi-year effort aimed to build and deploy the world's largest, fastest, most comprehensive distributed infrastructure for general scientific research. The initial TeraGrid specifications included computers capable of performing 11.6 teraflops, disk-storage systems with capacities of more than 450 terabytes of data, visualization systems, and data collections, all integrated via grid middleware and linked through a 40-gigabits-per-second optical network.

In 2002, NSF made a $35 million Extensible Terascale Facility (ETF) award to expand the initial TeraGrid to include PSC and integrate PSC's LeMieux system. Resources in the ETF provide the national research community with more than 20 teraflops of computing power distributed among the five sites and nearly one petabyte of disk storage capacity.

In 2003, NSF made three Terascale Extensions awards totaling $10 million, to further expand the TeraGrid's capabilities. The new awards funded high-speed networking connections to link the TeraGrid with resources at Indiana and Purdue Universities, Oak Ridge National Laboratory, and the Texas Advanced Computing Center. Through these awards, the TeraGrid put neutron-scattering instruments, large data collections and other unique resources, as well as additional computing and visualization resources, within reach of the nation's research and education community.

In 2004, as a culmination of the DTF and ETF programs, the TeraGrid entered full production mode, providing coordinated, comprehensive services for general U.S. academic research.

The TeraGrid 2005-2010: In August 2005, NSF's newly created Office of Cyberinfrastructure extended support for the TeraGrid with a $150 million set of awards for operation, user support and enhancement of the TeraGrid facility. Using high-performance network connections, the TeraGrid now integrates high-performance computers, data resources and tools, and high-end experimental facilities around the country. As of early 2006, these integrated resources include more than 102 teraflops of computing capability and more than 15 petabytes of online and archival data storage with rapid access and retrieval over high-performance networks. Through the TeraGrid, researchers can access over 100 discipline-specific databases. With this combination of resources, the TeraGrid is the world's largest, most comprehensive distributed cyberinfrastructure for open scientific research.

Acknowledgement:

This report has been funded by the Renaissance Computing Institute RENCI at the University of North Carolina in Chapel Hill. I want to thank all the people who have contributed to this report and who are listed in the report on http://www.renci.org/publications/reports.php.

About the Author:

Wolfgang Gentzsch is heading the German D-Grid Initiative. He is an adjunct professor at Duke and a visiting scientist at RENCI at UNC Chapel Hill, North Carolina. He is Co-Chair of the e-Infrastructure Reflection Group and a member of the Steering Group of the Open Grid Forum.
