Building Singapore’s National Grid

By Nicole Hemsoth

October 2, 2006

In this GRIDtoday Q&A, Hing-Yan Lee, deputy director of Singapore's National Grid Office, discusses his organization's work to establish a nationwide cyberinfrastructure with the purpose of improving economic and technological competitiveness. Lee is presenting this week at the Gelato ICE: Itanium Conference & Expo in Biopolis, Singapore.

GRIDtoday: To begin, can you give us a background on the National Grid Office? When was it established, and what was the impetus behind its creation?

HING-YAN LEE: The National Grid has the mission of transforming Singapore into a nation where compute resources can be interconnected via a next-generation cyberinfrastructure that allows the sharing of compute resources in a secure, reliable and efficient manner by authenticated users for education, commerce, entertainment, R&D and national security, in order to improve the economic and technological competitiveness of the country. To this end, the National Grid Office (NGO) was established on Jan. 2, 2003, to fulfill the mission of the National Grid and promote the adoption of Grid computing in Singapore.

The National Grid achieves its mission by the following means:

  • Formulating the framework & policies.
  • Planning and developing a secure platform.
  • Adopting common open standards.
  • Encouraging the adoption of Grid computing.
  • Demonstrating the commercial viability of compute-resource-on-tap.
  • Laying the foundation for a vibrant Grid computing economy.

National Grid (Phase 1) was launched in November 2003 with several 1 Gbps high-speed networks connecting over 250 CPUs belonging to the Agency for Science, Technology and Research (A*STAR) research institutes, the National University of Singapore (NUS) and Nanyang Technological University (NTU). Compute resources have since increased to nearly 1,000 CPUs, with some 15 Grid-enabled applications from the R&D community running on the National Grid Pilot Platform (NGPP). This successful linking up of the research institutes, universities and various government agencies has paved the way for strong industry participation in the next phase.

The National Grid (Phase 2) is co-funded by A*STAR's two research councils — the Science & Engineering Research Council and the Biomedical Research Council — the Defence Science & Technology Agency (DSTA), the Infocomm Development Authority (IDA) of Singapore, NUS and NTU. The focus is on promoting the adoption of Grid computing by industry and business users. Besides the R&D community, we see good potential in the digital media, collaborative manufacturing, engineering services and education sectors.

Gt: What is your position within the NGO? What are your responsibilities?

LEE: As deputy director at the National Grid Office, I direct, plan and coordinate the national initiative to realize a cyberinfrastructure for sharing and aggregating compute resources for R&D and industry. I am also project director of the National Grid Pilot Platform, and oversee the National Grid Competency Centre (NGCC) and the National Grid Operations Centre. I spend a considerable amount of my time promoting Grid computing to potential users and meeting stakeholders.

Gt: Moving on to your presentation at Gelato ICE, where you'll be speaking about successful projects NGO has carried out, can you highlight a few of these projects right now?

LEE: The National Grid effort started off promoting adoption within the R&D community. The NGO, through the NGCC, assists users in Grid-enabling their applications and executing them over the NGPP. Projects include defense-related, physical sciences and life sciences applications.

To bring Grid to industry, we have put in place measures to address the needs of industry users. The Multi-Organization Grid Accounting System (MOGAS) handles metering and accounting information. To strengthen the security of the Grid, we have appointed Netrust Pte Ltd as our Certificate Authority; they are able to implement digital certificates flexibly for use with Globus. To ease the use of the grid, we have installed the LSF Meta-Scheduler (from Platform Computing) on the NGPP, which interoperates with local workload schedulers (e.g., PBS, LSF and N1 Grid Engine) on compute resources under different administrative domains.

Our foray into industry started with the digital media sector, where we made available a pool of floating licenses of a commercial animation rendering software for use by small and medium enterprises (SMEs) in the digital media sector, running on the NGPP resources, their own resources or a combination of both. The idea is to aggregate demand from these SMEs so that, collectively, the provision of the software can be sustained; at the same time, the SMEs pay only for what they use instead of making hefty upfront investments.

On the international front, we participate in the CERN Large Hadron Collider Computational Grid (LCG) project. We are also active members of international bodies such as: Asia Pacific Advanced Network (APAN); Asia Pacific Grid Policy Management Authority (APGrid PMA); Asia Pacific Network Information Centre (APNIC); Gelato Federation; HP Collaboration and Competency Network (HPCCN); and Pacific Rim Application & Grid Middleware Assembly (PRAGMA).

We also promote regional cooperation through the Southeast Asian Grid Forum and facilitate collaboration between U.K. and Singapore researchers under the UK-Singapore Partners in Science program.

Gt: How has the use of Itanium-based compute resources contributed to the success of these projects?

LEE: The Itanium-based compute resources are part of our contribution to the LCG project. These resources have also been used extensively for R&D projects, including the Jet Flow Simulation by Temasek Labs and the Computational Identification of Human MicroRNA Targets Associated with Oncogenesis by the Bioinformatics Institute. The first is a defense-related project that aims to understand the detailed dynamics of jet entrainment and mixing, which is of fundamental importance to applications such as noise suppression, combustion, heat transfer and chemical reactors. The second project aims to aid the diagnosis of cancer. The Itanium-based machines have also been used to run commercial applications, such as animation rendering projects by digital media companies.

Gt: Sticking with Gelato's “Linux on Itanium” focus, I'm wondering how Linux played into these projects. Is Linux the common OS across the NGO's various projects?

LEE: In setting up the NGPP, we made use of existing computational resources, which resulted in a heterogeneous grid. As all of these resources (including several Itanium clusters belonging to participating organizations) run Linux, it was the obvious choice as the common OS.

In our [email protected] program, we have established a sub-grid comprising Windows-based machines belonging to participating schools for their students to work on PC-Grid projects. However, the server for the PC-Grid remains Linux-based.

Gt: Across how many fields is the NGO carrying out (and has it carried out) projects? Are fields specific to user communities, or are you also working on general software/middleware solutions?

LEE: In the spirit of the grid, the NGO works in close partnership with the local Grid community to achieve the mission of the National Grid, and participates in international collaborations.

The local Grid community takes the form of Virtual Grid Communities (VGC), Working Groups (WG) and Special Interest Groups (SIG). VGCs consist of like-minded individuals from the same domain who are keen to explore the use of Grid to further developments in their domain. The WGs comprise industry practitioners, academics and researchers who volunteer their time and expertise to provide technical advice. WGs formed include: Applications; Middleware & Architecture; Network; Security; and Governance & Policies. The SIGs are birds-of-a-feather groups that may evolve into full-fledged WGs over time, once the interests are clearly identified and the specific community reaches a critical mass for sustainability. The current SIGs focus on Systems Administration, Access Grid and PC Grid Computing. So, the WGs and SIGs are horizontal in nature, while the VGCs are vertical. We are evolving these groups into the Singapore Grid Forum.

NGO also provides grants to researchers to work on Grid projects with funding support from A*STAR and IDA. Hitherto, 17 projects have been supported.

Gt: I'm interested in your focus on Virtual Grid Communities. Can you describe what NGO is doing to provide cyberinfrastructure capabilities to the life sciences, physical sciences, digital media and manufacturing communities?

LEE: Highways are useful only if there are vehicles to run on them. Likewise, there must be applications running on the NGPP. To focus our resources, we have identified key sectors that are likely to benefit from Grid computing. In consultation with the economic agencies that work closely with the business and industry communities, we direct our current efforts to physical sciences, life sciences, digital media, manufacturing and education.

We set up a VGC for each sector and provide secretariat support to bring people together to brainstorm how Grid can benefit their domain. Worthy project proposals can then tap the various funding channels. The NGPP resources are available for the VGCs' use. We also provide manpower to Grid-enable the applications. The VGCs get to showcase their work through symposia held in conjunction with GridAsia, our annual flagship conference. We also see potential in the finance, government and health care sectors.

Gt: Finally, I'd like to discuss a couple of other initiatives being undertaken by the NGO. Can you speak a little about what you're doing with the Grid Computing Competency Certification — an area where many are bemoaning a lack of qualified workers?

LEE: There is indeed a shortage of qualified workers in Grid computing. We started the Grid Computing Competency Certification (GCCC) to develop the capabilities of working IT professionals so they can meet the needs of the industry. We have established the GCCC Committee, comprising representatives from institutes of higher learning, to manage and administer the GCCC.

The GCCC consists of two parts. Part 1 provides a basic foundation in Grid computing, while Part 2 delves into key areas of Grid computing in more detail, with emphases on various tracks, such as Grid Architect, Grid Programmer, Bioinformatics and Digital Media. Several training service providers have been appointed to conduct courses that follow the GCCC Part 1 and Part 2 syllabi. Courses conducted by vendors and third-party trainers that are relevant to the syllabus have been accredited with credit points toward the certification.

We are heartened that three universities in Singapore plan to include the syllabus in the curricula of their degree courses. This would be a long-term solution to address the manpower shortage and to get Grid into the mainstream.

Gt: What kind of success have these projects and programs, as well as any other initiatives being undertaken by the NGO, had in terms of getting Singapore's commercial sector involved with Grid computing?

LEE: We are happy with the level of Grid adoption by the R&D community and will continue to ramp up our efforts. Moving into the business and industry sectors is a totally different ball game.

As with any new technology, we need to create awareness of the benefits and identify the business drivers. With the R&D community, because the applications are either developed by the researchers themselves or based on open source, the availability of educational and non-commercial software licensing is not a great issue to surmount. For commercial applications, the current software licensing model needs to evolve into one that makes economic sense for both the ISVs and the users before the latter can harness large amounts of computational resources. To this end, we have started proofs of concept with several ISVs and users to further understand new licensing models. We are also working with several companies and organizations on pilot enterprise Grid projects.

Gt: How do you think Singapore's experiences compare to what's going on with worldwide commercial involvement with Grid?

LEE: It is still early days for our efforts to promote the adoption of Grid computing by business and commercial users. We are encouraged by the number of digital media SMEs that have used grid resources for commercial work in the past year, and by the pipeline of similar projects coming onboard. Newer endeavors on enterprise Grid projects have only recently started.

About Hing-Yan Lee

Dr. Hing-Yan Lee, on secondment from his principal scientist position at the Institute for Infocomm Research, is the deputy director at the Singapore National Grid Office (NGO), where he directs, plans and coordinates the national initiative to realize a cyberinfrastructure for sharing and aggregating compute resources for R&D and industry. He is concurrently the project director of the National Grid Pilot Platform. Hing-Yan previously worked at the Kent Ridge Digital Labs, Japan-Singapore Artificial Intelligence Center and Information Technology Institute. He graduated from the University of Illinois at Urbana-Champaign with Ph.D. and MS degrees in Computer Science. He previously studied at the Imperial College (United Kingdom), where he obtained a BSc Eng. in Computing and an MSc in Management Science.
 
