Building Singapore’s National Grid

By Nicole Hemsoth

October 2, 2006

In this GRIDtoday Q&A, Hing-Yan Lee, deputy director of Singapore's National Grid Office, discusses his organization's work to establish a nationwide cyberinfrastructure with the purpose of improving economic and technological competitiveness. Lee is presenting this week at the Gelato ICE: Itanium Conference & Expo in Biopolis, Singapore.

GRIDtoday: To begin, can you give us some background on the National Grid Office? When was it established, and what was the impetus behind its creation?

HING-YAN LEE: The National Grid has the mission of transforming Singapore into a nation where compute resources are interconnected via a next-generation cyberinfrastructure that allows authenticated users to share those resources in a secure, reliable and efficient manner for education, commerce, entertainment, R&D and national security, thereby improving the economic and technological competitiveness of the country. To this end, the National Grid Office (NGO) was established on Jan. 2, 2003, to fulfill the mission of the National Grid and promote the adoption of Grid computing in Singapore.

The National Grid achieves its mission by the following means:

  • Formulating the framework & policies.
  • Planning and developing a secure platform.
  • Adopting common open standards.
  • Encouraging the adoption of Grid computing.
  • Demonstrating the commercial viability of compute-resource-on-tap.
  • Laying the foundation for a vibrant Grid computing economy.

National Grid (Phase 1) was launched in November 2003 with several 1 Gbps high-speed networks connecting over 250 CPUs belonging to the Agency for Science, Technology and Research (A*STAR) research institutes, the National University of Singapore (NUS) and Nanyang Technological University (NTU). Compute resources have since grown to nearly 1,000 CPUs, with some 15 Grid-enabled applications from the R&D community running on the National Grid Pilot Platform (NGPP). This successful linking of the research institutes, universities and various government agencies has paved the way for strong industry participation in the next phase.

The National Grid (Phase 2) is co-funded by A*STAR's two research councils (the Science & Engineering Research Council and the Biomedical Research Council), the Defence Science & Technology Agency (DSTA), the Infocomm Development Authority (IDA) of Singapore, NUS and NTU. The focus is on promoting the adoption of Grid computing by industry and business users. Besides the R&D community, we see good potential in the digital media, collaborative manufacturing, engineering services and education sectors.

Gt: What is your position within the NGO? What are your responsibilities?

LEE: As deputy director at the National Grid Office, I direct, plan and coordinate the national initiative to realize a cyberinfrastructure for sharing and aggregating compute resources for R&D and industry. I am also project director of the National Grid Pilot Platform, and I oversee the National Grid Competency Centre (NGCC) and the National Grid Operations Centre. I spend a considerable amount of my time promoting Grid computing to potential users and meeting stakeholders.

Gt: Moving on to your presentation at Gelato ICE, where you'll be speaking about successful projects NGO has carried out, can you highlight a few of these projects right now?

LEE: The National Grid effort started off by promoting adoption within the R&D community. The NGO, through the NGCC, assists users in Grid-enabling their applications and executing them over the NGPP. Projects include defense-related, physical sciences and life sciences applications.

To bring the Grid to industry, we have put in place measures to address the needs of industry users. The Multi-Organization Grid Accounting System (MOGAS) handles metering and accounting information. To strengthen the security of the Grid, we have appointed Netrust Pte Ltd as our Certificate Authority; Netrust offers the flexibility to issue digital certificates usable with Globus. To ease use of the Grid, we have installed the LSF Meta-Scheduler (from Platform Computing) on the NGPP, which interoperates with the local workload schedulers (e.g., PBS, LSF and N1 Grid Engine) running on compute resources under different administrative domains.
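To make the workflow concrete, below is a minimal, hypothetical sketch of the user-side steps on a Globus-based setup like the one described: obtain a short-lived proxy credential from a CA-issued certificate, then hand a job to the LSF submission layer for the meta-scheduler to route. The grid-proxy-init command is standard Globus Toolkit usage and bsub is LSF's standard submission command, but the queue name, output pattern and "render" executable are illustrative placeholders, not details confirmed by the NGO.

    # Hypothetical sketch of submitting a job on a Globus + LSF grid setup.
    # Assumes the Globus Toolkit and LSF client tools are installed and a
    # valid user certificate (e.g., one issued by the CA) is in place.
    import subprocess

    def init_proxy(hours: int = 12) -> None:
        """Create a temporary Globus proxy credential (standard Globus Toolkit command)."""
        subprocess.run(["grid-proxy-init", "-valid", f"{hours}:00"], check=True)

    def submit_render_job(scene_file: str) -> None:
        """Submit a rendering job via LSF's 'bsub'; the scheduling layer decides
        which cluster and local workload manager actually runs it."""
        subprocess.run(
            [
                "bsub",
                "-q", "ngpp_render",      # hypothetical queue name, for illustration only
                "-o", "render.%J.out",    # LSF writes stdout here; %J is the job ID
                "render", scene_file,     # 'render' stands in for the ISV's rendering tool
            ],
            check=True,
        )

    if __name__ == "__main__":
        init_proxy()
        submit_render_job("shot_042.scn")

The point of the design the NGO describes is that the user submits once; the meta-scheduler, not the user, picks the participating cluster and local scheduler, which is what lets resources under different administrative domains appear as a single pool.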

Our foray into industry started with the digital media sector, where we made available a pool of floating licenses for a commercial animation rendering package to small and medium enterprises (SMEs) in the sector, running on NGPP resources, on their own resources or on a combination of both. The idea is to aggregate demand from these SMEs so that, collectively, provision of the software can be sustained, while each SME pays only for what it uses instead of making a hefty upfront investment.

On the international front, we participate in the CERN Large Hadron Collider Computational Grid (LCG) project. We are also active members of international bodies such as: Asia Pacific Advanced Network (APAN); Asia Pacific Grid Policy Management Authority (APGrid PMA); Asia Pacific Network Information Centre (APNIC); Gelato Federation; HP Collaboration and Competency Network (HPCCN); and Pacific Rim Application & Grid Middleware Assembly (PRAGMA).

We also promote regional cooperation through the Southeast Asian Grid Forum and facilitate collaboration between U.K. and Singapore researchers under the UK-Singapore Partners in Science program.

Gt: How has the use of Itanium-based compute resources contributed to the success of these projects?

LEE: The Itanium-based compute resources are part of our contribution to the LCG project. These resources have also been used extensively for R&D projects, including the Jet Flow Simulation by Temasek Labs and the Computational Identification of Human MicroRNA Targets Associated with Oncogenesis by the Bioinformatics Institute. The first is a defense-related project that aims to understand the detailed dynamics of jet entrainment and mixing, which is of fundamental importance to applications such as noise suppression, combustion, heat transfer and chemical reactors. The second aims to aid the diagnosis of cancer. The Itanium-based machines have also been used to run commercial applications, such as animation rendering projects by digital media companies.

Gt: Sticking with Gelato's “Linux on Itanium” focus, I'm wondering how Linux played into these projects. Is Linux the common OS across the NGO's various projects?

LEE: In setting up the NGPP, we made use of existing computational resources, which resulted in a heterogeneous Grid. As all of these resources (including several Itanium clusters belonging to participating organizations) run Linux, Linux was the obvious choice as the common OS.

In our SG@Schools program, we have established a sub-grid comprising Windows-based machines belonging to participating schools for their students to work on PC-Grid projects. However, the server for the PC-Grid remains Linux-based.

Gt: In how many fields is the NGO carrying out (and has it carried out) projects? Are these fields specific to user communities, or are you also working on general software/middleware solutions?

LEE: In the spirit of the Grid, the NGO works closely in partnership with the local Grid community to achieve the mission of the National Grid, and it participates in international collaborations.

The local Grid community takes the form of Virtual Grid Communities (VGCs), Working Groups (WGs) and Special Interest Groups (SIGs). VGCs consist of like-minded individuals from the same domain who are keen to explore the use of the Grid to further developments in their domain. The WGs comprise industry practitioners, academics and researchers who volunteer their time and expertise to provide technical advice. WGs formed include: Applications; Middleware & Architecture; Network; Security; and Governance & Policies. The SIGs are birds-of-a-feather groups that will evolve into full-fledged WGs over time, once their interests are clearly identified and the specific community reaches a critical mass for sustainability. The current SIGs focus on Systems Administration, Access Grid and PC Grid Computing. So the WGs and SIGs are horizontal in nature, while the VGCs are vertical. We are evolving these groups into the Singapore Grid Forum.

The NGO also provides grants, with funding support from A*STAR and IDA, to researchers working on Grid projects. To date, 17 projects have been supported.

Gt: I'm interested in your focus on Virtual Grid Communities. Can you describe what NGO is doing to provide cyberinfrastructure capabilities to the life sciences, physical sciences, digital media and manufacturing communities?

LEE: Highways are useful only if there are vehicles to run on them. Likewise, there must be applications running on the NGPP. To focus our resources, we have identified key sectors that are likely to benefit from Grid computing. In consultation with the economic agencies that work closely with the business and industry communities, we direct our current efforts to physical sciences, life sciences, digital media, manufacturing and education.

We set up a VGC for each sector and provide secretariat support to bring people together to brainstorm how the Grid can benefit their domain. Worthy project proposals can then tap the various funding channels. NGPP resources are available for the VGCs' use, and we also provide manpower to Grid-enable the applications. The VGCs get to showcase their work through symposia held in conjunction with GridAsia, our annual flagship conference. We also see potential in the finance, government and health care sectors.

Gt: Finally, I'd like to discuss a couple of other initiatives being undertaken by the NGO. Can you speak a little about what you're doing with the Grid Computing Competency Certification — an area where many are bemoaning a lack of qualified workers?

LEE: There is indeed a shortage of qualified workers in Grid computing. We started the Grid Computing Competency Certification (GCCC) to develop the capabilities of working IT professionals and enable them to meet the needs of industry. We have established the GCCC Committee, comprising representatives from institutes of higher learning, to manage and administer the GCCC.

The GCCC consists of two parts. Part 1 provides a basic foundation in Grid computing, while Part 2 delves into key areas of Grid computing in more detail, with emphases on various tracks such as Grid Architect, Grid Programmer, Bioinformatics and Digital Media. Several training service providers have been appointed to conduct courses covering the syllabi of GCCC Parts 1 and 2. Relevant courses conducted by vendors and third-party trainers have also been accredited with credit points towards the certification.

We are heartened that three universities in Singapore plan to include the syllabus in the curricula of their degree courses. This would be a long-term solution to the manpower shortage and would help bring the Grid into the mainstream.

Gt: What kind of success have these projects and programs, as well as any other initiatives being undertaken by the NGO, had in terms of getting Singapore's commercial sector involved with Grid computing?

LEE: We are happy with the level of Grid adoption by the R&D community and will continue to ramp up our efforts. Moving into the business and industry sectors is a totally different ball game.

As with any new technology, we need to create awareness of the benefits and identify the business drivers. Within the R&D community, where applications are either developed by the researchers themselves or based on open source, software licensing is not a great hurdle, thanks to educational and non-commercial licenses. For commercial applications, the current software licensing model needs to evolve into one that makes economic sense for both the ISVs and the users before the latter can harness large amounts of computational resources. To this end, we have started proofs of concept with several ISVs and users to better understand new licensing models. We are also working with several companies and organizations on pilot enterprise Grid projects.

Gt: How do you think Singapore's experiences compare to what's going on with worldwide commercial involvement with Grid?

LEE: It is still early days for our efforts to promote the adoption of Grid computing by business and commercial users. We are encouraged by the number of digital media SMEs that have used Grid resources for commercial work over the past year and by the pipeline of similar projects coming onboard. Newer endeavors on enterprise Grid projects have only recently begun.

About Hing-Yan Lee

Dr. Hing-Yan Lee, on secondment from his principal scientist position at the Institute for Infocomm Research, is the deputy director at the Singapore National Grid Office (NGO), where he directs, plans and coordinates the national initiative to realize a cyberinfrastructure for sharing and aggregating compute resources for R&D and industry. He is concurrently the project director of the National Grid Pilot Platform. Hing-Yan previously worked at the Kent Ridge Digital Labs, Japan-Singapore Artificial Intelligence Center and Information Technology Institute. He graduated from the University of Illinois at Urbana-Champaign with Ph.D. and MS degrees in Computer Science. He previously studied at the Imperial College (United Kingdom), where he obtained a BSc Eng. in Computing and an MSc in Management Science.
 
