Taking Over the Reins: New OGF President Craig Lee

By Nicole Hemsoth

September 17, 2007

In this interview, new OGF president Craig Lee (of Aerospace Corporation) discusses a variety of topics, ranging from what he thinks will be his key focuses and challenges during his tenure to the importance of working with other bodies and embracing new technology areas.

— 

GRIDtoday: Congratulations on your appointment as the next president of OGF. Tell us a little bit about your background and why you wanted the job.

CRAIG LEE: My background is in parallel and distributed processing, and over the years this naturally led to my involvement in grid computing and the Grid Forum. After discussing the opportunity with the OGF Board, and management at The Aerospace Corporation, I realized that the stars had aligned between my technical interests, my desire to serve the grid community and my corporate responsibilities.

I’ve seen this field evolve tremendously over the years. In grid computing, as in service-oriented architectures, utility computing, ubiquitous computing, etc., the key issue is the management of shared resources. For any of these technologies to be effective, there must be a critical mass of adoption in several key functional areas, such as catalogs and discovery, job submission, workflow management, resource virtualization and provisioning, and data management. 

We must also recognize that there will be a spectrum of solutions to meet different requirements — from lightweight mash-ups that enable rapid prototyping and deployment to applications that need robust security models and support for virtual organizations. This spectrum of industry needs requires precisely the kind of pervasive adoption efforts that are at the heart of OGF. OGF has a rich history and a bright future, and I am excited to be serving as its next president.

Gt: How do you think your background in the aerospace industry, which I assume is very HPC-oriented, will help or hinder your ability to relate to and, in fact, to relay the promise of grids to, mainstream companies?

LEE: While there certainly is a lot of HPC in the aerospace industry, requirements actually run the gamut. There are datacenters and lots of data repositories. Resource integration and interoperability are huge issues. Many of these same problems exist in mainstream companies.

With regard to The Aerospace Corporation, in particular, we are a non-profit, federally funded research and development center (FFRDC) for all space-related technologies. This means anything to do with satellites and their ground systems. Existing satellite ground systems are essentially grids, but were individually designed from the ground up and statically configured, with no particular distributed computing standards. There is tremendous momentum to make these systems commercial-off-the-shelf (COTS) through the use of service-oriented architectures to reduce acquisition and operation costs.  Hence, the adoption of service grids by the IBMs, Boeings and Lockheed-Martins of the world is a key goal.

Some people may feel that a non-profit is insensitive to market forces, but working at Aerospace Corporation may actually be an advantage when it comes to my work as president of OGF. Our corporate raison d’etre is to facilitate the maturation and adoption of useful technologies for space. When it comes to ground systems, this is not unlike the broad commercial marketplace. I have no other goal but to facilitate the adoption of the best technology as quickly as possible. This means bringing consensus and stability to the technical marketplace.

Gt: Obviously, the grid landscape has evolved quite a bit in the past couple of years — and a whole lot since the GGF’s inception. How important do you think it is for the OGF to stay aligned with changes in the industry?

LEE: Aligning with changes in the industry is a critical goal for OGF as an organization.  We’ve led some of these changes, such as the HPC Profile and the use of JSDL. We’ve also influenced work in the wider community. The GLUE information model, for instance, was developed for grid entities and we are now working with the DMTF to harmonize with their Common Information Model. We’ve also adopted technology where necessary and appropriate, such as using WS-Security in the HPC Profile. Another example of alignment with the broader landscape is the recasting of the Open Grid Service Infrastructure (OGSI) to use the emerging Web services specifications.

The fact is that all approaches to distributed computing require much the same fundamental capabilities, but different organizations in different market segments look at it in different ways. Harmonizing these efforts across organizations and getting a dominant practice in the marketplace is critical. I’m fond of saying it’s like getting different “tribes” that all use different nouns and verbs for essentially the same things to talk to one another.

Gt: What are your thoughts on pushing the commercial grid agenda?

LEE: Achieving commercially available grid components and services will enable entirely new areas and applications for research, industry, commerce, government and society. Only a few years ago, the Internet was an academic and scientific domain. Now, billions of people use it for everyday activities. We want and expect grids to produce similar benefits for both industry and research. From a research perspective, commercialization of grid technologies will enable low cost, off-the-shelf capabilities for scientific research and innovation — in much the same way as clusters did.

From an industry perspective, widely available grid products and services are critical to mainstream adoption. Grids can enable more automated interactions between companies, tighter integration of global operations and enhanced interoperability, all resulting in lower costs and greater competitive advantage. For an information society and economy, the possibilities are tremendous.

In the here-and-now, however, the commercial grid agenda is going to be a multi-faceted issue. Virtualization, service-oriented architectures and storage networks all speak to different commercial segments and are being developed somewhat independently to address specific needs in those different contexts.

Gt: In a recent interview, current president Mark Linesch discussed the relationship between grid computing, virtualization and SOA. What are your thoughts on the importance of these technologies as the OGF continues to evolve? Are there any other complementary or derivative technologies that you think ought to be on the organization’s radar in the coming years?

LEE: OGF has started a set of activities, which I fully intend to continue, whose goal is to harmonize the development of grids, service architectures and virtualization. Server virtualization is having a huge impact on how datacenters address the service provisioning problem. It allows them to provision a service through a virtual server on a cluster that can be assigned dynamically, on the fly. Server virtualization also offers important security capabilities by being able to isolate malicious processes.

How can we support this same kind of capability at scale in a distributed environment? Grids enable server virtualization to be pooled, aggregated and managed across sites. Grids enable policy-driven usage of these virtual resources whereby loads, completion times and graceful failover can all be transparently managed. OGF's Grid and Virtualization Working Group, for instance, is an effort within OGF that is looking at the intersection between grids and virtualization.

Service-oriented architectures, or simply service architectures, have a natural resonance with grids. The find-bind-use concept is native to both. Again, this is an instance of where different approaches and implementations have to be harmonized. The notion of service objects and data objects has a strong similarity to WSRF, which originated in GGF and then was sent through the OASIS process to get buy-in from the larger Web services community. OGF must forge alliances with other organizations such as the Open SOA Consortium to bring consensus to the marketplace.

Another important development is Web 2.0, which offers an easy way to do rapid prototyping of distributed systems. Precisely because of this simplicity and ease of use, real communities of use will grow up around it. This is especially telling since many people complain that traditional grid tools and toolkits are too complex and cumbersome to install, use and maintain. I think there needs to be a continuum of tools — from the easy-to-use, very lightweight Web 2.0 mash-up tools that have simple security and discovery models, to more complete, traditional grid tools that have robust security models, support for virtual organizations, attribute-based authorization, etc. There should be a growth path between the two extremes whereby additional capabilities can be added as needed.

It’s very interesting to note that half of all the registered Web 2.0 URLs are Google Maps-related. That is to say, they are geospatial in nature. It’s probably no accident that Google is pushing Keyhole Markup Language (KML) through the Open Geospatial Consortium (OGC) standardization process. Equally interesting, and certainly no accident, is that OGF is starting a collaboration with OGC to integrate its standard geospatial tools with grid-based, distributed resource management. To start with, we want to back-end their Web Processing Service (WPS) with grid computing resources to enable large-scale processing. The WPS could also be used as a front-end to interface to multiple grid infrastructures, such as TeraGrid, NAREGI, EGEE, and the United Kingdom’s National Grid Service. This would be a serious application driver for both grid and data interoperability issues. When integrated with their Catalog Service for the Web and Web Map/Feature/Coverage Services, we would enable a whole raft of geospatial applications on a scale not seen before, including things like satellite ground systems. The goal is not just to do science, but to greatly enhance things like operational hurricane forecasting, location-based services and anything to do with putting data on a map.

Gt: What is your expectation of how OGF membership demographics might shift over the next few years, especially in terms of presence of end-users, IT managers, CIOs, vendors, developers, academics, research, industry, etc?

LEE: We definitely want to see more direct involvement by industry while preserving our historical constituency of research and academia. It’s certainly true that grid computing grew out of HPC efforts at national labs and universities, but the technologies developed are so fundamental and widely applicable that we have to make every effort to achieve consensus in the marketplace of ideas. In a sense, the fact that there are so many related activities by different groups that are looking at different parts of the elephant is a good problem to have. Getting these different “tribes” to work together requires constant attention. To do this we must engage at every level, from CIOs who are making strategic corporate decisions to technical project leaders, who are where the rubber hits the road.

OGF is also in a unique position to align both world-class research and technical expertise with industrial adoption. Ulf Dahlsten (director of Emerging Technologies and Infrastructures–Applications, Directorate-General for Information Society of the European Commission) has a briefing slide that illustrates the spectrum of technology development from research on one end to commercial products/services on the other, with an “Innovation No Man’s Land” in the middle. I firmly believe that OGF’s mission is to bridge that no man’s land. To that end, we need to ensure the right distribution in the OGF demographics.

Gt: Overall, what do you foresee as the top three issues and goals that will dominate the agenda during your term? What can members of the OGF, and the greater grid community, expect from the OGF during your tenure as president?

LEE: In general I will be pursuing several agenda items during my term as president, including:

  • Continuing to promote widespread adoption of OGF standards. This means bringing consensus and stability to the technical marketplace. This will require lots of interaction and leadership from the entire community – vendors, developers, researchers and, most of all, users. Current OGF specifications such as JSDL and OGSA-BES are good candidates here, but I’d certainly like to see OGSA-DAI and SAGA get serious attention.
  • Pursuing more direct industry collaboration and involvement. There are numerous places where industry is beginning to pick up grid tools (e.g., financial organizations, pharmaceuticals and storage networks). The overlap between traditional grids and developing marketplaces, like virtualization, means that we have to engage directly and bring value to the table. The OGF Data Center Reference Model that Paul Strong and Dave Snelling are helping to champion should be a guiding light and provide a good foundation in this area. I also intend to fully utilize the collaboration with OGC to engage the commercial geospatial community.
  • Reinforcing a solid commitment to our core constituency. The backbone of OGF is our active members that continue to come around the OGF “watercooler” to find like-minded groups and build consensus for common tools that are necessary to support their goals. This requires a spirit of collaboration and outreach that we want to extend to all segments and application domains. By doing so, we will be “managing the technological maturation process” from emerging technology markets to mainstream adoption for the benefit of everyone. 

Gt: Is there anything else you would like to say to our readers?

LEE: I’d certainly like to say that I am honored to be given this opportunity to serve OGF and the grid community. We have a great group of people, all dedicated to accelerating the adoption of grid technologies, but we still have a lot of work ahead of us. The success of OGF depends upon our “volunteer army” and I want to encourage everyone to stay active and engaged.

I would also like to thank Mark Linesch for his tremendous help as I transition into my new role and for his excellent service to OGF for the last three years, which was a period of great change, challenge and opportunity for our community.

Lastly, I invite anyone who is interested in grid and the work of the OGF to come to our next event, OGF21, being held Oct. 15-19 in Seattle. OGF21 will feature an exceptional technical program, workshops on software solutions and scientific applications, and an enterprise track focused on grid use in IT datacenters. More information can be found on the OGF website at www.ogf.org.

Thank you.
