Taking Over the Reins: New OGF President Craig Lee

By Nicole Hemsoth

September 17, 2007

In this interview, new OGF president Craig Lee (of Aerospace Corporation) discusses a variety of topics, ranging from what he thinks will be his key focuses and challenges during his tenure to the importance of working with other bodies and embracing new technology areas.

— 

GRIDtoday: Congratulations on your appointment as the next president of OGF. Tell us a little bit about your background and why you wanted the job.

CRAIG LEE: My background is in parallel and distributed processing, and over the years this naturally led to my involvement in grid computing and the Grid Forum. After discussing the opportunity with the OGF Board and management at The Aerospace Corporation, I realized that the stars had aligned between my technical interests, my desire to serve the grid community and my corporate responsibilities.

I’ve seen this field evolve tremendously over the years. In grid computing, as in service-oriented architectures, utility computing, ubiquitous computing, etc., the key issue is the management of shared resources. For any of these technologies to be effective, there must be a critical mass of adoption in several key functional areas, such as catalogs and discovery, job submission, workflow management, resource virtualization and provisioning, and data management. 

We must also recognize that there will be a spectrum of solutions to meet different requirements — from lightweight mash-ups that enable rapid prototyping and deployment to applications that need robust security models and support for virtual organizations. This spectrum of industry needs requires precisely the kind of pervasive adoption efforts that are at the heart of OGF. OGF has a rich history and a bright future, and I am excited to be serving as its next president.

Gt: How do you think your background in the aerospace industry, which I assume is very HPC-oriented, will help or hinder your ability to relate to and, in fact, to relay the promise of grids to, mainstream companies?

LEE: While there certainly is a lot of HPC in the aerospace industry, requirements actually run the gamut. There are datacenters and lots of data repositories. Resource integration and interoperability are huge issues. Many of these same problems exist in mainstream companies.

With regard to The Aerospace Corporation, in particular, we are a non-profit, federally funded research and development center (FFRDC) for all space-related technologies. This means anything to do with satellites and their ground systems. Existing satellite ground systems are essentially grids, but were individually designed from the ground up and statically configured, with no particular distributed computing standards. There is tremendous momentum to make these systems commercial-off-the-shelf (COTS) through the use of service-oriented architectures to reduce acquisition and operation costs.  Hence, the adoption of service grids by the IBMs, Boeings and Lockheed-Martins of the world is a key goal.

Some people may feel that a non-profit is insensitive to market forces, but working at Aerospace Corporation may actually be an advantage when it comes to my work as president of OGF. Our corporate raison d’etre is to facilitate the maturation and adoption of useful technologies for space. When it comes to ground systems, this is not unlike the broad commercial marketplace. I have no other goal but to facilitate the adoption of the best technology as quickly as possible. This means bringing consensus and stability to the technical marketplace.

Gt: Obviously, the grid landscape has evolved quite a bit in the past couple of years — and a whole lot since the GGF’s inception. How important do you think it is for the OGF to stay aligned with changes in the industry?

LEE: Aligning with changes in the industry is a critical goal for OGF as an organization.  We’ve led some of these changes, such as the HPC Profile and the use of JSDL. We’ve also influenced work in the wider community. The GLUE information model, for instance, was developed for grid entities and we are now working with the DMTF to harmonize with their Common Information Model. We’ve also adopted technology where necessary and appropriate, such as using WS-Security in the HPC Profile. Another example of alignment with the broader landscape is the recasting of the Open Grid Service Infrastructure (OGSI) to use the emerging Web services specifications.

The fact is that all approaches to distributed computing require much the same fundamental capabilities, but different organizations in different market segments look at it in different ways. Harmonizing these efforts across organizations and getting a dominant practice in the marketplace is critical. I’m fond of saying it’s like getting different “tribes” that all use different nouns and verbs for essentially the same things to talk to one another.

 
Gt: What are your thoughts on pushing the commercial grid agenda?

LEE: Achieving commercially available grid components and services will enable entirely new areas and applications for research, industry, commerce, government and society. Only a few years ago, the Internet was an academic and scientific domain. Now, billions of people use it for everyday activities. We want and expect grids to produce similar benefits for both industry and research. From a research perspective, commercialization of grid technologies will enable low cost, off-the-shelf capabilities for scientific research and innovation — in much the same way as clusters did.

From an industry perspective, widely available grid products and services are critical to mainstream adoption. Grids can enable more automated interactions between companies, tighter integration of global operations and enhanced interoperability, all resulting in lower costs and greater competitive advantage. For an information society and economy, the possibilities are tremendous.

In the here-and-now, however, the commercial grid agenda is going to be a multi-faceted issue. Virtualization, service-oriented architectures and storage networks all speak to different commercial segments and are being developed somewhat independently to address specific needs in those different contexts.

  
Gt: In a recent interview, current president Mark Linesch discussed the relationship between grid computing, virtualization and SOA. What are your thoughts on the importance of these technologies as the OGF continues to evolve? Are there any other complementary or derivative technologies that you think ought to be on the organization’s radar in the coming years?

LEE: OGF has started a set of activities, which I fully intend to continue, whose goal is to harmonize the development of grids, service architectures and virtualization. Server virtualization is having a huge impact on how data centers address the service provisioning problem. It allows them to provision a service through a virtual server on a cluster that can be dynamically assigned on the fly. Server virtualization also offers important security capabilities by being able to isolate malicious processes.

How can we support this same kind of capability at scale in a distributed environment? Grids enable server virtualization to be pooled, aggregated and managed across sites. Grids enable policy-driven usage of these virtual resources, whereby loads, completion times and graceful failover can all be transparently managed. OGF's Grid and Virtualization Working Group, for instance, is looking at exactly this intersection between grids and virtualization.

Service-oriented architectures, or simply service architectures, have a natural resonance with grids. The find-bind-use concept is native to both. Again, this is an instance of where different approaches and implementations have to be harmonized. The notion of service objects and data objects has a strong similarity to WSRF, which originated in GGF and then was sent through the OASIS process to get buy-in from the larger Web services community. OGF must forge alliances with other organizations such as the Open SOA Consortium to bring consensus to the marketplace.

Another important development is Web 2.0, which offers an easy way to do rapid prototyping of distributed systems. Precisely because of this simplicity and ease of use, real communities of use will grow up around it. This is especially telling since many people complain that traditional grid tools and toolkits are too complex and cumbersome to install, use and maintain. I think there needs to be a continuum of tools — from easy-to-use, very lightweight Web 2.0 mash-up tools with simple security and discovery models, to more complete, traditional grid tools with robust security models, support for virtual organizations, attribute-based authorization, etc. There should be a growth path between the two extremes whereby additional capabilities can be added as needed.

It’s very interesting to note that half of all registered Web 2.0 URLs are Google Maps-related. That is to say, they are geospatial in nature. It’s probably no accident that Google is pushing Keyhole Markup Language (KML) through the Open Geospatial Consortium (OGC) standardization process. Equally interesting, and certainly no accident, is that OGF is starting a collaboration with OGC to integrate its standard geospatial tools with grid-based, distributed resource management. To start with, we want to back-end their Web Processing Service (WPS) with grid computing resources to enable large-scale processing. The WPS could also be used as a front-end to interface to multiple grid infrastructures, such as TeraGrid, NAREGI, EGEE and the United Kingdom’s National Grid Service. This would be a serious application driver for both grid and data interoperability issues. When integrated with their Catalogue Service for the Web and Web Map/Feature/Coverage services, we would enable a whole raft of geospatial applications on a scale not seen before, including things like satellite ground systems. The goal is not just to do science, but to greatly enhance things like operational hurricane forecasting, location-based services and anything to do with putting data on a map.

 
Gt: What is your expectation of how OGF membership demographics might shift over the next few years, especially in terms of presence of end-users, IT managers, CIOs, vendors, developers, academics, research, industry, etc?

LEE: We definitely want to see more direct involvement by industry while preserving our historical constituency of research and academia. It’s certainly true that grid computing grew out of HPC efforts at national labs and universities, but the technologies developed are so fundamental and widely applicable that we have to make every effort to achieve consensus in the marketplace of ideas. In a sense, the fact that there are so many related activities by different groups that are looking at different parts of the elephant is a good problem to have. Getting these different “tribes” to work together requires constant attention. To do this we must engage at every level: from CIOs who are making strategic corporate decisions, to technical project leaders, where the rubber meets the road.

OGF is also in a unique position to align both world-class research and technical expertise with industrial adoption. Ulf Dahlsten (director of Emerging Technologies and Infrastructures–Applications, Directorate-General for Information Society of the European Commission) has a briefing slide that illustrates the spectrum of technology development from research on one end to commercial products/services on the other, with an “Innovation No Man’s Land” in the middle. I firmly believe that OGF’s mission is to bridge that no man’s land. To that end, we need to ensure the right distribution in the OGF demographics.

Gt: Overall, what do you foresee as the top three issues and goals that will dominate the agenda during your term? What can members of the OGF, and the greater grid community, expect from the OGF during your tenure as president?

LEE: In general I will be pursuing several agenda items during my term as president, including:

  • Continuing to promote widespread adoption of OGF standards. This means bringing consensus and stability to the technical marketplace. This will require lots of interaction and leadership from the entire community — vendors, developers, researchers and, most of all, users. Current OGF specifications such as JSDL and OGSA-BES are good candidates here, but I’d certainly like to see OGSA-DAI and SAGA get serious attention.
  • Pursuing more direct industry collaboration and involvement. There are numerous places where industry is beginning to pick up grid tools (e.g., financial organizations, pharmaceuticals and storage networks). The overlap between traditional grids and developing marketplaces, like virtualization, means that we have to engage directly and bring value to the table. The OGF Data Center Reference Model that Paul Strong and Dave Snelling are helping to champion should be a guiding light and provide a good foundation in this area. I also intend to fully utilize the collaboration with OGC to engage the commercial geospatial community.
  • Reinforcing a solid commitment to our core constituency. The backbone of OGF is our active members that continue to come around the OGF “watercooler” to find like-minded groups and build consensus for common tools that are necessary to support their goals. This requires a spirit of collaboration and outreach that we want to extend to all segments and application domains. By doing so, we will be “managing the technological maturation process” from emerging technology markets to mainstream adoption for the benefit of everyone. 

 
Gt: Is there anything else you would like to say to our readers?

LEE: I’d certainly like to say that I am honored to be given this opportunity to serve OGF and the grid community. We have a great group of people, all dedicated to accelerating the adoption of grid technologies, but we still have a lot of work ahead of us. The success of OGF depends upon our “volunteer army,” and I want to encourage everyone to stay active and engaged.

I would also like to thank Mark Linesch for his tremendous help as I transition into my new role and for his excellent service to OGF for the last three years, which was a period of great change, challenge and opportunity for our community.

Lastly, I invite anyone who is interested in grid and the work of the OGF to come to our next event, OGF21, being held Oct. 15-19 in Seattle. OGF21 will feature an exceptional technical program, workshops on software solutions and scientific applications, and an enterprise track focused on grid use in IT datacenters. More information can be found on the OGF website at www.ogf.org.

Thank you.
