Meet Sun’s Director of Grid Computing Engineering

By Nicole Hemsoth

December 11, 2006

Fritz Ferstl is the director of Grid Computing Engineering at Sun Microsystems. He manages a multi-national team of development engineers, project managers and product managers located in Prague, Czech Republic; Regensburg, Germany; and Menlo Park, California. Ferstl is recognized as a world expert in productizing Grid solutions for robust enterprise and industrial services. Recently, GRIDtoday had the opportunity to ask Ferstl about Grid Engine, the Grid computing software product that he architected.

GRIDtoday: There are still many notions of what Grid computing is. How do you define Grid?

FRITZ FERSTL: A Grid to me is the combination of distributed resources with a corresponding management infrastructure hosting at least one type of service or workload. This is a very generic definition, but we see every day that the same Grid technologies are utilized in extremely diverse application scenarios. So I see no reason to limit the scope of Grid computing artificially.

I also would not use the crossing of organizational or geographical boundaries as an identifier for Grids. Whether a Grid needs to orchestrate resources across such boundaries or not is largely a question of the application scenario. Grids that are local to an organization today may develop a need to span organizations or geographies over time. In both cases, the same set of basic technologies is being used. Why should we refer to those cases differently?

Gt: What are the most important challenges to universal Grid adoption? Standards? Interoperability?

FERSTL: Crawling comes before walking. I think this simple fact has been neglected a bit in the early days of Grid computing. And to some extent this is still the case. Standards have been driven forward without making sure they really address the most pressing needs of Grid adopters in commerce and research. There was also a lack of stability, both in the standard efforts themselves and in the infrastructures delivering them. The dangers are frustration of early adopters and bifurcation of efforts. Both effects have been observable very clearly.

I actually would argue that there is no shortage of standards and interoperability at all. And if there is a shortage in some specific areas, growing demand will highlight those cases and market forces will make sure there is resolution.

In my opinion, the most pressing need is execution. We need to fulfill the promises of Grid computing. We need to provide dependable solutions. Components that need to work in concert for such solutions need to be open, and we have to integrate them into reliable infrastructure. If a standard is required in such a context, then let's identify and drive it, but let's stay focused on the problem.

Grid computing has become mainstream and the hype days are over. Maybe that's less exciting for some people. But for others, like me, there's nothing more exciting than seeing next-generation airplanes, cars and chips being designed, spectacular advances in pharmaceutical research and biotechnology being made, or completely new approaches in finance, commerce, energy and telecommunications being developed using Grid technology!


Gt: How is Sun facilitating the adoption of Grid?

FERSTL: We are building industry-leading, dependable and open products, as well as solutions, in the Grid space and in adjacent technology areas. Examples are OpenSolaris and Solaris, OpenSparc and our Sparc- and x64-based servers, OpenJava and Java, Grid Engine and Sun N1 Grid Engine, Identity Management and more. We adhere to and drive applicable standards, such as those around Web services, identity management or DRMAA.


Gt: Can you give us some background on the Sun N1 Grid Engine and Grid Engine? What do you think is unique about these offerings?

FERSTL: Sun N1 Grid Engine is an industry-leading workload management solution with unmatched functionality and scalability. Full 24x7 support for it is available from Sun on all major platforms, including Solaris on Sparc and x64, various flavors of Linux, Microsoft Windows, Mac OS X, IBM AIX, HP-UX and SGI IRIX.

But what differentiates it most from similar products is its openness. It's available for free unlimited trial from the Sun web site. Moreover, it is developed in the Grid Engine open source project (http://gridengine.sunsource.net/) under a flexible and open license. This openness has led to huge adoption, with many thousands of sites in basically all market areas.

We are further emphasizing this openness by adding new, as well as previously proprietary, technology to the Grid Engine open source project by mid-December. The new technology is called Grid Engine Service Domain Management. It is an entirely new paradigm that provides policy- and demand-based re-allocation of arbitrary resources across service domains. Service domains are totally autonomous Grids that are controlled by a workload management facility, such as Grid Engine, or by other service infrastructures like application servers or web servers. The Grid Engine Service Domain Manager allows control over the resource allocation to each of these services in an automated fashion while preserving their full autonomy.
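The re-allocation idea can be illustrated with a small sketch. Everything here is hypothetical — the dictionary layout, the `min_hosts` guarantee and the policy itself are invented for illustration and are not the actual Service Domain Manager API — but it captures the pattern: each autonomous domain reports its demand, and hosts move from under-loaded domains to over-loaded ones while every domain keeps a guaranteed minimum.

```python
# Toy model of policy- and demand-based resource re-allocation across
# service domains. Names and the policy are illustrative only.

def reallocate(domains, min_hosts=1):
    """domains maps a domain name to {"hosts": int, "demand": int}.

    Phase 1: domains give up hosts that exceed both their demand and
    the guaranteed minimum, into a shared pool.
    Phase 2: pooled hosts are granted to domains whose demand exceeds
    their current allocation, neediest first.
    """
    pool = 0
    for d in domains.values():
        surplus = d["hosts"] - max(d["demand"], min_hosts)
        if surplus > 0:
            d["hosts"] -= surplus
            pool += surplus

    # Neediest domains (largest shortfall) are served first.
    for d in sorted(domains.values(), key=lambda d: d["hosts"] - d["demand"]):
        grant = min(d["demand"] - d["hosts"], pool)
        if grant > 0:
            d["hosts"] += grant
            pool -= grant
    return domains, pool


domains = {
    "batch": {"hosts": 8, "demand": 2},  # workload manager domain, idle capacity
    "web":   {"hosts": 2, "demand": 6},  # web server domain, overloaded
    "app":   {"hosts": 2, "demand": 2},  # application server domain, balanced
}
result, leftover = reallocate(domains)
# batch frees 6 hosts; web is granted 4; 2 hosts remain pooled.
```

The point of the sketch is that each domain stays autonomous: the allocator only moves hosts between domains, never reaching inside to manage their workloads.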

The second addition we are making to the Grid Engine open source project is the Grid Engine Accounting and Reporting Console. It was previously a closed-source part of the Sun N1 Grid Engine product and provides web-based accounting, reporting and diagnostics. The Accounting and Reporting Console stores accounting data in a standard SQL database and thus features an open interface for integration.
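Because the accounting data lives in a standard SQL database, any SQL client can build reports against it. The sketch below uses an in-memory SQLite database and an invented table layout (the real Accounting and Reporting Console schema differs) to show the style of per-user usage query such an open interface makes possible.

```python
import sqlite3

# Illustrative only: a made-up job-accounting table, not the actual
# Grid Engine reporting schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE job_accounting (
    job_id INTEGER, owner TEXT, queue TEXT,
    wallclock_secs INTEGER, cpu_secs REAL)""")
conn.executemany(
    "INSERT INTO job_accounting VALUES (?, ?, ?, ?, ?)",
    [(1, "alice", "all.q",  3600, 3400.0),
     (2, "bob",   "all.q",  1800, 1750.5),
     (3, "alice", "fast.q",  600,  590.0)])

# Per-user CPU consumption report -- the kind of query a reporting
# console or any third-party tool could run against the database.
rows = conn.execute("""SELECT owner, SUM(cpu_secs) AS total_cpu
                       FROM job_accounting
                       GROUP BY owner
                       ORDER BY total_cpu DESC""").fetchall()
for owner, total in rows:
    print(f"{owner}: {total:.1f} CPU-seconds")
```

Any tool that speaks SQL — a spreadsheet, a billing system, a custom dashboard — can consume the same data, which is the integration point the open interface provides.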


Gt: What are Sun's plans for Grid computing in 2007? What are you most excited about?

FERSTL: We'll be further driving adoption of our technology through our open source efforts. On the Grid Engine side, the Grid Engine Service Domain Management will provide a new and exciting platform for integration and contributions, and it will open up completely new Grid application opportunities.

One area I'm particularly keen to see evolve is the combination of our Service Domain Management with virtualization technologies at all levels, be it server, network or storage virtualization. Virtualization turns application frameworks and infrastructure components into commodity appliances. The Grid Engine Service Domain Management will make it possible to have as many of those appliances as a service needs, and to have them equipped with the appropriate physical resources.


Gt: What is your overall sense of the popularity of utility computing? Do you see its role expanding in the next few years?

FERSTL: The adoption of utility computing is largely dependent upon trust, security and legislation in and around utility grids. It is not so much a technical issue. Given enough legal freedom to operate and utilize utility grids, plus sufficient trust and security, there is no doubt in my mind that utility computing will be a thriving business.

But even if there are issues which restrict the applicability of public utility grids, I'm convinced that the operational models and the corresponding technologies required for utility grids will become an important part of the next generation Grid architectures.


Gt: Would you like to make any additional comments?

FERSTL: I've been in what's now the Grid market for over 13 years. Some people ask me whether it isn't getting boring. Quite the contrary is true! I've never seen as much potential in this space as I do today. And I'm talking about very tangible business potential. And even then, I still feel like we are just at the beginning! How could I get bored? I'm excited to be part of this movement, and I am proud to be with a team and a company that makes a difference.
