A Distributed Happy New Year

By Tom Gibbs, Contributing Author

February 12, 2007

This past December the good folks at Tabor Communications asked me to write a year-end summary of the state of the Grid with a look ahead into 2007. Unfortunately, I was swamped with business unrelated to the Grid and had to decline. As I came up for air I realized that, as with standards in computing, the good thing about annual calendars is that there are quite a few to choose from. So, I'll use the upcoming celebration of the lunar Chinese New Year as the milestone to ruminate on the recent past and near future of the Grid.

Most of us take the solar Gregorian calendar and fixed time zones with a baseline along the Greenwich Prime Meridian for granted, but the establishment of both was far from trivial. Given my Irish heritage I considered using the Celtic calendar, but the Chinese New Year is much more popular, and so darned convenient given the deadline, that as with most standards that succeed I chose to follow the path of least resistance.

It's also convenient that the subject of standard calendar and clock time aligns with my number one Grid highlight from 2006, where the potential for divergence or worse in the standards community was averted with the formation of a unified body in the form of the Open Grid Forum. The OGF, led by Mark Linesch of HP, unites a critical mass of large IT vendors along with the scientific community for the first time, with a focus that is clearly stated in its mission: to “accelerate Grid adoption to enable business value and scientific discovery by providing an open forum for Grid innovation and developing open standards for Grid software interoperability.” Now while I believe the emergence of the unified OGF was a watershed event, many of my colleagues were grousing before, during and after. Some very notable Grid luminaries and business leaders had gotten so frustrated with all the meandering that they had begun to wonder if things might work out in the absence of formal standards.

I understand the concern and also think de facto standards where the community votes by mass adoption are ok, but in some cases I believe you need a declarative standard and the formation of OGF gives the community the forum to make this happen. It's a big deal and I hope the community weighs in with the effort required to make it work in 2007. As I'll point out later some of the luster is coming off of the term Grid and there is still confusion on what Grid is, so the OGF has its work cut out for it, but I'll go out on a limb and predict big things from OGF this year.

Part of this prediction is unbridled optimism given the importance and challenge that come with global standards. If we return to the topic in the title, which is the multiple annual calendars and the related topic of time zones, the history illustrates how hard it is to devise a standard that a core group agrees to and then make it stick across a wider general population.

Global time and calendar standards also illustrate how important a unified standard is. Imagine global logistics with multiple calendars all on a different cycle and in the absence of standard global time zones. It's challenging enough to manage global competition even with standard dates and times. I'm convinced that we will all look back in 5 years or so and there will be key standards and tools that we'll all agree helped bring the vision of Grid to broad adoption.

For a quick history… the Gregorian calendar we now use as a global standard was decreed by the good Pope Gregory XIII in 1582 after a long and vigorous debate among the smartest minds in the Catholic universe. The principal theorists were Aloysius Lilius and Christopher Clavius who wrote volumes of nearly 1000 pages in an effort to defend their work. Think Ian Foster and Charlie Catlett who might have been driven by DARPA and the NSF to save Easter.

That was the motivation for all this effort. Easter was in danger. The discrepancy between the lunar cycles used to set the feast and the drifting solar Julian calendar had caused Easter to slip by four or more days, and had different factions across the Catholic community celebrating in different weeks. This might have been a problem under any circumstances, but Easter is the most important holiday in the liturgical calendar and comes at a time when pagans the world over celebrate the “rites of spring.”
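The arithmetic behind the drift the reformers were fighting is easy to sketch. The Python fragment below is my own illustration, not from the article; the tropical-year figure is an approximation. It compares the mean year length produced by the Julian and Gregorian leap rules:

```python
# The Julian rule inserts a leap day every 4 years, giving a mean year of
# 365.25 days, while the tropical year is closer to 365.2422 days. The
# Gregorian reform drops 3 leap days every 400 years (century years are
# common years unless divisible by 400).

TROPICAL_YEAR = 365.2422  # approximate mean tropical year, in days

def is_leap_julian(year: int) -> bool:
    return year % 4 == 0

def is_leap_gregorian(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def mean_year(rule, span: int = 400) -> float:
    """Average calendar-year length over one full 400-year leap cycle."""
    days = sum(366 if rule(y) else 365 for y in range(1, span + 1))
    return days / span

drift_per_century = (mean_year(is_leap_julian) - TROPICAL_YEAR) * 100
print(f"Julian mean year:    {mean_year(is_leap_julian):.4f} days")
print(f"Gregorian mean year: {mean_year(is_leap_gregorian):.4f} days")
print(f"Julian drift: ~{drift_per_century:.2f} days per century")
```

At roughly three-quarters of a day per century, a millennium and a quarter of Julian timekeeping accounts for the ten days Gregory ultimately struck from October 1582.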

The fragmentation across the Catholic community left things wide open for the pagans to capture the public imagination with a fixed observance of the rites of spring on the first full moon following the vernal equinox. For those inexperienced in these affairs, rent a copy of the movie “Eyes Wide Shut” or peruse the shorter but more accurate scene from the movie “The Da Vinci Code” where Sophie accidentally witnesses her grandfather partaking in the Hieros Gamos ritual, and you'll get an idea what the leaders of the church were up against. As Marvin Gaye might croon to pagans everywhere… “Let's Get it On!”

In the same way that I believe that Grid computing and communications are critical to the long term survival of information technology, so a common calendar that kept things straight with Easter was deemed critical to the leaders of the Catholic Church. Let history also show that no matter how well researched a standard is, and no matter how well aligned the inner circle of a community is to drive adoption — making a standard stick across a broad constituency is difficult at best!

In the case of the calendar we've come to know and use, only Spain, Portugal, Italy and Poland, along with Holland (which was the only non-Catholic adopter), signed on in 1582. The effort to get the rest of the world to agree required roughly 350 years. England waited 170 years, until that fateful Wednesday, September 2, 1752 was followed by Thursday, September 14, 1752. China was the last country to adopt, in 1929, but still numbered the months according to a modified era system until 1949.

Given the religious and cultural implications perhaps it's no surprise that gaining global agreement on a calendar whose initial purpose was to reconcile a common date for Easter would be a tough slog, but the same situation occurred with time zones, which would seem at first glance to be tied to much less emotion.

Standard time zones were first proposed by the Great Western Railway of Britain in 1840. The inner circle achieved general consensus in eight years, when all the railroad companies in Britain agreed. However, it took 40 years from the initial proposal until standard time was enacted into law in the U.K. Even after this occurred it was not uncommon for clock towers in some towns in the U.K. to sport two hour hands. Some cynics in England quip that this was the only time the British rail system ran on time.

The U.S. and Canada followed about the same path, where standard time was introduced by the railroads in the 1880s but not enacted into law until 1918. As simple as this might seem, many communities resisted. In Ohio and Michigan, for example, there were over two dozen local time standards, and Detroit didn't agree to the common standard time until 1905, five years after first voting it into law in 1900.
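The resistance makes a certain amount of sense once you do the arithmetic: mean solar time shifts one hour per 15 degrees of longitude, so a city near a zone boundary lives visibly out of step with its zone's clock. A small Python sketch follows; the Detroit longitude is an approximate figure I'm supplying for illustration:

```python
# Mean solar time runs one hour ahead for every 15 degrees of longitude
# traveled east, so a city's "sun time" rarely matches its standard zone.

def mean_solar_offset_hours(longitude_deg_east: float) -> float:
    """Local mean solar time offset from UTC, in hours (west is negative)."""
    return longitude_deg_east / 15.0

DETROIT_LON = -83.05       # approximate longitude of Detroit, degrees east
CENTRAL_STANDARD = -6.0    # Central Standard Time, hours from UTC

gap_minutes = (mean_solar_offset_hours(DETROIT_LON) - CENTRAL_STANDARD) * 60
print(f"Detroit sun time runs ~{gap_minutes:.0f} minutes ahead of Central")
```

The gap comes out to roughly half an hour, which is exactly the sort of everyday mismatch between the sun and the statute that made voters balk.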

What can be drawn from these examples is that as important as standard time zones and annual calendars are to global commerce, widespread adoption will be plagued by politics and plain stubbornness. The formation of the OGF was a huge step towards heading off the political problems that often ensue in IT when the large vendors are on different sides and in some cases disconnected from the technical and scientific community. Hopefully the forum that OGF continues to build will also help offset some of the other elements of human nature that prevent or delay consensus toward objectives that can have a big impact on economic progress.

So while the formation of OGF was specifically focused on the Grid community, the next big thing from 2006 was more generic — Tech got hot again. In fact, some people put the words euphoria and technology in the same sentence, and it had nothing to do with pornography.

The singular event of 2006 was undoubtedly Google's acquisition of YouTube for $1.65 billion USD. Beyond the fact that YouTube and its new parent Google both run on distributed computing architectures that some might call a Grid, there is no direct relation here, so let me explain.

Even though much of the software in the Grid community comes from Open Source efforts, ultimately the developers are engineers or scientists, and these folks need jobs. They also need to get an education. The late economist Milton Friedman wrote the book titled “There's No Such Thing as a Free Lunch,” and in this case engineers and scientists don't grow on trees. They are developed after years of hard study and practice, motivated in most cases by the promise of employment. Whether they work toward commercial or academic ends, they need capital, and that comes from investment. There is another byproduct of a hot tech market: geeks are cool again, and hence the number of young people who might pursue technical studies typically goes up.

When tech is soft, investment levels are lower and tend to come from large corporations and government agencies. This is a double whammy, since the overall amount of investment is low, and lower still when you factor in the amount of capital available for creative, out-of-the-box ideas.

There are still some very challenging problems that need to be solved before the full potential of Grid computing and communications can be realized, and we need more and more smart people joining the ranks of the Grid community who are encouraged to come up with creative new solutions to some old and difficult problems. When tech gets hot there is more available capital, and it's directed to areas that offer the opportunity for greater creativity. In 2006 tech got hot around the architectural concept known as Service-Oriented Architecture and the use of web services in the large enterprise, and it got white hot around the service delivery approach known as Web 2.0, which took off in the consumer market and then spilled over to gather momentum in small to medium business.

This had big implications for Grid, since distributed computing, communications and storage are the cornerstone of each of these hot trends, which span a very wide range of applications beyond the historic, numerically intensive scientific usage models that were the focus of many of the early adopters of the technology known as Grid. The Grid community has been interested in these newer usage models for some time, with a focus on distributed data (Data Grids) and collaboration among a wide range of distributed users. So, as with the essence of the Grid itself, the community that is spearheading the development is ahead of the usage models that are just now appearing commercially. What happened in '06 was that a very wide array of new usage models took off.

One of the hottest new applications or services that emerged in 2006 was the online virtual reality game “Second Life,” which is developed and delivered over the web by Linden Lab. One example of this trend entering the mainstream was that IBM CEO Sam Palmisano developed multiple avatars for the game: one is the buttoned-down Sam for business, the other is casual Sam. Interestingly, the users of Second Life refer to the server-based game space as “the Grid.” Some purists may argue that the users are incorrectly referring to the computer architecture as a Grid, but I'll get into that later. The fact that YouTube, Second Life, Google Earth, MySpace etc. all run on some level of distributed architecture concerned more with data distribution and delivery is a big issue for the evolution of the Grid.

The origins of the Grid were formed in the primordial soup of simulation-assisted scientific discovery, and the bulk of the focus was on numerical computation. As Grid evolves as a supporting infrastructure for business and consumers, its focus is shifting to internet-assisted data distribution, rich user interfaces and discovery. 2006 was a watershed year here and I expect more — in fact much more — to unfold in 2007, as the service offerings that originated with a small group of consumers and small businesses scale to the levels of simulation-assisted science, as the number of users grows dramatically and competition for their attention drives the richness of the interactivity. The issues that early Grid pioneers had with cost-effective throughput will manifest themselves in 2007 as cost-effective competitive differentiation and quality of service. As this occurs it will be critical for the Grid community to come down from the quasar, take a brief respite from the search for the Higgs boson, and embrace this new breed of user, who may wear ponytails too, though on the sides of their heads, and who are usually really cute and giggle a lot.

My last observation from 2006 was that Grid as a name fell out of favor just as the technology established a solid foundation in multiple industries. Unfortunately, marketing is an art that is difficult to apply to science. I often amuse myself wondering what the telescope would be called if Galileo had needed to raise an IPO to pay off Cardinal Bellarmine and get out from under house arrest.

The great Bard said that “a rose by any other name would smell as sweet.” Given the title of this article, and the fact that some scholars believe Sir Francis Bacon is the real author known as Shakespeare, I might offer the twist that “a pig by any other name would still taste great with eggs.” In this case — however cynical I may be about what people call distributed computing and communications architectures that allow seamless virtualization — it is important that the community address the issue that most of the buying public is confused by the term Grid and how it gets used in a sentence.

It's inarguable that the vendor community is distancing itself from the term Grid. In some cases the term has been airbrushed from marketing collateral like the photograph of a politician in the former Soviet Union who fell out of favor with the Kremlin. In others it has been wrapped in explanatory verbiage like virtualization and data center automation, or given a modifier, as in data Grid.

A couple of years ago the term Grid by itself was hot. The downside was that most of the heat was hype. The upside was that the hype drove investment in the general direction of, if not right on the money for, new Grid solutions. The investment paid off in 2006 as the number of robust Grid solutions deployed in real business applications went from a handful to almost commonplace in some industries. A deeper look at the industrial-strength solutions shows that the individuals responsible for the implementation either came from the inner circle of the Grid community or were very familiar with the technology.

My firm belief from anecdotal data is that the trouble occurred as the next wave of adopters started to come online. They didn't have the history with the term and found it confusing. I also believe from personal observation that the reason the next wave of adopters found the term problematic is that this wave wants to buy something tangible and likes the terminology to be literal. Grid is a metaphor for an abstract architectural concept and just doesn't work at that level.

Hence I don't see the term Grid making a rebound as a marketing term or slogan. I predict that at that level the term will move further from headline to byline and then be absorbed into the deeper description of the new solutions that offer distributed seamless virtualization.

In the end I think this will be very healthy for the community if they act early and often to position themselves and the technology as it was originally conceived. It should be an umbrella term that embraces all of the underlying technology required to deliver the grand vision of distributed data, computing and communications.

In summary, 2006 was a fantastic year for Grid computing and communications. There is a unified standards body that includes the core inner circle of scientific research and the critical mass of IT vendors who are building products for the wide set of users that cross scientific, consumer and small to large business. Tech got hot again and it got hot in an area that is demanding the technology and expertise of individuals and vendors from the Grid community.

While we probably won't see IPOs for Grid products or companies in 2007, we did see IPOs and acquisitions for companies that use Grid solutions, and I firmly believe that we will see continued growth in solutions that millions of people use every day that rely on Grid solutions although they may never call them that.

Now some readers may have been confused by my reference to “The Bard” earlier on, and thought I meant Bob Dylan not William (aka Sir Francis Goes Great with Eggs) Shakespeare. As I close I'm thinking that “All Along the Watchtower” might be fitting as the leaders of the Grid community work to drive clear standards. But I'll focus instead on the IT marketing folks chartered with trying to figure out how to position new solutions in the wildly competitive global marketplace in 2007 — perhaps they can find a path in these lyrics…

Well Mack the Finger said to Louie the King
I got forty red white and blue shoe strings
And a thousand telephones that don't ring
Do you know where I can get rid of these things
And Louie the King said let me think for a minute son
And he said yes I think it can be easily done
Just take everything down to Highway 61


About Tom Gibbs

Tom Gibbs is Managing Partner at Vx Ventures, a global consulting and investment partnership that focuses on the application of new IT architectures such as Grid computing, Service-Oriented Architecture, RFID and sensor networks to help communities and companies accelerate economic growth and improve the social well-being of their employees and citizens. Prior to Vx Ventures, Tom was the director of worldwide strategy and planning in the solutions market development group at the Intel Corporation, where he was responsible for developing global industry marketing strategies and building cooperative market development and marketing campaigns with Intel's partners worldwide. Tom joined Intel in 1991 in the Scalable Systems division as a sales manager for their family of massively parallel computers, where he won numerous awards for sales achievement and research and development programs. He then worked in Intel's Enterprise Server group, where he was responsible for business growth with all OEM customers with products that scaled greater than 4-way. Finally, just prior to joining the Solutions Market Development group, he was in the Workstation Products group, responsible for all board and system product development and sales. Prior to Intel, Gibbs held technical marketing management and industry sales management positions with FPS Computing, and worked on engineering design and development for airborne radar systems at Hughes Aircraft Company. He is a graduate in electrical engineering from California Polytechnic University in San Luis Obispo and was a member of the graduate fellowship program at Hughes Aircraft Company, where his areas of study included non-linear control systems, artificial intelligence and stochastic processes. He also previously served on the President's Information Technology Advisory Council for open source computing.
