A Distributed Happy New Year

By Tom Gibbs, Contributing Author

February 12, 2007

This past December the good folks at Tabor Communications asked me to write a year-end summary of the state of the Grid with a look ahead into 2007. Unfortunately, I was swamped with business unrelated to the Grid and had to decline. As I came up for air I realized that, as with standards in computing, the good thing about annual calendars is that there are quite a few to choose from. So, I'll use the upcoming celebration of the Chinese lunar New Year as the milestone to ruminate on the recent past and near future of the Grid.

Most of us take the solar Gregorian calendar and fixed time zones with a baseline along the Greenwich Prime Meridian for granted, but the establishment of both was far from trivial. Given my Irish heritage I considered using the Celtic calendar, but the Chinese New Year is much more popular, and so darned convenient given the deadline, that, as with most standards that succeed, I chose to follow the path of least resistance.

It's also convenient that the subject of standard calendars and clock time aligns with my number one Grid highlight from 2006, when the potential for divergence, or worse, in the standards community was averted with the formation of a unified body: the Open Grid Forum. The OGF, led by Mark Linesch of HP, unites a critical mass of large IT vendors with the scientific community for the first time under a clear focus, stated plainly in its mission: to “accelerate Grid adoption to enable business value and scientific discovery by providing an open forum for Grid innovation and developing open standards for Grid software interoperability.” Now, while I believe the emergence of the unified OGF was a watershed event, many of my colleagues were grousing before, during and after. Some very notable Grid luminaries and business leaders had gotten so frustrated with all the meandering that they had begun to wonder if things might work out in the absence of formal standards.

I understand the concern, and I also think de facto standards, where the community votes by mass adoption, are fine. But in some cases you need a declarative standard, and the formation of the OGF gives the community the forum to make that happen. It's a big deal, and I hope the community weighs in with the effort required to make it work in 2007. As I'll point out later, some of the luster is coming off the term Grid and there is still confusion about what Grid is, so the OGF has its work cut out for it. Still, I'll go out on a limb and predict big things from the OGF this year.

Part of this prediction is unbridled optimism, given the importance and challenge that come with global standards. If we return to the topic in the title, multiple annual calendars and the related matter of time zones, the history illustrates how hard it is to devise a standard that a core group agrees to, and then to make it stick across a wider general population.

Global time and calendar standards also illustrate how important a unified standard is. Imagine global logistics with multiple calendars, each on a different cycle, and no standard global time zones. It's challenging enough to manage global competition even with standard dates and times. I'm convinced that when we look back in five years or so, there will be key standards and tools that we'll all agree helped bring the vision of Grid to broad adoption.

For a quick history: the Gregorian calendar we now use as a global standard was decreed by the good Pope Gregory XIII in 1582 after a long and vigorous debate among the smartest minds in the Catholic universe. The principal theorists were Aloysius Lilius and Christopher Clavius, who wrote volumes of nearly 1,000 pages in an effort to defend their work. Think Ian Foster and Charlie Catlett, driven by DARPA and the NSF to save Easter.

That was the motivation for all this effort: Easter was in danger. The Julian calendar's drift against the solar year, compounded by errors in the lunar cycle used to compute Easter, had caused the feast to drift by four or more days and had different factions across the Catholic community celebrating in different weeks. This might have been a problem under any circumstances, but Easter is the most important holiday in the liturgical calendar, and it comes at a time when pagans the world over celebrate the “rites of spring.”
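To make the drift concrete (my own back-of-the-envelope illustration, not part of the historical debate): the Julian rule of a leap year every fourth year overshoots the tropical year of roughly 365.2422 days by about 11 minutes a year, while the Gregorian century-year exception nearly closes the gap. A quick sketch in Python:

```python
# Compare the average year length produced by the Julian and Gregorian
# leap-year rules against the tropical year (~365.2422 days).

def is_leap_julian(year):
    # Julian rule: every 4th year is a leap year.
    return year % 4 == 0

def is_leap_gregorian(year):
    # Gregorian rule: every 4th year, except century years
    # not divisible by 400 (1700, 1800, 1900 are common years).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def average_year_length(is_leap, cycle):
    # Average calendar-year length in days over one full leap cycle.
    days = sum(366 if is_leap(y) else 365 for y in range(1, cycle + 1))
    return days / cycle

julian_avg = average_year_length(is_leap_julian, 4)          # 365.25
gregorian_avg = average_year_length(is_leap_gregorian, 400)  # 365.2425

# Drift against the tropical year, in days per century:
julian_drift = (julian_avg - 365.2422) * 100       # ~0.78 days/century
gregorian_drift = (gregorian_avg - 365.2422) * 100 # ~0.03 days/century
```

At roughly three-quarters of a day per century, the Julian calendar had slipped about ten days against the seasons by 1582, which is why Gregory's reform deleted ten days outright and then changed the leap rule going forward.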

The fragmentation across the Catholic community left things wide open for the pagans to capture the public imagination with a fixed observance of the rites of spring on the first full moon following the vernal equinox. For those inexperienced in these affairs, rent a copy of the movie “Eyes Wide Shut,” or peruse the shorter but more accurate scene from “The Da Vinci Code” where Sophie accidentally witnesses her grandfather partaking in the Hieros Gamos ritual, and you'll get an idea what the leaders of the church were up against. As Marvin Gaye might croon to pagans everywhere: “Let's Get It On!”

In the same way that I believe Grid computing and communications are critical to the long-term survival of information technology, a common calendar that kept things straight with Easter was deemed critical by the leaders of the Catholic Church. Let history also show that no matter how well researched a standard is, and no matter how well aligned the inner circle of a community is to drive adoption, making a standard stick across a broad constituency is difficult at best!

In the case of the calendar we've come to know and use, only Spain, Portugal, Italy and Poland, along with Holland (the only non-Catholic adopter), adopted the calendar in 1582. Getting the rest of the world to agree required roughly 350 years. England waited 170 years, until that fateful Wednesday, September 2, 1752 was followed by Thursday, September 14, 1752. China was the last country to adopt, in 1929, but still numbered the years according to a modified era system until 1949.

Given the religious and cultural implications perhaps it's no surprise that gaining global agreement on a calendar whose initial purpose was to reconcile a common date for Easter would be a tough slog, but the same situation occurred with time zones, which would seem at first glance to be tied to much less emotion.

Standard time zones were first proposed by the Great Western Railway of Britain in 1840. The inner circle achieved general consensus within eight years, when all the railroad companies in Britain agreed. However, it took 40 years from the initial proposal until standard time was enacted into law in the U.K. Even after this occurred, it was not uncommon for clock towers in some British towns to sport two minute hands, one for local time and one for railway time. Some cynics in England quip that this was the only time the British rail system ran on time.

The U.S. and Canada followed roughly the same path: standard time was introduced by the railroads in the 1880s but not enacted into law until 1918. As simple as this might seem, many communities resisted. In Ohio and Michigan, for example, there were over two dozen local time standards, and Detroit didn't settle on standard time until 1905, after first voting it into law in 1900.

What can be drawn from these examples is that as important as standard time zones and annual calendars are to global commerce, widespread adoption will be plagued by politics and plain stubbornness. The formation of the OGF was a huge step towards heading off the political problems that often ensue in IT when the large vendors are on different sides and in some cases disconnected from the technical and scientific community. Hopefully the forum that OGF continues to build will also help offset some of the other elements of human nature that prevent or delay consensus toward objectives that can have a big impact on economic progress.

So while the formation of the OGF was specifically focused on the Grid community, the next big thing from 2006 was more generic: tech got hot again. In fact, people started using the words “euphoria” and “technology” in the same sentence in contexts that had nothing to do with pornography.

The singular event of 2006 was undoubtedly Google's acquisition of YouTube for $1.65 billion. Beyond the fact that YouTube and its new parent Google both use distributed computing architectures that some might call a Grid, there is no direct relation here, so let me explain.

Even though much of the software in the Grid community comes from open source efforts, ultimately the developers are engineers or scientists, and these folks need jobs. They also need to get an education. The late economist Milton Friedman wrote a book titled “There's No Such Thing as a Free Lunch,” and in this case the lesson is that engineers and scientists don't grow on trees. They are developed through years of hard study and practice, motivated in most cases by the promise of employment. Whether they work toward commercial or academic ends, they need capital, and that comes from investment. There is another byproduct of a hot tech market: geeks are cool again, and hence the number of young people who might pursue technical studies typically goes up.

When tech is soft, investment levels are lower and tend to come from large corporations and government agencies. This is doubly bad: the overall amount of investment is low, and the portion available for creative, out-of-the-box ideas is lower still.

There are still some very challenging problems to solve before the full potential of Grid computing and communications can be realized, and we need more and more smart people joining the ranks of the Grid community who are encouraged to come up with creative new solutions to some old and difficult problems. When tech gets hot there is more available capital, and it's directed toward areas that offer the opportunity for greater creativity. In 2006, tech got hot around the architectural concept known as Service-Oriented Architecture and the use of web services in the large enterprise, and it got white hot around the service delivery approach known as Web 2.0, which took off in the consumer market and then spilled over to gather momentum in small to medium business.

This had big implications for Grid, since distributed computing, communications and storage are the cornerstone of each of these hot trends, which address a far wider range of applications than the historic, compute-intensive scientific usage models that drew many of the early adopters of the technology known as Grid. The Grid community has been interested in these newer usage models for some time, with a focus on distributed data (Data Grids) and collaboration among a wide range of distributed users. So, as with the essence of the Grid itself, the community spearheading the development is ahead of the usage models that are just now appearing commercially. What happened in '06 was that a very wide array of new usage models took off.

One of the hottest new applications to emerge in 2006 was the online virtual world “Second Life,” developed and delivered over the web by Linden Lab. One sign of this trend entering the mainstream was that IBM CEO Sam Palmisano created multiple avatars for it: one a buttoned-down Sam for business, the other a casual Sam. Interestingly, the users of Second Life refer to the server-based game space as “the Grid.” Some purists may argue that the users are incorrectly referring to the computer architecture as a Grid, but I'll get into that later. The fact that YouTube, Second Life, Google Earth, MySpace, etc. all run on some level of distributed architecture concerned more with data distribution and delivery is a big issue for the evolution of the Grid.

The origins of the Grid formed in the primordial soup of simulation-assisted scientific discovery, and the bulk of the focus was on numerical computation. As Grid evolves into a supporting infrastructure for business and consumers, its focus is shifting to internet-assisted data distribution, rich user interfaces and discovery. 2006 was a watershed year here, and I expect more, in fact much more, to unfold in 2007, as the service offerings that originated with a small group of consumers and small businesses scale to the levels of simulation-assisted science, the number of users grows dramatically, and competition for their attention drives the richness of the interactivity. The issues that early Grid pioneers had with cost-effective throughput will manifest themselves in 2007 as cost-effective competitive differentiation and quality of service. As this occurs it will be critical for the Grid community to come down from the quasar, take a brief respite from the search for the Higgs boson, and embrace this new breed of user, who may wear ponytails too, but on the sides of their heads, and who are usually really cute and giggle a lot.

My last observation from 2006 was that Grid as a name fell out of favor just as the technology established a solid foundation in multiple industries. Unfortunately, marketing is an art that is difficult to apply to science. I often amuse myself wondering what the telescope would be called if Galileo had needed to raise an IPO to pay off Cardinal Bellarmine and get out from under house arrest.

The great bard wrote that “a rose by any other name would smell as sweet.” Given the title of this article, and the fact that some scholars believe Sir Francis Bacon is the real author we know as Shakespeare, I might offer the twist that “a pig by any other name would still taste great with eggs.” In this case, however cynical I may be about what people call distributed computing and communications architectures that allow seamless virtualization, it is important that the community address the issue: most of the buying public is confused by the term Grid and how it gets used in a sentence.

It's inarguable that the vendor community is distancing itself from the term Grid. In some cases the term has been airbrushed from marketing collateral like the photograph of a Soviet politician who fell out of favor with the Kremlin. In other cases it has been wrapped in explanatory verbiage like virtualization and data center automation, or given a modifier, as in “data Grid.”

A couple of years ago the term Grid by itself was hot. The downside was that most of the heat was hype. The upside was that the hype drove investment in the general direction, if not right on the money, of new Grid solutions. That investment paid off in 2006, as the number of robust Grid solutions deployed in real business applications went from a handful to almost commonplace in some industries. A deeper look at the industrial-strength solutions shows that the individuals responsible for the implementations either came from the inner circle of the Grid community or were very familiar with the technology.

My firm belief, from anecdotal data, is that the trouble occurred as the next wave of adopters started to come online. They didn't have the history with the term and found it confusing. I also believe, from personal observation, that this wave found the term problematic because it wants to buy something tangible and likes its terminology literal. Grid is a metaphor for an abstract architectural concept, and it just doesn't work at that level.

Hence I don't see the term Grid making a rebound as a marketing term or slogan. I predict that the term will move further from headline to byline, and then be absorbed into the deeper description of new solutions that offer distributed, seamless virtualization.

In the end I think this will be very healthy for the community, if it acts early and often to position itself and the technology as originally conceived: an umbrella term that embraces all of the underlying technology required to deliver the grand vision of distributed data, computing and communications.

In summary, 2006 was a fantastic year for Grid computing and communications. There is a unified standards body that includes the core inner circle of scientific research and the critical mass of IT vendors who are building products for the wide set of users that cross scientific, consumer and small to large business. Tech got hot again and it got hot in an area that is demanding the technology and expertise of individuals and vendors from the Grid community.

While we probably won't see IPOs for Grid products or companies in 2007, we did see IPOs and acquisitions for companies that use Grid solutions, and I firmly believe we will see continued growth in solutions that millions of people use every day and that rely on Grid technology, although they may never call it that.

Now, some readers may have been confused by my earlier reference to “the bard,” thinking I meant Bob Dylan rather than William (aka Sir Francis Goes Great with Eggs) Shakespeare. As I close, I'm thinking that “All Along the Watchtower” might be fitting as the leaders of the Grid community work to drive clear standards. But I'll focus instead on the IT marketing folks chartered with figuring out how to position new solutions in the wildly competitive global marketplace in 2007. Perhaps they can find a path in these lyrics…

Well Mack the Finger said to Louie the King
I got forty red white and blue shoe strings
And a thousand telephones that don't ring
Do you know where I can get rid of these things
And Louie the King said let me think for a minute son
And he said yes I think it can be easily done
Just take everything down to Highway 61


About Tom Gibbs

Tom Gibbs is Managing Partner at Vx Ventures, a global consulting and investment partnership that focuses on the application of new IT architectures such as Grid computing, Service-Oriented Architecture, RFID and sensor networks to help communities and companies accelerate economic growth and improve the social well-being of their employees and citizens. Prior to Vx Ventures, Tom was the director of worldwide strategy and planning in the solutions market development group at the Intel Corporation, where he was responsible for developing global industry marketing strategies, building cooperative market development, and marketing campaigns with Intel's partners worldwide. Tom joined Intel in 1991 in the Scalable Systems division as a sales manager for its family of massively parallel computers, where he won numerous awards for sales achievement and research and development programs. He then worked in Intel's Enterprise Server group, where he was responsible for business growth with all OEM customers with products that scaled greater than 4-way. Finally, just prior to joining the Solutions Market Development group, he was in the Workstation Products group, responsible for all board and system product development and sales. Prior to Intel, Gibbs held technical marketing management and industry sales management positions with FPS Computing, and did engineering design and development for airborne radar systems at Hughes Aircraft Company. He is a graduate in electrical engineering from California Polytechnic University in San Luis Obispo and was a member of the graduate fellowship program at Hughes Aircraft Company, where his areas of study included non-linear control systems, artificial intelligence and stochastic processes. He also previously served on the President's Information Technology Advisory Council for open source computing.
