Calamities, Contingencies and the Cloud

By Nicole Hemsoth

March 14, 2011

As the real and figurative dust begins to settle in Japan following the massive earthquake and tsunami, the grim evaluation of damage is just beginning in terms of life, property and, increasingly, business.

Today Japanese markets reopened and, to some extent, so did some of the country's largest companies. Honda, Sony and others were forced to shut down for an extended period, but otherwise Japan has been pushing forward with a sense of sad defiance in the face of the mounting tragedies: human, environmental, structural and otherwise.

The assessments extend far beyond Japan's borders, at least on the human front, as millions look to cloud-based platforms to share and receive important news and information from a broad spectrum of worldwide sources.

As Dr. Jose Luis Vazquez-Poletti discussed this morning, “following the first hints of news about the tragedy in Japan, people around the world turned to the Internet to find different formats for information—not just mass media coverage, but also firsthand impressions left on personal websites, blogs and social media outlets…a combination of social networks and the principles of cloud computing became the primary source for information gathering and sharing.”

Indeed, the convergence of cloud computing with an incredible breadth of tools for massive, real-time communication and collaboration shows the power of ICT developments like cloud-based services to aid during times of national emergency.

This communications side of the cloud story is striking in its scope: families and agencies sharing updates in near real-time, and distributed coordination of search and rescue operations across any number of hosted platforms. However, another side of cloud computing emerges during major crises.

Reliance on clouds as the main artery for communications, and even business continuity, following mobile phone and related disruptions is advantageous. But what if those networks or data storehouses are obliterated, or at best temporarily knocked out once backup power is exhausted?

Just as with many other critical elements of infrastructure, a few of Japan's datacenters have been affected by the tragedy. The failures, however, appear to stem not from direct damage to structures but from rolling blackouts and extended power outages. While these are not as widespread as one might imagine given the scope and magnitude of the damage, they are nonetheless causing problems for those who rely on cloud-based services in the country.

ZDNet Japan has been maintaining an updated list of affected datacenters with short descriptions of their current challenges, showing that some datacenters are faring better than others. Overall, despite some serious breakdowns in ICT infrastructure, the country's clouds have been protected by a variety of power and data backup methods.

According to reports, among the hardest hit in the data market was NTT Communications, one of Japan's largest providers of data and communication services. On Friday the company lost its IP-VPN connection and was closely monitoring the exterior of the building housing one of its datacenters. In a statement issued on Friday the company noted that “due to earthquakes in the Tohoku region NTT has failed in some of our services.” NTT apologized to its customers but said that backup power supplies for its other datacenters have extended capabilities.

Announcements from the Japanese Ministry of Internal Affairs have emerged about severed communication networks, including KDDI’s undersea cables.

Despite these and other major ICT infrastructure failures, there are a number of companies reassuring customers that even in the face of power loss their data is still safe.

Earlier this month Amazon Web Services announced the availability of its cloud computing services in the Tokyo area with the launch of a new datacenter. While the exact location of the data storehouse was withheld, in a statement about its new Japanese presence, one of the Amazon spokespeople behind the move noted that “developers in Japan told me that latency and in-country data storage are of great importance to them.”

It is quite likely that, based on these specific concerns and the fact that they were highlighted in a relatively sparse release, the datacenter is located somewhere in the heart of Tokyo, which suffered a great deal of damage although not as much as other coastal cities touched by the massive tsunami.

According to Amazon, however, the datacenter has emerged unscathed and, for all intents and purposes, it's business as usual, at least in terms of its cloud offering in the region. Furthermore, as one might imagine, AWS has some exhaustive backup and recovery plans, including data stores off-site and off-continent.
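Amazon has not detailed those plans, but the principle is easy to illustrate: critical data written to a Tokyo-region store can be copied to a bucket on another continent so that a regional power crisis cannot take out the only copy. Below is a minimal sketch of such a cross-region copy using Amazon's S3 service and the boto3 SDK; the bucket names are hypothetical, and a real backup plan would layer versioning, scheduling and integrity checks on top.

```python
# Minimal sketch: copy every object in a Tokyo-region S3 bucket to a
# bucket in a distant region. Bucket names below are hypothetical.
import boto3

SOURCE_BUCKET = "example-backups-tokyo"  # hypothetical bucket in ap-northeast-1 (Tokyo)
DEST_BUCKET = "example-backups-us"       # hypothetical bucket in us-west-1 (California)

s3_tokyo = boto3.client("s3", region_name="ap-northeast-1")
s3_us = boto3.client("s3", region_name="us-west-1")

# Walk the Tokyo bucket page by page and replicate each object off-continent.
paginator = s3_tokyo.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE_BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        s3_us.copy_object(
            Bucket=DEST_BUCKET,
            Key=key,
            CopySource={"Bucket": SOURCE_BUCKET, "Key": key},
        )
        print(f"copied {key} -> {DEST_BUCKET}")
```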

On its status page, which shows real-time outage or interruption events by region, Amazon's services all seem to have the green light. However, the company notes that while it does not believe there will be an interruption in service, it remains a possibility. As the company's message to Asia-Pacific AWS users states:

“There are planned Tokyo Electric outages scheduled over the next few weeks, starting Monday morning (Japan time). We have been re-validating our back-up power capability so that customers have the least interruption possible.”
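For administrators who wanted to track such notices automatically during the rolling blackouts, the status dashboard also exposes per-service RSS feeds. The sketch below polls one such feed and prints each posted event; the feed URL pattern used here (EC2 in the Tokyo region) is an assumption based on the dashboard's published per-service feeds, not a documented guarantee.

```python
# Minimal sketch: read a per-region AWS status RSS feed and print events.
# The feed URL pattern is an assumption, not a documented guarantee.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://status.aws.amazon.com/rss/ec2-ap-northeast-1.rss"

def status_events(url=FEED_URL):
    """Fetch the RSS feed and yield (published, title) for each item."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    for item in tree.iter("item"):
        yield item.findtext("pubDate", default=""), item.findtext("title", default="")

if __name__ == "__main__":
    for published, title in status_events():
        print(f"{published}: {title}")
```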

A number of U.S.-based companies are jumping into the fray to offer assistance to businesses, non-profits and government agencies via cloud-based software. For instance, yesterday IBM Japan announced that it would be providing free LotusLive services until the end of July to ensure the necessary “means of information sharing and email targeted at local governments and nonprofit organizations for supporting browser-based activities.”

Japan’s leading internet provider IIJ has stated that it is providing free access to cloud-based resources from a rapidly deployed server setup in the Kansai area, which it claims will be unaffected by power outages and rolling blackouts. Although the translation is approximate, the company notes that “traffic information and safety confirmation as well as railway operation are supported on this infrastructure for delivering information as quickly as needed; IIJ is doing all it can to support various server engineers.”

Microsoft had an office in Sendai, one of the worst-affected areas. In addition to offering words of concern and condolences, the company announced that it would be providing monetary and software donations to Japan.

According to a report, this assistance includes free incident support for those with damaged facilities and “free temporary software licenses for customers, non-profits and relief agencies.”

Microsoft has also opened a cloud-based disaster recovery portal on its Windows Azure platform for officials to use for collaboration and communications.

Similar efforts were underway, although on a smaller scale, following New Zealand’s earthquake, which rocked Christchurch and put data backup worries on center stage. 

In fact, now that the initial wave of shock is slowly giving way to recognition of the gravity of the situation, today has sparked a number of conversations around the web highlighting the value of having a contingency plan and reliable backup and recovery options. Such plans have saved many of the country's datacenters, in terms of both backup power and datastores, but some companies that had relied solely on on-site systems may not have fared as well.

Many of these same backup and recovery conversations emerged immediately following the Christchurch earthquake not long ago. IDC Research community manager Ullrich Loeffler predicted that many companies displaced by that tragedy were unlikely to reinvest in their own IT infrastructure. He stated that many of the companies forced to line up in queues to salvage hard drives and other physical information stores would begin considering the cloud option. Still, Loeffler made it clear that firms would not turn to the cloud purely as a precautionary measure, explaining that “companies only tend to turn to cloud-based or hosted solutions when they need to refresh their systems.”

While Loeffler’s statement that the cloud is not a precautionary measure might ring true in the abstract, there were a number of accounts of cloud-based backup and recovery solutions being deployed directly as precautionary measures. This was especially the case in Christchurch, where businesses were given a wake-up call in the form of an initial, less severe quake that rocked the town and shook the confidence of a number of businesses with mission-critical data stores at the heart of their operations.

The New Zealand Herald reported on a number of companies that found that their decision to deploy cloud-based solutions saved their businesses following the destruction of their offices. Software company EMDA, which supplies software for supply chain and manufacturing businesses, had just reevaluated its backup and recovery plan to include both on- and off-site backups following the first earthquake.

Although the tragedy could have sparked a much more serious data problem, especially if the epicenter had been closer to Tokyo, where a number of datacenters and communications hubs are concentrated, it does serve as a reminder of the value and risks associated with cloud-based business models. Chances are that any organization that has decided to put all or some of its data in the cloud, especially public clouds, has paid significant attention to the issue of reliability and backup. Still, for smaller companies this might be a secondary consideration.

It is difficult to focus on this one element of a tragedy so broad in scope that it is almost impossible for the mind to process. We can, however, take our cue from the resolute decision to reopen markets this Monday following such dramatic loss of life and property, and look ahead to how the challenges of this event can help other countries better prepare for disaster at the cloud and communications level.

Just as the earthquake and tsunami in Japan have prompted a massive look inward for countries reliant on nuclear power, they should also stand as a living example of the need to weigh contingency planning options for data protection and loss prevention.
 
