Government Computing: The Case for Private Cloud

By Jean-Paul Bergeaux

October 5, 2012

Federal IT departments face some tough challenges these days. Not only are budgets constrained, but mandates are also stacking up like the tax code. One of the most talked about is the cloud-first mandate, the push to make IT-as-a-Service the standard procurement mechanism.

While there are many types of clouds, a private cloud is going to be the best option for government agencies seeking to comply with federal mandates.

Cost savings myth

The myth persists that public cloud offerings are going to save the government money. Outside the Beltway, however, organizations of similar size to government agencies have figured out that renting IT does not provide a lower total cost of ownership (TCO) than purchasing does. Once an IT organization – government or private industry – grows to a certain size, the economies of scale of public cloud services are not much better than its own, leaving little room for the service provider's profit margin. This is no secret, and it is being discussed in boardrooms, conference halls and LinkedIn forums. Cloud providers are defending their savings story with aggressive marketing campaigns to persuade enterprise customers to keep considering public cloud options. Their arguments often claim that hard ROI calculations are incomplete and that intangible benefits, which cannot be captured in dollar comparisons alone, must be added. Setting aside the smoke-and-mirrors feel of these arguments, the basic fact is that if the public cloud hype of "tremendous savings!" were true, these arguments would not be necessary. Also ignored are the new problems and costs that public cloud introduces, such as WAN bandwidth and new security challenges.
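
To make the TCO argument above concrete, here is a minimal Python sketch comparing the cumulative cost of purchased, in-house infrastructure against renting equivalent capacity from a public provider. Every figure in it (fleet size, hardware cost, refresh cycle, operations cost, hourly rate) is a hypothetical placeholder chosen only to show the break-even mechanics, not pricing from the article or any real provider.

```python
# Hypothetical TCO comparison: purchased infrastructure vs. rented IaaS.
# All dollar figures are illustrative assumptions, not real pricing.

def on_prem_tco(years, servers=100, hw_cost=8000, refresh_years=4,
                annual_ops_per_server=2000):
    """Cumulative cost of buying and operating servers in-house."""
    refreshes = -(-years // refresh_years)          # ceiling division
    capex = servers * hw_cost * refreshes           # periodic hardware refresh
    opex = servers * annual_ops_per_server * years  # power, space, admin staff
    return capex + opex

def public_cloud_tco(years, servers=100, hourly_rate=0.75):
    """Cumulative cost of renting equivalent instances around the clock."""
    return servers * hourly_rate * 24 * 365 * years

for years in (1, 3, 5):
    print(f"{years}y  on-prem ${on_prem_tco(years):>12,.0f}"
          f"   cloud ${public_cloud_tco(years):>12,.0f}")
```

With these assumptions, renting wins in year one, but purchasing pulls ahead by year three as the hardware amortizes, which is the crossover the paragraph describes for organizations above a certain size.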

Still not secure

Speaking of security concerns, the recent hacking incident involving Amazon and Google accounts highlights a problem that is often not discussed about the public cloud. When information stays inside the organization, concerns about accounts and passwords are muted. Moving data outside the IT firewalls exposes it to human mistakes in account management. This problem is unrelated to the traditional security concerns about public cloud solutions. Those traditional concerns are well discussed, but still not addressed by most providers. A few public cloud service providers do meet all of the federal government's FISMA guidelines, but their costs are shockingly higher than those of the general population of cloud providers. GSA's FedRAMP attempted to design a solution, but until the contracts offered by the majority of cloud service providers can meet federal security requirements, it will just be words on paper. At least a few agencies have had to admit publicly that their cloud contracts put them out of compliance with security mandates. When the choice is between a security mandate and a cloud mandate, security should trump.

FOIA compliance impossible

Some of the major cloud providers do not offer the information assurance needed to answer a Freedom of Information Act (FOIA) request. There is no way to track the history of data or to show that data has not been deleted. This puts agencies in danger of being unable to comply with the law, not just a mandate. Agencies and private companies have been fined and punished for failing to return court-requested information, even when the failure was accidental. This is no trivial issue: transparency-in-government laws and mandates have been issued by Congress and affirmed by the courts.

Lack of Disaster Recovery (DR) and Continuity of Operations (COOP)

Most (though not all) cloud service contracts include a service-level agreement (SLA) that requires a specific uptime but offers no remedies if it is not met. The contracts do not specify how the SLAs will be met, how many copies of the data will be kept, or even how those copies will be accessible. And if the service provider goes down, the government agency has no way to recover until the service provider itself recovers. Recent high-profile outages should heighten these concerns, but more important is the risk of data lost to hacked accounts and deliberate deletion. Data on an internal system can often be recovered from backups or even hard drive recovery services, but if data in the cloud is deleted using legitimate account privileges, it is unclear how the provider can ensure recovery.
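
The uptime numbers in these SLAs deserve a closer look. As a quick illustration (plain arithmetic, not any particular provider's terms), the sketch below converts a quoted uptime percentage into the downtime it still allows each year:

```python
# Convert a quoted uptime SLA into the downtime it still permits per year.
HOURS_PER_YEAR = 24 * 365

for sla in (99.0, 99.9, 99.99):
    downtime_hours = HOURS_PER_YEAR * (1 - sla / 100)
    print(f"{sla}% uptime allows {downtime_hours:.1f} hours of downtime/year")
```

Even a 99.9 percent SLA permits almost nine hours of outage a year, and with no specified remedies the agency simply absorbs that downtime.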

Again, there are cloud providers that do offer these options, but they are very costly. The other option is to contract with a second cloud provider as a backup. This works only if both the primary and secondary providers adhere to the standards of a major hypervisor manufacturer and can keep the copies continually up to date. At this time, no agency seems to have set this up successfully. At least one agency attempted it but had to pull the second provider's contract because it was not technically possible to host the information.

Virtualization and modernization will save money without breaking mandates

In January 2012, a survey on virtualization found that only 37 percent of government servers had been virtualized. It is estimated that increasing that rate by just 26 percent could save an additional $23 billion by 2015. That goal is very attainable and could be made even more productive through modernization and virtualization of end-user systems. These efforts would not add problems that break current mandates.
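
The survey math above reduces to simple consolidation arithmetic. The sketch below, which reads the "26 percent" increase as 26 percentage points (37 to 63 percent), estimates how raising the virtualization rate shrinks the physical server count; the fleet size, consolidation ratio, and per-server cost are hypothetical assumptions used only to show the calculation, and do not reproduce the survey's $23 billion figure.

```python
# Estimate physical-server reduction from raising the virtualization rate.
# Fleet size, consolidation ratio, and cost per server are assumptions.

def physical_servers(workloads, virt_rate, consolidation=10):
    """Physical boxes needed when `virt_rate` of workloads are virtualized."""
    virtualized = workloads * virt_rate
    hosts = virtualized / consolidation   # many VMs share one physical host
    bare_metal = workloads - virtualized  # the rest stay one-per-box
    return hosts + bare_metal

WORKLOADS = 100_000
COST_PER_SERVER = 5_000  # hypothetical annual run cost per physical server

before = physical_servers(WORKLOADS, 0.37)  # 37% virtualized (survey figure)
after = physical_servers(WORKLOADS, 0.63)   # +26 points, one reading of the estimate
print(f"Servers: {before:,.0f} -> {after:,.0f}, "
      f"saving ${(before - after) * COST_PER_SERVER:,.0f}/year")
```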

While public, private and hybrid solutions can all be used by government agencies facing a cloud-first mandate, private clouds built on virtualization and other modernization techniques are a good place to start. A hybrid system – one that uses a public cloud provider as a DR or COOP target – can provide additional benefits and cost savings. Secondary sites have different utilization rates, access methods and cost structures, which makes them a good fit for this purpose. If the cloud provider adheres to major hypervisor standards for managing data across the two platforms, the mixed-model approach could be the most cost-efficient way to meet the mandates.
