The Impact of Cloud Computing on Corporate IT Governance

By Bruce Maches

January 25, 2010

This is the second in a series of articles discussing the impact of cloud computing on IT governance. The first article dealt with more informal internal IT processes, while this article examines cloud's impact from the perspective of the formal management IT governance/steering committee.

While cloud computing is enabling some fundamental changes in how IT groups deliver services, from a corporate management viewpoint the basic principles of IT governance remain true. However, the advent of cloud computing is having an increasing impact on how the components of the governance process are executed. For the purpose of this article, we will use the COBIT model (Control OBjectives for Information and related Technology), which comprises five major process focus areas: Strategic Alignment, Value Delivery, Resource Management, Risk Management, and Performance Measurement.

Governance at its core is the effective management of the IT function to ensure that an organization is realizing maximum value from its investments in information technology. Many companies, especially those with considerable IT budgets, have implemented significant internal IT governance procedures to manage their IT investment portfolios. This governance function provides the processes and framework for the management team to analyze, understand, and manage the return on the organization's technology investments. Industry studies show that, on average, companies with effective IT governance processes spend 5-7 percent less to deliver the same functionality as companies that lack them.

Any proper IT governance function also requires active management participation, the proper forum to make IT related decisions, and effective communication between the IT organization and the company’s management team. While these factors are critical to creating a successful IT governance function, there are five essential areas of process focus as spelled out in the COBIT model, which are described here:

  1. Strategic Alignment: This focuses on ensuring the linkage of business and IT plans; defining, maintaining and validating the IT value proposition; and aligning IT projects and operations with enterprise operations.
     
  2. Value Delivery: This is about executing the value proposition throughout the delivery cycle, ensuring that IT delivers the promised benefits against the strategy, concentrating on optimizing costs and proving the intrinsic value of IT.
     
  3. Resource Management: This is about the optimal investment in, and the proper management of, critical IT resources: applications, information, infrastructure and people. Key issues relate to the optimization of system knowledge and technical infrastructure.
     
  4. Risk Management: This requires risk awareness by senior corporate officers, a clear understanding of the enterprise’s appetite for risk, understanding of compliance requirements, transparency about the significant risks to the enterprise and embedding of risk management responsibilities into the IT organization.
     
  5. Performance Measurement: This tracks and monitors strategy implementation, project completion, resource usage, process performance and service delivery, using, for example, balanced scorecards that translate strategy into action to achieve goals measurable beyond conventional accounting.

If the IT governance framework isn't implemented and managed correctly, it can adversely affect how well IT delivers on its commitments to its customers, as well as how IT is perceived within the organization. Lack of effective IT strategy, governance, and oversight can cause continued project overruns or even outright failures, stakeholder dissatisfaction, and reduced business value relative to the resources expended. Companies that properly manage their IT function operate with a higher level of certainty that they are receiving an appropriate level of value from their investments in information technology. They can also ensure that the IT group is working on the projects that provide the most business value to the organization.

Now that we have discussed the impact of cloud computing on the IT group, let's examine how cloud computing affects the five governance factors as defined in the COBIT model.

Value Delivery: Under the pre-cloud provisioning model, most new projects included costs for hardware to support the application, and usually for test and development environments as well. IT was also guilty of over-buying hardware, both to ensure that any performance issues were at least not hardware-related and to provide capacity for peak loads that might never materialize. Cloud computing offers several options that can change the cost model and free up more of the IT budget for innovation rather than under-utilized hardware and its associated support. One option is to provision test and QA instances via the cloud instead of purchasing additional servers, or to shift peak loads to the cloud instead of maintaining that capacity internally. Cloud-based tools can also enable rapid prototyping, allowing for quicker delivery of business applications. With the potential cost savings, projects that were cost-prohibitive may now be viable, or funds may be freed up to support additional projects. Certainly some of these issues can be addressed using virtualization, but cloud gives the IT group another tool in its toolkit for attacking business problems. With the right strategy and mix of technologies, the IT group can deliver more value for potentially less money. There is one caveat: to ensure that proper value is being delivered, the IT organization needs a firm grasp of its internal cost structure in order to correctly drive investments.
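To make the cost trade-off concrete, here is a minimal sketch comparing the annual cost of an owned test/QA server against provisioning an equivalent cloud instance on demand. All prices, lifespans, and utilization figures below are hypothetical illustrations, not vendor quotes:

```python
# Hypothetical break-even comparison: owned test/QA server vs. on-demand cloud.
# Every figure here is an illustrative assumption.

def owned_server_annual_cost(purchase_price, years_of_life, support_rate):
    """Amortized hardware cost plus annual support/maintenance."""
    return purchase_price / years_of_life + purchase_price * support_rate

def cloud_annual_cost(hourly_rate, hours_used_per_year):
    """Pay only for the hours the test/QA environment actually runs."""
    return hourly_rate * hours_used_per_year

owned = owned_server_annual_cost(purchase_price=8000, years_of_life=4, support_rate=0.15)
# A test/QA box used ~20 hours/week runs ~1040 hours/year
cloud = cloud_annual_cost(hourly_rate=0.50, hours_used_per_year=1040)

print(f"Owned server:    ${owned:.2f}/year")   # $3200.00/year
print(f"Cloud on-demand: ${cloud:.2f}/year")   # $520.00/year
```

The point is not the specific numbers but the shape of the model: an owned server costs the same whether it is busy or idle, while cloud cost scales with actual use, which is exactly why intermittently used test and QA environments are attractive candidates.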

Resource Management: One of the challenges in any IT group is appropriately managing the resources at its disposal to provide as much business value as possible. Cloud computing can affect the resources available to IT in a variety of ways. From a personnel standpoint, cloud will require a shift in operational skill sets from an internally focused system-services mentality to a more holistic viewpoint oriented around delivering business value rather than system infrastructure. IT staff will need increased knowledge of the business's value chain to better understand where cloud technologies fit and to recognize where they are not appropriate. IT management should include a plan to deal with the required personnel skills changes and incorporate it into any overall cloud adoption strategy. Cloud can also affect system resources by requiring additional network bandwidth, monitoring tools, or other items to appropriately manage and maintain this new hybrid environment.

Risk Management: This is one of the most critical areas of governance impacted by cloud computing. Critical questions arise when cloud computing is brought into the existing IT ecosystem, particularly around data protection and business continuity: How are existing disaster recovery plans affected? How are backup/restore and data archival policies affected? How are business continuity plans affected? IT management must have a clear understanding of the risk related to vendor service levels, strategies for mitigating that risk, and how any potential outages would impact the business. IT must also examine security access and the potential risks of putting corporate data into the cloud, along with what the impact on the business might be if data is lost or access control is breached. Other risks that need to be addressed revolve around the viability of the vendor, the long-term prospects of any particular technology, and the impact on the existing IT infrastructure. All these questions and more must be asked and addressed, particularly as cloud computing is embraced for more critical business applications and IT services.
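One lightweight way to make these questions actionable is a risk register that scores each item by likelihood times impact, so the governance committee can rank where to focus mitigation effort first. The sketch below uses the risk categories named above, with entirely hypothetical scores:

```python
# A minimal cloud risk register, ranked by exposure (likelihood x impact).
# The 1-5 scores are invented examples; real scores come from the governance team.

cloud_risks = [
    # (risk, likelihood 1-5, impact 1-5)
    ("Vendor outage affects business continuity", 3, 4),
    ("Data loss or breached access control", 2, 5),
    ("Vendor viability / long-term technology prospects", 2, 3),
    ("Backup/restore and archival policies not met", 3, 3),
]

# Highest-exposure risks first
for risk, likelihood, impact in sorted(cloud_risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{likelihood * impact:>2}  {risk}")
```

Even a simple ranking like this gives the steering committee a shared, transparent basis for discussing the enterprise's appetite for risk, which is exactly what the COBIT risk management focus area calls for.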

Performance Measurement: This area looks at the overall achievement of the IT organization. While cloud does not directly change the purpose of this portion of the governance process, it does modify some of the underlying key performance measures. Performance measurement provides management with information on how the IT group is performing beyond conventional accounting measures, covering project completion, resource usage, service delivery, and user support metrics. While not integral to the adoption of cloud computing, the setting of governance goals and objectives should take into account the impact of using cloud resources. This could include completing projects more quickly by provisioning resources via the cloud, using cloud resources to speed prototyping, or achieving higher efficiencies in funding and personnel by leveraging cloud capabilities. IT organizations will need to review their metrics and measurements and adjust accordingly.
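As an example of a measure that cloud adoption directly changes, consider provisioning lead time, the days from an environment request to a usable system. The sketch below compares averages before and after self-service cloud provisioning; the sample figures are invented for demonstration only:

```python
# Illustrative KPI: provisioning lead time before and after cloud adoption.
# Sample data is hypothetical.

def average_lead_time_days(lead_times):
    """Mean number of days from request to a usable environment."""
    return sum(lead_times) / len(lead_times)

traditional = [21, 30, 28, 25]   # weeks-long hardware procurement cycles
cloud_based = [1, 2, 1, 1]       # self-service cloud provisioning

before = average_lead_time_days(traditional)
after = average_lead_time_days(cloud_based)
improvement = (before - after) / before * 100

print(f"Average lead time: {before:.1f} -> {after:.1f} days "
      f"({improvement:.0f}% reduction)")
```

A metric like this slots naturally into a balanced scorecard alongside the conventional project-completion and service-delivery measures.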

Strategic Alignment: Since the primary goal of IT governance is to ensure alignment with organizational objectives, cloud computing does not have a significant impact on this area of the IT governance process. Regardless of the technical architecture proposed for a project, the management team needs to maintain the linkage between business goals and IT plans and ensure that IT projects and operations align with enterprise needs.

Conclusion

Effective governance is a critical process and is key to maximizing the value any organization receives from its investment in IT. To take full advantage of what cloud computing can provide, IT organizations need to reevaluate their corporate governance procedures and adapt them as necessary. For those companies willing to invest in the appropriate governance processes, the future looks bright; for those not ready or willing, the future looks cloudy indeed.

About the Author

Bruce Maches is a 32-year IT veteran and has worked or consulted with firms such as IBM, Pfizer, Eli Lilly, SAIC, and Abbott. He can be reached at bmaches@rfittech.com.
