Air Force, University of Illinois Take Aim at Cloud Challenges

By Nicole Hemsoth

May 9, 2011

When natural disasters strike, nations around the world often heed the call of the country in peril, sending both supplies and tactical support. Although pipelines exist to streamline these rescue efforts, roadblocks can occur when the country in crisis has unstable relationships with the source of aid.

According to researchers supporting a new effort to improve military networks across borders using cloud computing resources, “staging such an operation would be risky without a cloud infrastructure that has secure properties.” As they note, assuring a successful mission in a possibly hostile environment, one that benefits from the communications, computation and applications of cloud computing, isn’t possible without networks that operate seamlessly and securely within a framework of trusted practices and standards.

To address complications such as these that routinely emerge in military contexts, the University of Illinois unveiled a new research initiative today aimed at creating a more secure, robust environment for military applications as they traverse government and third-party networks.

Dubbed the Assured Cloud Computing Center, the new program will be backed by $6 million from the U.S. Air Force Research Laboratory Technology Directorate (AFRL), which will work in tandem with the university and the Air Force Office of Scientific Research (AFOSR).

The center will be located within the University of Illinois Information Trust Institute, where a team of dedicated researchers will set to work tackling some of the cloud’s most pressing issues, especially as they affect military applications running in the cloud.

The team’s most significant efforts will be concentrated on the matter of “blue” and “gray” networks and the associated problems of security, confidentiality, data integrity and communications—not to mention the general functionality of the applications that require such data protection-related scrutiny.

Dr. Roy Campbell, the Sohaib and Sara Abbasi Professor in the Department of Computer Science at Illinois, provided details about the numerous distinctions between “blue and gray” networks. He stated in a release today that “A computational cloud used in military applications may include both blue and gray networks, where ‘blue’ networks are U.S. military networks, which are considered secure—and gray networks, which are those in private hands or perhaps belong to other nations that are considered unsecure.”

Campbell noted that these distinctions and the concerns they bear are critical considerations for the future of military cloud computing because for some military goals, there will be benefits to coordinating computation across a blend of these two resource types.
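To make the blue/gray distinction concrete, here is a minimal sketch in Python, using entirely hypothetical resource names and a deliberately simplified placement rule, of how a mission scheduler might record a resource’s network classification and decide where a given workload is allowed to run. It is an illustration of the concept only, not the center’s actual design.

```python
from dataclasses import dataclass
from enum import Enum

class NetworkClass(Enum):
    BLUE = "blue"   # U.S. military network, considered secure
    GRAY = "gray"   # privately held or foreign network, considered unsecure

@dataclass
class Resource:
    name: str
    network: NetworkClass

@dataclass
class Task:
    name: str
    sensitive: bool  # True if the task handles mission-critical or classified data

def eligible_resources(task: Task, resources: list) -> list:
    """Illustrative policy: sensitive tasks stay on blue networks;
    everything else may draw on a blend of blue and gray resources."""
    if task.sensitive:
        return [r for r in resources if r.network is NetworkClass.BLUE]
    return list(resources)

# Hypothetical mission pool mixing blue and gray resources.
pool = [Resource("afrl-cluster-1", NetworkClass.BLUE),
        Resource("partner-cloud-a", NetworkClass.GRAY)]
print([r.name for r in eligible_resources(Task("targeting-analysis", True), pool)])
print([r.name for r in eligible_resources(Task("logistics-report", False), pool)])
```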

To follow up on the announcement of the Assured Cloud Computing Center, we asked Dr. Campbell a few additional questions about the scope of cloud security problems, especially as they relate to military applications, and touched on some tangential matters, including how this research will extend to the clouds of the future.

HPCc: Give us a personalized account of the current state of cloud computing security: is it overhyped as a problem? After all, there are also potential breach possibilities with in-house systems for the U.S. military. In other words, what specific security problems are involved with military cloud computing?

Campbell: The current state of cloud computing security is clearly lacking, as has been demonstrated recently (say, by Sony). Whether the state of cloud computing security is overhyped depends on the risks and costs of compromise.

The model of a cloud computing environment is evolving quickly. The Air Force must be able to conduct network-centric warfare as well as missions of national importance. Clearly, in many circumstances, assurances in the forms of security and dependability are crucial to the successful outcome of the mission. Now, however, throw into the mix the need for the Air Force to perform international operations using both military and non-military IT resources and you have additional complexity. To this end, Assured Cloud Computing has to be end-to-end and cross-layered. It has to operate over multiple security domains. When the lives of Air Force personnel and our national interest may depend on the correct functioning of the cloud, the need for assured cloud computing becomes a priority.

HPCc: Why is the Air Force so keen on the clouds? What is the advantage for them to have remote access to applications?

Campbell: The Air Force depends very heavily on surveillance, remote sensing, drones, complex computer-controlled weapon systems, and powerful computers capable of complex analysis. Missions can be viewed as complex flows of information from sensors, through command and control, to actuation.

Speed and availability are of the essence. In conducting international missions, the Air Force may not have a complex network at its disposal. In many emergency situations and natural disasters, infrastructure can be damaged, and communications and operations may need other IT support.

Assured cloud computing gives the Air Force the advantage of being able to get the right resources for a mission from a range of available sources. It clearly helps provide the Air Force an edge that will allow it to succeed in its missions.

HPCc: Do you see increasing collaboration across the blue and gray computational networks? In other words, many assume that military applications are housed exclusively on military networks. Is this a hybrid cloud model you see emerging in the future, with some mission-critical apps kept on in-house, blue machines while other, less mission-critical or security-sensitive applications are sent to a third-party provider? Give us a sense of this landscape.

Campbell: When the military is conducting a mission, a successful outcome is paramount. The question becomes what it takes to conduct the mission and what is available to allow that to happen. We have already observed natural disasters that have taken out critical infrastructures vital to rescue missions (for example, in Japan). When that happens, the military needs the ability to use whatever resources remain available.

HPCc: There are a lot of references to “blue” and “gray” networks and to cloud security as a loose concept, but let’s get more specific: where do you start tackling some of the cloud’s security issues? There are so many layers involved, so better yet, what is the first and most important item of business for your research team on this security front?

Campbell: There are lots of security solutions to problems, but knowing how they apply to a particular system and being able to use them for a specific mission is difficult. I expect we will find problems for which we cannot yet provide a solution, and our researchers will have to investigate. Firewalls, IDS systems, encryption technology and access controls are all resources we can use. But the problem is getting a mission completed, and what it takes to do it in an assured manner.

One technology we will definitely be deploying is the modeling and simulation of systems to better understand the vulnerabilities and problems. We will also be looking at more appropriate access controls that can be deployed across mixtures of blue and gray networks, and at how we can monitor systems for better security analysis. I expect quite a lot of our first year in this grant will be spent collaborating with our Air Force researchers to understand the complete spectrum of the problems faced by the Air Force and to document them in terms of what technologies can be used to solve them.
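As a rough illustration of the kind of cross-domain access control and monitoring Campbell describes, the short Python sketch below gates data transfers by the sensitivity of the data and the classification of the destination network, and logs every decision for later security analysis. The labels, policy table and log format are assumptions made purely for illustration; they do not represent the center’s methods.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("audit")

# Illustrative sensitivity labels, ordered from least to most restricted.
LEVELS = {"public": 0, "internal": 1, "classified": 2}

# Assumed policy: the highest sensitivity each destination network may receive.
MAX_LEVEL_FOR_NETWORK = {"blue": LEVELS["classified"], "gray": LEVELS["public"]}

def authorize_transfer(data_label: str, dest_network: str) -> bool:
    """Allow a transfer only if the destination network is cleared for the
    data's sensitivity level, and log the decision for security analysis."""
    allowed = LEVELS[data_label] <= MAX_LEVEL_FOR_NETWORK[dest_network]
    audit.info("%s transfer label=%s dest=%s allowed=%s",
               datetime.now(timezone.utc).isoformat(), data_label,
               dest_network, allowed)
    return allowed

# Example decisions across a mix of blue and gray networks.
authorize_transfer("classified", "blue")   # allowed
authorize_transfer("classified", "gray")   # denied
authorize_transfer("public", "gray")       # allowed
```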

HPCc: What lessons from this initiative can be passed along to the public eventually? Are there some core security or other developments you’re working with that will find their way into public cloud provider arsenals? In other words, explain the “trickle-down” effect that you think might happen.

Campbell: We have developed clouds as a means of providing humanity an inexpensive and pervasive means of computation and communication. What we haven’t done yet, and what our center hopes to address, is how to provide that computation and communication in a manner that is trustworthy and available; that is, assured for the various missions that humanity might need in the future. This Air Force initiative is an important first step.
