Cloud Security: The Federated Identity Factor

By Patrick Harding and Gunnar Peterson

November 9, 2010

The Web has experienced remarkable innovation during the last two decades. Web application pioneers have given the world the ability to share more data, in ever more dynamic fashion, with greater levels of structure and reliability, yet the digital security mechanisms that protect the data being served have remained remarkably static. We have finally reached the point where traditional Web security can no longer protect our interests: our corporate data now moves among, and rests in, a web of physical and network locations, many of which are only indirectly controlled and protected by the primary data owner.

How have Web applications evolved to de-emphasize security, and why has greater security become critical today? The answer comes from exploring common practices and comparing them to the best practice that is becoming the heir to the throne of Web application security: Federated Identity.

A Brief History of Web Applications

Commercial use of the World Wide Web began in the early 1990’s with the debut of the browser. The browser made the Web accessible to the masses, and businesses began aggressively populating the Web with a wealth of static HyperText Markup Language (HTML) content.  

Recognizing the untapped potential of a worldwide data network, software vendors began to innovate. By the mid-1990s, dynamic functionality became available via the Common Gateway Interface (CGI) and scripting languages like Perl. “Front-end” Web applications accessed data stored on “back-end” servers and mainframes. The security practice of “armoring” servers and connections began here: firewalls were built to protect servers and networks, and SSL (Secure Sockets Layer) was created to protect connections on the wire.

The Web continued to grow in sophistication: Active Server Pages (ASP) and JavaServer Pages (JSP) allowed applications to become substantially more dynamic. Purpose-built, transaction-oriented component models such as Enterprise JavaBeans (EJB) and the Distributed Component Object Model (DCOM) emerged next, along with the application servers that hosted them, making it easier to integrate data from multiple sources. The need to structure data became strong, and by 1999 standards like the eXtensible Markup Language (XML) and the Simple Object Access Protocol (SOAP) had emerged to meet it.

From 2001 to the present, services evolved as a delivery model that de-emphasized the physical proximity of servers to clients and instead emphasized loosely coupled interfaces. Service-Oriented Architecture (SOA) and Representational State Transfer (REST) both allow interaction across servers, businesses and domains; combined with the advances in latency and performance that accompanied the Web 2.0 movement, the foundation for the cloud was laid.

These innovations have all helped enable the “cloud.” The concept of a cloud has long been used to depict the Internet, but this cloud is different. It embodies the ability of an organization to outsource both virtual and physical needs. Applications that once ran entirely on internal servers are now provided via Software-as-a-Service (SaaS); platforms and infrastructure are now also available as Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) offerings, respectively.

During all of these advances, one aspect of the Web has remained relatively static: the layers of security provided by firewalls and the Secure Sockets Layer (SSL). To be sure, there have been advances in Web security. Firewalls have become far more sophisticated with deep packet inspection and intrusion detection/prevention capabilities, and SSL has evolved into Transport Layer Security (TLS) with support for the Advanced Encryption Standard (AES). But are these modest advances sufficient to secure today's cloud?

Year    Web Application Software    Web Security Provisions
1995    CGI/Perl                    Firewall & SSL
1997    JSP/ASP                     Firewall & SSL
1998    EJB/DCOM                    Firewall & SSL
1999    SOAP/XML                    Firewall & SSL
2001    SOA/REST                    Firewall & SSL
2003    Web 2.0                     Firewall & SSL
2009    Cloud                       ???

This table summarizes the tremendous innovation that has taken place in Web application software over the years while relatively little innovation occurred in Web security. 

The Web’s “security status quo” is well understood by those advancing the state-of-the-art in Web applications.  For example, SOAP was designed to be a firewall-friendly protocol.  But as Bruce Schneier, the internationally renowned security technologist, observed, “Calling SOAP a firewall-friendly protocol is like having a skull-friendly bullet.” 

Schneier’s tongue-in-cheek comment highlights a serious problem. While firewalls, NAT and SSL/TLS are necessary for securing the Web, they are no longer sufficient for securing cloud-based applications. This lack of innovation forces SaaS and other service providers to rely on the so-called “strong password” for security. Strong passwords may be great in theory, but they create serious problems in practice.

The Problem with Passwords

For the sake of discussion here, a “strong” password is defined as one consisting of a combination of numbers and letters (with some capitalized) that does not spell any word or contain any discernible sequence. How many strong passwords is a mere mortal expected to memorize, given that writing down or otherwise recording passwords defeats the idea of a shared secret?

The average enterprise employee uses 12 UserID/password pairs to access the many applications required to perform his or her job (Osterman Research, 2009). It is unreasonable to expect anyone to create, regularly change (also a prudent security practice) and memorize a dozen passwords, yet that is exactly what common practice demands today. Users are forced to take shortcuts, such as using the same UserID and password for all applications, or writing down their many strong passwords on Post-It notes or, even worse, in a file on their desktop or smartphone.

Even if most users could memorize several strong passwords, there remains risk to the organization when passwords are used to access services externally (beyond the firewall) where they can be phished, intercepted or otherwise stolen.  The underlying problem with passwords is that they work well only in “small” spaces; that is, in environments that have other means to mitigate risk.  Consider as an analogy the bank vault.  Its combination is the equivalent of a strong password, and is capable of adequately protecting the vault’s contents if, and only if, there are other layers of security at the bank. 

Such other layers of security also exist within the enterprise in the form of locked doors, receptionists, ID badges, security guards, video surveillance, etc.  These layers of security explain why losing a laptop PC in a public place can be a real problem (and why vaults are never located outside of banks!). 

Ideally, these same layers of internal security could also be put to use securing access to external cloud-based applications.  Also ideally, users could then be asked to remember only one strong password (like the bank vault combination), or use just one method of multi-factor authentication.  And ideally, the IT department could administer user access controls for all internal and external applications centrally via a common directory (and no longer be burdened by constant calls to the Help Desk from users trying to recall so many different passwords). 

One innovation in Internet security makes this ideal a practical reality:  federated identity. 

Federated Identity Secures the Cloud

Parsing “federated identity” into its two constituent words reveals the power behind this approach to securing the cloud.  The identity is of an individual user, which is the basis for both authentication (the credentials for establishing the user is who he/she claims to be) and authorization (the applications permitted for use by specific users).  Federation involves a set of standards that allows identity-related information to be shared securely between parties, in this case:  the enterprise and cloud-based service providers. 

The major advantage of federated identity is that it enables the enterprise to maintain full and centralized control over access to all applications, whether internal or external. The IT department also controls how users authenticate, including whatever credentials may be required.  A related advantage is that, with all access control provisions fully centralized, “on-boarding” (adding new employees) and “off-boarding” (terminating employees) become at once more secure and substantially easier to perform. 

Identity-related information is shared between the enterprise and cloud-based providers through security tokens; not the physical kind, but cryptographically encoded and digitally signed documents (e.g., XML-based SAML tokens) that contain data about a user. Under this trust model, the good guys have good documents (security tokens) from a trusted source; the bad guys never do. For this reason, both the enterprise and the service providers are protected.

To ensure integrity while also affording sufficient flexibility, the security tokens are quite extensive. For example, the Security Assertion Markup Language (SAML) standard includes the following elements in its security token: Issuer (e.g., the enterprise); One-Time Use condition; Validity Window (the time period when the token is valid); Subject (the user); Context (how the user authenticated); Claims (attributes about the user); and Integrity (a digital signature, with encryption for confidentiality). The Claims section is like a “scribble pad” for specifying a wide variety of user attributes that can be used by the application for different purposes such as authorization, personalization or even provisioning a new account. Indeed, some believe that identity-related attributes are so significant for cloud security that they should become a fourth “A” in AAA systems.
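To make those elements concrete, here is a minimal sketch in Python that assembles the logical fields of such a token. It is an illustration only: the issuer URL, audience, claims and HMAC "signature" are made up, and a real SAML assertion is an XML document protected by an XML Signature rather than the stand-in used here.

```python
# Illustration only: field names loosely mirror SAML 2.0 concepts; the issuer,
# audience, claims and HMAC signing key are hypothetical.
import hashlib
import hmac
import json
import time

def issue_token(subject, claims,
                issuer="https://idp.example.com",        # hypothetical Identity Provider
                audience="https://sp.example.com",       # hypothetical Relying Party
                secret=b"demo-signing-key"):
    now = int(time.time())
    assertion = {
        "issuer": issuer,                          # who vouches for the user
        "subject": subject,                        # the user the token describes
        "not_before": now,                         # validity window start
        "not_on_or_after": now + 300,              # validity window end (5 minutes)
        "audience": audience,                      # intended Relying Party
        "authn_context": "PasswordProtectedTransport",  # how the user authenticated
        "claims": claims,                          # attributes about the user
    }
    payload = json.dumps(assertion, sort_keys=True).encode()
    # Integrity: sign the assertion (real SAML uses XML-DSig, not HMAC)
    signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {"assertion": assertion, "signature": signature}

token = issue_token("alice@example.com",
                    {"department": "finance", "role": "analyst"})
```

The point is simply that everything the Relying Party needs (who issued the token, whom it describes, when it is valid, how the user authenticated and what attributes apply) travels inside a single signed document.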

Two Basic Roles

In the cloud, there are always (at a minimum) two parties.  In fact, “two” serves as the theme for the remainder of this section that explains what federated identity is and how it works. 

The two basic roles are the Identity Provider (IdP) and the Relying Party (RP).  The Identity Provider is the authoritative source of the identity information contained in the security tokens; in this case:  the enterprise.  The Relying Parties (the service providers) establish relationships with one or more Identity Providers and accept the security tokens containing the assertions needed to govern access control. 

The authoritative nature of and the structured relationship between the two parties is fundamental to federated identity.  Based on the trust established between the Relying Parties and the Identity Providers, the Relying Parties have full confidence in the security tokens issued.  This is not unlike the trust the public places in a driver’s license issued by the Department of Motor Vehicles.

The First and Last Mile

These two distinct IdP and RP roles have led some to refer to the first and last “miles” in federated identity.  The “First Mile” is where the process originates:  at the enterprise as the Identity Provider.  It is in this First Mile where the Authentication Service is integrated with the Security Token Service.  The “Last Mile” is at the receiving end:  at the Relying Party or service provider where the data contained in the security token is integrated with the target application infrastructure (particularly its access control provisions). 
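As a hedged sketch of that “Last Mile” integration (hypothetical field and function names, continuing the illustrative token format above), the Relying Party translates the asserted claims into its own session and access-control model:

```python
# Hypothetical "Last Mile" sketch: after validating a security token, the
# Relying Party maps the asserted claims onto its own session and
# access-control model. Field names follow the illustrative token above.
def establish_session(token, sessions):
    assertion = token["assertion"]
    session = {
        "user": assertion["subject"],
        "roles": [assertion["claims"].get("role", "guest")],    # authorization input
        "department": assertion["claims"].get("department"),    # personalization input
    }
    sessions[assertion["subject"]] = session
    return session

sessions = {}
example_token = {"assertion": {"subject": "alice@example.com",
                               "claims": {"role": "analyst",
                                          "department": "finance"}}}
establish_session(example_token, sessions)
```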

Two Basic Operations

Federated identity has two basic operations: issuing and validating the security tokens. Based on an input or request, the Identity Provider issues a security token; for example, a UserID/password could generate a cookie, or a Kerberos ticket could generate a SAML token. The Relying Party then validates the security token to ensure that it was issued by a trusted authority, is properly signed, is still in effect (not expired), is intended for the right audience, and so on.
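A minimal sketch of those validation checks follows, continuing the illustrative token format above. A real Relying Party would verify an XML Signature and apply the full SAML processing rules; the trusted issuers, keys and URLs here are hypothetical.

```python
# Sketch of the Relying Party's validation checks against the illustrative
# token format above. Issuers, keys and URLs are made up.
import hashlib
import hmac
import json
import time

TRUSTED_ISSUERS = {"https://idp.example.com": b"demo-signing-key"}

def validate_token(token, expected_audience="https://sp.example.com"):
    assertion = token["assertion"]

    secret = TRUSTED_ISSUERS.get(assertion["issuer"])
    if secret is None:
        raise ValueError("not issued by a trusted authority")

    payload = json.dumps(assertion, sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        raise ValueError("not properly signed")

    now = time.time()
    if not (assertion["not_before"] <= now < assertion["not_on_or_after"]):
        raise ValueError("not still in effect (expired or not yet valid)")

    if assertion["audience"] != expected_audience:
        raise ValueError("not intended for this audience")

    return assertion  # the claims can be trusted from here on
```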

Two Methods of Exchange

Security tokens can be exchanged in two different ways: passive and active. Passive exchanges are those initiated from a browser, which becomes the “passive” client. Common mechanisms for passive exchanges include the SAML protocol via its Browser POST, Redirect or Artifact bindings. Active exchanges, as the name implies, require the client to play a more active role and to initiate Web service requests itself. Normally this is done through an Application Programming Interface (API) specified in standards like WS-Trust or OAuth.
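As a sketch of the passive case, assuming the SAML HTTP-Redirect binding (the endpoint URL and the request body are made up), the request is compressed, base64-encoded and URL-encoded onto the Identity Provider's sign-on URL, and the browser is simply redirected there:

```python
# Hedged sketch of a passive exchange via the SAML HTTP-Redirect binding:
# the request is DEFLATE-compressed, base64-encoded and URL-encoded into a
# query parameter. The IdP URL and the AuthnRequest body are illustrative.
import base64
import urllib.parse
import zlib

def redirect_url(authn_request_xml,
                 idp_sso_url="https://idp.example.com/sso"):    # hypothetical IdP endpoint
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)        # raw DEFLATE, no zlib header
    deflated = compressor.compress(authn_request_xml.encode()) + compressor.flush()
    saml_request = base64.b64encode(deflated).decode()
    query = urllib.parse.urlencode({"SAMLRequest": saml_request,
                                    "RelayState": "/dashboard"})
    return f"{idp_sso_url}?{query}"

# The browser (the "passive" client) is redirected to this URL and plays no
# further role until the Identity Provider sends back a signed response.
print(redirect_url("<samlp:AuthnRequest ID='_example' ... />"))
```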

The actual exchange, whether passive or active, is performed using standard protocols.  In addition to the obvious send and receive functions, these protocols can also request a token, request a response, and even transform tokens in various ways.  Examples of such standards include SAML, WS-Federation, WS-Trust, OAuth and OpenID.  With so many options, it is not uncommon for a Security Token Service to support multiple protocols and multiple endpoints, and for a single security token to pass through multiple STS endpoints and be transformed multiple times. 

Two Base Use Cases

The two most common use cases for federated identity are Single Sign-On (SSO) and API Security.  As the name implies, SSO allows users to sign on once (with a strong password or other credentials), then access all authorized applications (internal and external) via a portal or other convenient means of navigation.  Because it is browser-based, SSO generally employs SAML or WS-Federation with passive exchange redirects to the Security Token Service. 

API Security requires an active client or server that directly contacts the STS via Web services. Popular approaches include WS-Trust, OAuth and REST-based interfaces. As with SSO, the claims asserted in the security token can be used to set up a session and/or provision an account. Unlike with SSO, the claims can also be used for server-to-server applications, or by a service acting as (or on behalf of) a user.
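The following sketch of such an active exchange is loosely modeled on an OAuth-style token request rather than any specific vendor or standard API; the STS endpoint, parameters and response shape are assumptions.

```python
# Illustrative active exchange: a server-side client requests a security token
# directly from a Security Token Service over HTTPS. Endpoint, parameters and
# response shape are assumptions, loosely OAuth-style, not a real API.
import json
import urllib.parse
import urllib.request

def request_token(client_id, client_secret,
                  sts_url="https://sts.example.com/token"):     # hypothetical STS
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",    # a service acting as itself
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "crm.read",                   # hypothetical API scope
    }).encode()
    request = urllib.request.Request(sts_url, data=body, method="POST")
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())     # e.g. {"access_token": "..."}

# The returned token is then attached to subsequent API calls, typically in an
# Authorization header, so the service provider can validate it on every request.
```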

In Conclusion

As the popularity of cloud-based applications continues to grow, IT departments will increasingly turn to federated identity as the preferred means for managing access control.  With federated identity, users and the IT staff both benefit from greater convenience and productivity.  Users log in only once, remembering only one strong password, to access all authorized applications.  The IT staff gains full, centralized control over all access privileges for both internal and external applications, and is no longer burdened with constant calls to the Help Desk from users forgetting their passwords. 

The most important aspect of federated identity is not its ease of use, however; it is the enhanced security. Standards like SAML and WS-Federation were purpose-built to provide robust security in the cloud. They keep authentication strong and keep credentials securely within the enterprise firewall. They eliminate the need to maintain sensitive access control information outside the organization. They enable successful on- and off-boarding of all employees from a common directory server. They make it easier to pass security audits by giving full visibility into user access. They afford the flexibility needed to accommodate special or unusual needs. And they scale without adding significant cost or complexity.

About the Authors

Patrick Harding, CTO, Ping Identity

Harding brings more than 20 years of experience in software development, networking infrastructure and information security to the role of Chief Technology Officer for Ping Identity. Harding is responsible for Ping Identity’s technology strategy. Previously, Harding was a vice president and security architect at Fidelity Investments where he was responsible for aligning identity management and security technologies with the strategic goals of the business. Harding was integrally involved with the implementation of federated identity technologies at Fidelity — from “napkin” to production. An active leader in the Identity Security space, Harding is a Founding Board Member for the Information Card Foundation, a member of the Cloud Security Alliance Board of Advisors, on the steering committee for OASIS and actively involved in the Kantara Initiative and Project Concordia. He is a regular speaker at RSA, Digital ID World, SaaS Summit, Burton Catalyst and other conferences. Harding holds a BS Degree in Computer Science from the University of New South Wales in Sydney, Australia.

*Arctec Group Managing Principal Gunnar Peterson also contributed to the content of this article. 

To learn more about Identity’s role in Cloud Security, see the Cloud Security Institute’s “Cloud Security:  The Identity Factor” Webinar.

 
