April 29, 2008
Within the computing industry, the traditional High Productivity Computing (tHPC) market has long served as a generative edge for new technologies and applications. It is where users push advances in system performance and architecture to address problems ranging from standard engineering simulations to problems that were previously intractable. In addition, tHPC has acted as a test bed for the IT industry as a whole, where new concepts and technologies are developed and proven before being introduced into larger mainstream markets.
Tabor Research believes that new technologies, methodologies, and applications are emerging outside of the traditional HPC market that have the essential characteristics of high productivity computing: requirements for leading-edge capabilities; the incorporation, testing, and perfecting of new technologies and methodologies; and market creation and expansion. This new area, which we cleverly call Edge HPC (or eHPC), leverages the experience and technology of the traditional HPC market while introducing new areas for innovation. Most importantly, we believe that eHPC is at the cusp of significant market generation and growth.
Factors driving the Edge HPC market include:
None of these factors is new or has gone unnoticed. However, they have combined to create new sets of computational, data, and visualization requirements that can be addressed by high productivity computing technologies. Tabor Research believes that a significant “Edge HPC” market currently exists and that this market has strong growth potential over the next five-plus years.
High Productivity Computing Definition
Tabor Research defines HPC as the use of servers, clusters, supercomputers, and networked systems – plus associated software, tools, components, storage, and services – for tasks that are particularly intensive in computation, analysis, memory usage, or data management. Within industry, HPC can frequently be distinguished from general business computing in that companies generally will use HPC applications to gain advantage in their core endeavors – e.g., finding oil, designing automobile parts, or protecting clients’ investments – as opposed to non-core endeavors, such as payroll management or resource planning.
At the highest level, Tabor Research divides the HPC market into “Traditional HPC” and “Edge HPC” segments, as follows:
- Requirements for leading-edge system performance, or the ability to address the most demanding problems
- Requirements for ultra or extreme levels of scalability
- A tendency to incorporate, test, and perfect new technologies and methodologies
- Association with market creation and expansion
Classifying Edge HPC Applications
The eHPC market represents a diverse set of users with application requirements for high productivity solutions. These requirements range from relatively straightforward extensions of traditional HPC applications or workflows into new fields to more abstract requirements for system architectural innovation and/or highly specialized system configurations or infrastructures. Given this diversity of top-level requirements, we believe the market is best segmented based on the physical and/or logical features that define and drive the applications.
Tabor thus divides the Edge HPC market into four major segments: Complex Event and Business Processing, Process Optimization, Virtual Infrastructure and Environments, and Ultra-scale Computing.
Complex Event Processing
Complex event processing (CEP) applications are driven by continuous data feeds generated by real-world events such as electronic trading on stock markets, security monitoring systems, and sensor-based inventory tracking systems. Data may be streamed into the system from multiple independent sources, and data may dramatically decrease in value over time. Data volumes can vary significantly from moment to moment. CEP solutions often involve networks of sensors, multiple communicating servers, and control devices. Applications operate in near real time, with events initiated by real-world occurrences often setting off a chain of response and control events throughout the system network.
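To make this event-driven pattern concrete, the following minimal Python sketch (with purely illustrative names; it is not drawn from any particular CEP product) detects a “burst” of related events inside a sliding time window, the kind of rule an intrusion-detection or monitoring feed might apply:

```python
from collections import deque

# Minimal sketch of a complex event processing (CEP) rule:
# events stream in from multiple sources, and the rule fires when a
# pattern (here, a burst of alerts within a time window) is detected.
# Real CEP engines add distribution, persistence, and declarative
# rule languages on top of this basic idea.

WINDOW_SECONDS = 5.0   # data loses value quickly, so look back only briefly
BURST_THRESHOLD = 3    # number of matching events that constitutes a burst

class BurstRule:
    """Fire when BURST_THRESHOLD events of one kind arrive within WINDOW_SECONDS."""
    def __init__(self, kind):
        self.kind = kind
        self.timestamps = deque()

    def on_event(self, event):
        if event["kind"] != self.kind:
            return None
        now = event["time"]
        self.timestamps.append(now)
        # Expire events that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > WINDOW_SECONDS:
            self.timestamps.popleft()
        if len(self.timestamps) >= BURST_THRESHOLD:
            # In a real system this would trigger downstream control events.
            return {"alert": f"burst of '{self.kind}' events",
                    "count": len(self.timestamps)}
        return None

# Simulated feed mixing events from independent sources.
rule = BurstRule("intrusion_attempt")
feed = [
    {"kind": "heartbeat", "time": 0.0, "source": "sensor-1"},
    {"kind": "intrusion_attempt", "time": 1.0, "source": "sensor-2"},
    {"kind": "intrusion_attempt", "time": 2.5, "source": "sensor-7"},
    {"kind": "intrusion_attempt", "time": 3.0, "source": "sensor-2"},
]
for event in feed:
    result = rule.on_event(event)
    if result:
        print(result)
```

The HPC challenge is running thousands of such rules over feeds whose volume spikes unpredictably, while keeping end-to-end latency near real time.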
CEP applications fall into the eHPC realm when:
Table 1 provides examples of CEP domains and applications.
Table 1: Examples of Complex Event Processing Applications

| Domain | Example Applications |
| --- | --- |
| Civil Infrastructure/Utilities | Delivery network monitoring |
| Computer Systems | Intrusion detection |
| Financial Services | Event alert |
| General Business | Environmental monitoring; in-store monitoring; real-time supply chain |
| Health Informatics | Disease tracking |
| Military Operations | Battlefield monitoring; shared battle space awareness |
| National/Civil Security | Environmental monitoring |
| Telecom | Network traffic routing |
| Transportation | "In-flight" asset tracking |
Process Optimization

The Process Optimization (PO) application profiles mirror traditional HPC workflows. These applications make use of technology above and beyond standard enterprise solutions, whether in architecture, software, or system management. PO applications have one or more of the following properties:
Table 2 presents a list of example applications and domains that we see as fitting into the Process Optimization segment at this time.
Table 2: Examples of Process Optimization Applications

| Domain | Example Applications |
| --- | --- |
| Business Intelligence | Data mining; database search |
| Civil Infrastructure/Utilities | Anomaly management |
| Computer Systems | Anomaly management |
| Financial Services | Capital budgeting |
| General Business | Distribution resource planning; facility location planning |
| Military Operations | "Sense and Respond" logistics; asset tracking; distribution resource planning; facility location planning |
| National/Civil Security | Seismic activity monitoring; weapons and delivery systems planning |
| Transportation | Real-time route planning/rerouting |
| Other | Complex text and image matching; text classification and filtering |
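As a toy illustration of why such workloads are compute-intensive, the hypothetical Python sketch below solves a tiny facility location problem (one of the applications listed above) by brute force. The names and data are invented for illustration; real PO applications use far larger models and specialized solvers, but the combinatorial growth of the search space shown here is exactly what pushes them toward HPC-class resources:

```python
from itertools import combinations

# Brute-force facility location: choose which candidate sites to open
# so that the total customer-to-nearest-facility distance is minimized.
# The search space grows combinatorially with the number of candidates.

customers = [(0, 0), (2, 1), (5, 4), (6, 0), (1, 5)]   # customer coordinates
candidates = [(1, 1), (4, 3), (5, 1), (2, 4)]          # candidate site coordinates
facilities_to_open = 2

def total_distance(open_sites):
    # Each customer is served by its nearest open facility (Manhattan distance).
    return sum(
        min(abs(cx - fx) + abs(cy - fy) for fx, fy in open_sites)
        for cx, cy in customers
    )

best = min(combinations(candidates, facilities_to_open), key=total_distance)
print("open facilities at:", best, "total distance:", total_distance(best))
```

With four candidate sites there are only six combinations to test; with hundreds of sites and time-varying demand, the same formulation becomes a serious optimization workload.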
Virtual Infrastructure and Environments
Virtual Infrastructure and Environments (VIE) applications implement computer-network-based business and social structures. They also hold the promise of extending these structures through synthetic realities à la Second Life. These structures range from online gaming environments, to multi-person/system training environments, to virtual economies, to virtual social environments. The applications fall into the eHPC market based on:
Table 3 provides a list of example VIE domains and applications.
Table 3: Examples of Virtual Infrastructure and Environments Applications

| Domain | Example Applications |
| --- | --- |
| Virtual Civil Infrastructure | Internet commerce |
| Consumer Products | On-line gaming |
| B-to-B and B-to-C | Virtual economies |
| B-to-B | Virtual offices |
Ultra-scale Computing

One eHPC feature that appears across multiple application spaces is the requirement for ultra-scale computing capabilities. Ultra-scale computing systems are specially designed and/or configured to effectively manage node counts that significantly exceed those supported by industry-standard products.
Currently, ultra-scale applications generally appear as service layers on the Internet, the primary example being Internet search engines. Applications can be data intensive (e.g., map and satellite photo applications) and/or compute intensive (e.g., search applications). This segment is currently represented by a small number of very large sites.
Table 4 provides a list of example Ultra-scale domains and applications.
Table 4: Examples of Ultra-scale Applications

| Domain | Example Applications |
| --- | --- |
| Internet data processing | Data aggregation |
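The scatter/gather pattern behind such services can be sketched as follows. This is a hypothetical, thread-based miniature, not any search vendor's actual implementation: where the sketch uses threads over an in-memory corpus, an ultra-scale deployment fans each query out across thousands of physical nodes, which is precisely the node-count challenge described above.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative scatter/gather: a query is fanned out to many shards,
# each holding a slice of the data, and partial results are merged.
# Shard contents and names below are invented for the example.

SHARDS = [
    {"doc-1": "edge hpc market growth", "doc-2": "cluster cooling design"},
    {"doc-3": "complex event processing feeds", "doc-4": "edge hpc segments"},
    {"doc-5": "virtual economies and gaming", "doc-6": "ultra-scale computing"},
]

def search_shard(shard, term):
    # Each node scans only its local slice of the corpus.
    return [doc_id for doc_id, text in shard.items() if term in text]

def scatter_gather(term):
    # Fan the query out to all shards in parallel, then merge the hits.
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = pool.map(lambda shard: search_shard(shard, term), SHARDS)
        return sorted(hit for partial in partials for hit in partial)

print(scatter_gather("edge hpc"))   # -> ['doc-1', 'doc-4']
```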
Tabor Research believes that over the last few years a number of technology and market factors have combined to create new market opportunities outside the boundaries of the traditional HPC market. We believe this “Edge HPC” market is currently generating significant revenues and has strong growth potential. Over time, we expect it to exceed the tHPC market due to the scope of domains it will impact.