September 04, 2012
While analytics is a well-established field, it remains a constant topic in IT news: to stay competitive, or even relevant, an analytics application must be able to exploit all relevant data to deliver business value and help the organization better meet its objectives. In many ways, we have seen an evolution from marketing concept and hype to a genuine IT and business challenge. Analytics has become one of the leading indicators of the impact of “Big Data.” Specifically, Big Data growth in many organizations has reached an inflection point where a truly effective Big Data infrastructure is now needed to return commensurate “big value” to the business.
Big Data analytics efforts require high performance computing power, the ability to access and integrate large and varied data sources, and the execution of many discrete analytics routines that make up larger workflows. To bring these elements together requires scalable solutions that support distributed computing on multiple servers, dynamic workload balancing, data integration, high availability, and job prioritization.
As Big Data growth stresses analytics infrastructure, users, developers, and vendors constantly seek to remove bottlenecks by developing new approaches to handling high-performance analytics workflows. The most effective recent approaches are based on parallelism: the ability of both the compute and storage elements in the infrastructure to parallelize work and execute concurrently, accelerating the analytics workflows.
Leveraging a grid
To put the importance of data analytics and business intelligence into perspective, consider that even in today’s era of tight IT budgets, companies regard these two technologies as enablers of business growth and increased productivity, according to a recent industry survey. As such, they are the two areas where companies said they are most likely to invest this year. Add Big Data to the mix, and the result is a forced re-evaluation of IT infrastructures.
Analytics vendors are now addressing Big Data analytics performance issues by relying on grid technology. That’s the case with SAS. It has optimized its analytics applications to take advantage of the benefits of a grid infrastructure to run workflows faster and more reliably. SAS Grid establishes a managed, shared environment to process large volumes of data and to run analytic programs more efficiently. It provides critical capabilities necessary for today's business analytics environments, including workload balancing, job prioritization, high availability and built-in failover, parallel processing and resource assignment, and monitoring.
SAS Grid Manager helps organizations take advantage of these capabilities. In particular, SAS Grid Manager provides a central point for administering policies, programs, and job prioritization. With SAS Grid Manager, IT can gain flexibility and meet service levels by easily reassigning computing resources to meet peak workloads or changing business demands.
Since the technology leverages multiple servers in a grid, the end result is a more highly available analytics environment. If a server fails, its jobs can be seamlessly transitioned to another server. This capability lets IT staff perform maintenance on specific servers without interrupting analytics jobs, and allows the introduction of additional computing resources without disruption to the business.
Additionally, the grid approach can help speed the time to results. The reason: individual jobs can be divided into subtasks that run in parallel on the best available hardware resources in a grid environment. The SAS programs best suited for parallel processing are those with large data sets and long run times, and those with multiple executions of independent tasks against large data sets. Faster processing of analytical jobs accelerates decision making across a company.
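To make the pattern concrete, here is a minimal sketch in plain Python of the divide-into-subtasks idea, not SAS Grid’s actual API: a large dataset is split into independent partitions, each partition is analyzed concurrently, and the partial results are combined. All names here are illustrative.

```python
# A minimal sketch of grid-style parallelism (plain Python, not the SAS
# Grid API): split a large job into independent subtasks, run them
# concurrently, then combine the partial results.
from concurrent.futures import ProcessPoolExecutor

def analyze_partition(partition):
    # Placeholder analytics routine run against one slice of the data.
    return sum(partition), len(partition)

def run_in_parallel(dataset, n_workers=4):
    size = max(1, len(dataset) // n_workers)
    partitions = [dataset[i:i + size] for i in range(0, len(dataset), size)]
    # Each subtask runs concurrently; a real grid scheduler would also
    # balance the load across servers and restart failed subtasks.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(analyze_partition, partitions)
    total, count = (sum(values) for values in zip(*partials))
    return total / count  # overall mean assembled from the partial sums

if __name__ == "__main__":
    print(run_in_parallel(list(range(1_000_000))))
```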
Grid file system requirements
To realize the benefits of a grid infrastructure, storage solutions that work in conjunction with the analytics applications must have certain characteristics.
In general, a storage solution must complement four operations found in a typical Big Data analytics workflow. These operations are the ingestion of data, the storage of that data, the processing and analysis of the datasets, and the distribution of the results to the appropriate people within an organization. In particular, to work with SAS Grid Manager, the storage solution must be able to handle the workloads that can be generated, scale in both I/O and throughput, and have a shared file system.
For example, the solution must be able to handle large sorts, which are quite commonly found in Big Data analytics operations. With large sorts, the files used are typically larger than system memory and thus cannot be retained in local cache. As a result, large sorting workloads require a file system and storage solution that can deliver high throughput and high I/O.
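As a rough illustration of why such sorts stress storage rather than CPU, consider the classic external merge sort pattern, sketched below in Python with hypothetical file paths; this is not SAS’s actual sort implementation. Every record is written to and read back from disk at least once beyond the initial read, so the job is bounded by storage throughput and I/O.

```python
# External merge sort sketch: when the data exceeds memory, sort each
# memory-sized chunk, spill it to disk as a sorted run, then stream-merge
# all runs into the final output.
import heapq
import os
import tempfile
from itertools import islice

def external_sort(input_path, output_path, chunk_lines=1_000_000):
    runs = []
    with open(input_path) as src:
        while True:
            chunk = list(islice(src, chunk_lines))  # bounded by memory
            if not chunk:
                break
            chunk.sort()
            run = tempfile.NamedTemporaryFile("w+", delete=False)
            run.writelines(chunk)  # spill the sorted run to disk
            run.seek(0)
            runs.append(run)
    with open(output_path, "w") as dst:
        # The streaming merge reads every run concurrently: exactly the
        # sustained, parallel I/O the storage solution must absorb.
        dst.writelines(heapq.merge(*runs))
    for run in runs:
        run.close()
        os.unlink(run.name)
```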
The storage solution must also provide low latency access to file system metadata. File system metadata includes such information as lists of files in a directory, file attributes such as file permissions or file creation date, and other information about the physical data.
File system metadata is updated whenever a file is created, modified, deleted, or extended; when a lock is obtained or dropped; and, on some file systems, when a file is accessed. Shared file systems make all file system metadata and file locking information available to all systems, and various shared file systems take different approaches to maintaining that metadata. An appropriate solution that complements SAS Grid must offer low latency access to this information.
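One crude but instructive way to gauge that latency is to time a burst of stat() calls against a mount point. The Python probe below is a hypothetical illustration, not a vendor tool, and the mount path shown is an assumption.

```python
# Hypothetical probe: gauge a mount point's metadata responsiveness by
# timing a burst of stat() calls, which touch file attributes
# (permissions, sizes, timestamps) but read no file data. On a shared
# file system, each call round-trips to wherever the metadata lives.
import os
import time

def stats_per_second(root, limit=10_000):
    paths = []
    for dirpath, _dirs, files in os.walk(root):
        paths.extend(os.path.join(dirpath, name) for name in files)
        if len(paths) >= limit:
            break
    paths = paths[:limit]
    start = time.perf_counter()
    for path in paths:
        try:
            os.stat(path)  # pure metadata operation
        except OSError:
            pass  # file may have disappeared between walk and stat
    elapsed = time.perf_counter() - start
    return len(paths) / elapsed if elapsed else 0.0

# Example: print(f"{stats_per_second('/mnt/shared'):.0f} stats/sec")
```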
Other key characteristics to achieve the best performance in Big Data analytics workflows include the ability to separate data and metadata to optimize performance, linear scalability of bandwidth and I/O, efficient file locking, resilience to fragmentation, and no single point of failure.
A storage solution with these characteristics and file system attributes can help consolidate analytics operations within an organization, all while improving performance.
DDN as your technology partner
Traditional storage solutions cannot meet the heightened performance and scalability demands of Big Data analytics. To reduce latency and improve the performance of Big Data analytics, DataDirect Networks (DDN) storage solutions offer massive scalability and the requisite shared file system.
DDN solutions are optimized for I/O and throughput, adaptable to any workload, and extremely scalable in capacity and density. Built on the company’s Storage Fusion Architecture (SFA), the DDN SFA product line offers a number of firsts, including up to 40 GB/s of throughput for both reads and writes, 1.4 million IOPS (1.7 million cached), and up to 3.6 PB per rack. Furthermore, DDN lets organizations balance performance against cost by intermixing SSD, SAS, and SATA drives.
Efficiently delivering the highest levels of performance to parallelized compute requires parallel I/O. DDN’s GRIDScaler File Storage System integrates DDN’s Storage Fusion Architecture, which delivers massively parallel I/O capabilities, with IBM’s GPFS (General Parallel File System) to eliminate performance bottlenecks, simplify deployment and management, and significantly improve TCO.
GRIDScaler outperforms traditional enterprise NAS and SAN storage and uses sophisticated file locking engines to support shared access and parallel reads and writes from many clients. GRIDScaler has no single point of failure and separates data from metadata to optimize performance based on workloads.
In benchmarks conducted with SAS, DDN GRIDScaler parallel file storage was found to be an excellent choice for SAS Grid. The test workloads consisted of 240 concurrent SAS processes that generated significant amounts of I/O, demands typical of enterprise-scale SAS applications.
The benchmark test found that DDN GRIDScaler achieved the fastest SAS Grid Computing run times ever recorded, delivering a sustained performance of 2.6 GB/sec with 240 concurrent processes.
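Put another way, 2.6 GB/sec sustained across 240 concurrent processes works out to roughly 11 MB/sec of steady throughput delivered to each process.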
Simply put, DDN GRIDScaler complements SAS Grid to help improve the performance of today’s Big Data analytics workflows.
For more about the SAS work with SAS Grid and DDN GRIDScaler, see:
SAS® 9.3 in a Distributed (Grid) Environment with DataDirect™ Networks GRIDScaler File Storage System: http://support.sas.com/rnd/scalability/grid/SASGridDDN.pdf
Also see: A Survey of Shared File Systems Determining the Best Choice for your Distributed Applications: http://support.sas.com/rnd/scalability/papers/SurveyofSharedFilepaperApr25final.pdf
For more information about DDN GRIDScaler, visit: http://www.ddn.com/products/gridscaler-file-storage-system.