July 13, 2010
New Cluster Compute Instances provide scalable, elastic, cost-efficient AWS cloud resources for advanced HPC workloads
SEATTLE, July 13 -- Amazon Web Services LLC, an Amazon.com company, today announced Cluster Compute Instances for Amazon EC2, a new instance type specifically designed for high-performance computing (HPC) applications and other demanding network-bound applications. Customers with complex computational workloads such as tightly coupled parallel processes, or with applications sensitive to network performance, can now achieve the same high compute and networking performance provided by custom-built infrastructure while benefiting from the elasticity, flexibility and cost advantages of Amazon EC2. To get started using Cluster Compute Instances for Amazon EC2, visit http://aws.amazon.com.
Prior to Cluster Compute Instances for Amazon EC2, organizations with advanced HPC needs had to fund expensive in-house compute clusters built from dedicated, purpose-built hardware. As a result, demand for high-performance cluster computing often exceeds the capacity available to many organizations, and many projects are cut altogether or wait in long queues for access to shared resources. With Cluster Compute Instances, businesses and researchers now have access to the high-performance computing capabilities they need -- with pay-as-you-go pricing, the ability to scale on demand, and no upfront investment.
Cluster Compute Instances provide similar functionality to other Amazon EC2 instances but have been specifically engineered for high-performance compute and networking. Cluster Compute Instances provide more CPU than any other Amazon EC2 instance. Customers can also group Cluster Compute Instances into clusters, allowing applications to get the low-latency network performance required for tightly coupled, node-to-node communication (typical of many HPC applications). Cluster Compute Instances also provide significantly increased network throughput, making them well suited for applications that need to perform network-intensive operations. Depending on usage patterns, applications can see up to 10 times the network throughput of the largest current Amazon EC2 instance types.
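The grouping described above can be sketched in code. This is a minimal, hypothetical illustration using the boto3 client and the cluster placement-group API as they exist today (the 2010 launch exposed equivalent calls through the original EC2 API); the AMI ID and group name are placeholders, not values from the announcement.

```python
# Sketch: launch a group of Cluster Compute Instances so they share the
# low-latency, full-bisection networking needed for tightly coupled MPI-style
# workloads. boto3 and the "cluster" placement strategy are assumptions here.

def cluster_launch_params(ami_id, group_name, count):
    """Build the request parameters for a tightly coupled cluster."""
    create_group = {"GroupName": group_name, "Strategy": "cluster"}
    run_instances = {
        "ImageId": ami_id,
        "InstanceType": "cc1.4xlarge",   # the Cluster Compute instance type
        "MinCount": count,               # all-or-nothing: request the full
        "MaxCount": count,               # node count so the cluster starts whole
        "Placement": {"GroupName": group_name},
    }
    return create_group, run_instances

# With AWS credentials configured, the actual calls would be roughly:
#   import boto3
#   ec2 = boto3.client("ec2")
#   group, run = cluster_launch_params("ami-xxxxxxxx", "hpc-group", 8)
#   ec2.create_placement_group(**group)
#   ec2.run_instances(**run)
```

Requesting MinCount equal to MaxCount matters for HPC jobs: a tightly coupled solver generally cannot start with a partial cluster, so it is better to fail fast than to launch a fraction of the nodes.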
"Businesses and researchers have long been utilizing Amazon EC2 to run highly parallel workloads ranging from genomics sequence analysis and automotive design to financial modeling. At the same time, these customers have told us that many of their largest, most complex workloads required additional network performance," said Peter De Santis, general manager of Amazon EC2. "Cluster Compute Instances provide network latency and bandwidth that previously could only be obtained with expensive, capital intensive, custom-built compute clusters. For perspective, in one of our pre-production tests, an 880 server sub-cluster achieved 41.82 TFlops on a LINPACK test run -- we're very excited that Amazon EC2 customers now have access to this type of HPC performance with the low per-hour pricing, elasticity, and functionality they have come to expect from Amazon EC2."
The National Energy Research Scientific Computing Center (NERSC) at the Lawrence Berkeley National Laboratory is the primary high-performance computing facility supporting scientific research sponsored by the U.S. Department of Energy. "Many of our scientific research areas require high-throughput, low-latency, interconnected systems where applications can quickly communicate with each other, so we were happy to collaborate with Amazon Web Services to test drive our HPC applications on Cluster Compute Instances for Amazon EC2," said Keith Jackson, a computer scientist at the Lawrence Berkeley National Lab. "In our series of comprehensive benchmark tests, we found our HPC applications ran 8.5 times faster on Cluster Compute Instances for Amazon EC2 than the previous EC2 instance types."
MathWorks is a leading developer and supplier of software for technical computing and model-based design. The company now enables customers using MATLAB and Parallel Computing Toolbox on their desktops to scale data-intensive computations up to the greater compute power of Cluster Compute Instances for Amazon EC2 running MATLAB Distributed Computing Server. "Cluster Compute Instances give MATLAB users the opportunity to test and run their high performance computing problems for data-intensive applications in the cloud at a price and performance level that allows us to continually innovate and meet customer needs," said Silvina Grad-Freilich, senior manager of parallel computing at MathWorks. "We're thrilled to allow our customers to leverage Amazon Web Services as an easily accessible way to meet their needs for increased compute power."
Adaptive Computing provides automation intelligence software, powered by its Moab technology, for HPC, datacenter and cloud environments. Moab is the management layer for more than 50 percent of the resources at the top computing systems in the world. "The availability of Cluster Compute Instances on Amazon EC2 gives organizations access to on-demand and highly available HPC resources," said Michael Jackson, COO and president of Adaptive Computing. "For years we've helped customers build and manage the world's most complex large-scale computing clusters, and now with Cluster Compute Instances, customers can leverage Adaptive Computing's familiar automation software tools to manage HPC resources on Amazon's leading cloud infrastructure."
David Patterson is a world-renowned expert, author and academic who has been recognized with more than 30 awards for research, teaching and service. He is the co-inventor of RAID, RISC and several other computer innovations and has taught computer architecture at University of California, Berkeley, since joining the faculty in 1977. "The high-performance networking of Cluster Compute Instances for Amazon EC2 fills an important need among scientific computing professionals, making the on-demand and scalable cloud environment more viable for technical computing," said Patterson.
Cluster Compute Instances complement other AWS offerings designed to make large-scale computing easier and more cost effective. For example, Public Data Sets on AWS provide a repository of useful public data sets that can be easily accessed from Amazon EC2, allowing fast, cost-effective data analysis by researchers and businesses. These large data sets are hosted on AWS at no charge to the community. Additionally, the Amazon Elastic MapReduce service enables low-friction, cost effective implementation of the Hadoop framework on Amazon EC2. Hadoop is a popular tool for analyzing very large data sets in a highly parallel environment, and Amazon EC2 provides the scale-out environment to run Hadoop clusters of all sizes.
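To make the Hadoop mention concrete, the canonical MapReduce example is a word count. The pure-Python sketch below mirrors the map, shuffle, and reduce phases locally; a real Elastic MapReduce job would ship the same logic (for example, via Hadoop Streaming) to a cluster of EC2 instances, and the function names here are illustrative, not part of any AWS API.

```python
# Word count in the MapReduce style: the model Hadoop parallelizes across
# an EC2 cluster, shown here as plain single-process Python.
from collections import Counter
from itertools import chain

def map_phase(lines):
    """Emit (word, 1) pairs for every word, as a streaming mapper would."""
    return chain.from_iterable(((w, 1) for w in line.split()) for line in lines)

def reduce_phase(pairs):
    """Sum the counts per word, as the reducers do after the shuffle."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

print(reduce_phase(map_phase(["to be or", "not to be"])))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

Because each mapper sees only its own slice of the input and reducers only aggregate independent keys, the computation scales out naturally, which is what makes Amazon EC2 a good fit for Hadoop clusters of all sizes.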
For more information on Amazon EC2 and Cluster Compute Instances, visit http://aws.amazon.com/hpc-applications.
Amazon.com, Inc. (NASDAQ: AMZN), a Fortune 500 company based in Seattle, opened on the World Wide Web in July 1995 and today offers Earth's Biggest Selection. Amazon.com, Inc. seeks to be Earth's most customer-centric company, where customers can find and discover anything they might want to buy online, and endeavors to offer its customers the lowest possible prices. Amazon.com and other sellers offer millions of unique new, refurbished and used items in categories such as Books; Movies, Music & Games; Digital Downloads; Electronics & Computers; Home & Garden; Toys, Kids & Baby; Grocery; Apparel, Shoes & Jewelry; Health & Beauty; Sports & Outdoors; and Tools, Auto & Industrial. Amazon Web Services provides Amazon's developer customers with access to in-the-cloud infrastructure services based on Amazon's own back-end technology platform, which developers can use to enable virtually any type of business. Kindle and Kindle DX are the revolutionary portable readers that wirelessly download books, magazines, newspapers, blogs and personal documents to a crisp, high-resolution electronic ink display that looks and reads like real paper. Kindle and Kindle DX utilize the same 3G wireless technology as advanced cell phones, so users never need to hunt for a Wi-Fi hotspot. Kindle is the #1 bestselling product across the millions of items sold on Amazon.
Source: Amazon.com, Inc.