September 20, 2012
FREMONT, Calif., Sept. 20 — AMAX, a leading innovator of High Performance Computing (HPC), dynamic Enterprise IT, and custom Appliance Manufacturing solutions, has updated its entire storage-optimized StorMax(TM) product line with full support for 4TB Enterprise hard disk drives, including 4U storage arrays with a maximum capacity of 240TB per unit, to answer the call of the big data and cloud/datacenter markets, where storage density, watts-per-gigabyte, and cost-per-GB are critical parameters. Hungry for colossal storage, cloud and data storage/analytics workloads are redefining the datacenter, driving adoption of extremely dense, power-efficient server and storage architectures to help manage explosive petabyte growth.
"The need for additional storage is expanding multi-directionally in magnitude, propelled by the movement of information to the cloud, and the core philosophy of data analytics that all data must be retained for the potential of mining invaluable business intelligence," said James Huang, Product Manager at AMAX. "Right now there is more data being generated than storage available meaning robust storage capability must catch up to demand -- AMAX's answer is the StorMax line, featuring validated HDDs with the highest per drive capacity on the market at 4TB, allowing us to deploy turnkey storage solutions where density, capacity and power efficiency are of the utmost necessity."
The density and performance impact of the 4TB drives is showcased in one of AMAX's most highly deployed storage offerings and cluster building blocks, the StorMax(TM) J4502 60-drive JBOD, featuring high-speed 6Gb/s SAS 2.0 connections. Because each 4TB drive offers 33 percent more capacity than a 3TB drive, the upgrade allows IT managers to realize an astonishing 2.4 petabytes of storage in the footprint of a standard 42U rack. This increase in capacity per HDD makes it possible to pack more available storage into each system without investing in additional server enclosures, thus maintaining a minimal footprint and power consumption.
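The density figures quoted above follow directly from the drive count and per-drive capacity. The short Python sketch below reproduces that arithmetic; the 8W-per-drive figure used for the watts-per-gigabyte illustration is a hypothetical placeholder, not an AMAX specification.

```python
# Illustrative arithmetic for the StorMax density figures quoted above.
# The per-drive power figure is a hypothetical placeholder, not an AMAX spec.

DRIVE_CAPACITY_TB = 4        # 4TB Enterprise HDD
DRIVES_PER_J4502 = 60        # StorMax J4502 4U JBOD
ENCLOSURE_HEIGHT_U = 4
RACK_HEIGHT_U = 42

# Capacity of one J4502 enclosure: 60 x 4TB = 240TB
enclosure_tb = DRIVES_PER_J4502 * DRIVE_CAPACITY_TB

# Whole enclosures that fit in a standard 42U rack: 42 // 4 = 10
enclosures_per_rack = RACK_HEIGHT_U // ENCLOSURE_HEIGHT_U

# Rack capacity: 10 x 240TB = 2,400TB = 2.4PB
rack_tb = enclosures_per_rack * enclosure_tb

# Per-drive capacity gain of 4TB over 3TB drives: (4 - 3) / 3 = 33%
capacity_gain = (4 - 3) / 3

# Hypothetical watts-per-gigabyte, assuming 8W per drive
WATTS_PER_DRIVE = 8.0
watts_per_gb = (DRIVES_PER_J4502 * WATTS_PER_DRIVE) / (enclosure_tb * 1000)

print(f"J4502 enclosure: {enclosure_tb} TB")                    # 240 TB
print(f"42U rack: {rack_tb} TB ({rack_tb / 1000} PB)")          # 2400 TB, 2.4 PB
print(f"Capacity gain over 3TB drives: {capacity_gain:.0%}")    # 33%
print(f"Illustrative watts-per-GB: {watts_per_gb:.4f}")         # 0.0020
```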
All AMAX StorMax(TM) models have been updated to support 4TB Enterprise hard disk drives, including:
-- StorMax(TM) J4502 (240TB) -- Maximum capacity 4U 60x 3.5" JBOD enclosure
-- StorMax(TM) S4450 (180TB) -- Ultra-dense 4U 45x 3.5" UP Intel Xeon Storage Server
-- StorMax(TM) Xn-42303 (144TB) -- High capacity 4U 36x 3.5" DP Intel Xeon Storage Server
-- StorMax(TM) Xr-22302 (48TB) -- Compact 2U 12x 3.5" DP Intel Xeon Storage Server
-- StorMax(TM)-X3 (2,400TB) -- Storage without limitation in a 42U rack
As with all AMAX storage servers, the StorMax(TM) series configurations can be tailored to each customer's specific needs as standalone systems or as building blocks for robust clusters, allowing for flexible configurations that match target budgets. For more information about AMAX products and services, visit www.amax.com.
Founded in 1979, AMAX is a trusted leader in Custom Server and Storage Solutions in North America. Headquartered in Fremont, California, AMAX also operates several branch offices throughout North America and multiple locations in China servicing the APAC region. AMAX's expertise drives two key divisions that deliver customized computing solutions to a wide range of industries: AMAX's Appliance Manufacturing Division provides efficient and top-of-the-line manufacturing solutions and global logistics to OEM customers while AMAX's Enterprise & High Performance Computing Division provides innovative and scalable custom cluster, server, and storage products developed for HPC, Cloud, Virtualization and Big Data applications.
Source: AMAX Technologies
In quieter times, sounding the bell of funding big science with big systems tends to resonate further than when ears are already burning with sour economic and national security news. For exascale's future, however, the time could be ripe to instill some sense of urgency....
In a recent solicitation, the NSF laid out needs for furthering its scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the agency's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it notes acts as a desktop supercomputer.
May 22, 2013 |
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today. That could be made possible through recent advancements with the Raspberry Pi computers.
May 16, 2013 |
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 15, 2013 |
Supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) have worked on important computational problems such as the collapse of the atomic state and the optimization of chemical catalysts, and are now modeling popping bubbles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so with technologies that emphasize affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this White Paper by Analysts Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and addressing a wide range of datacenter cooling requirements.