November 13, 2012
SALT LAKE CITY, Nov. 13 – Scalable Informatics, Inc. (Scalable), an innovative provider of tightly coupled, high-performance storage and computing solutions for HPC, big data, and data-intensive customers, today announced a new member of the JackRabbit product line, designed for tightly coupled, very large-scale data storage and processing tasks.
“Scalable Informatics is very enthusiastic about ARM-based systems,” said Dr. Joseph Landman, CEO of Scalable Informatics. “By leveraging power-efficient computing platforms based upon the Calxeda EnergyCore product, the new JackRabbit-EC has set a high bar in computational and storage density, with 240 TeraBytes (TB) of raw capacity in a 4U chassis connected to up to 192 processor cores on 12 EnergyCards. Additionally, this same unit is capable of leveraging 60x 960GB SSD devices, offering even greater energy savings while simultaneously providing tremendous performance for big data and storage applications. These units work well within our siCluster systems, providing up to 1920 processor cores and 2.4 PetaBytes (PB) per rack.”
“Calxeda is excited to partner with Scalable Informatics, whose expertise in data-intensive solutions is well known in the industry,” said Karl Freund, VP of Marketing at Calxeda. “This is a great example of how ARM-based solutions in the datacenter can deliver dramatically more throughput per watt and per dollar when lashed together with Calxeda's innovative EnergyCore Fabric.”
“We are providing 3x the processor core density and 4x the bandwidth of other systems, and doing so at a small fraction of their overall power consumption,” said Dr. Landman. “This paves the way for ultra-dense, extremely power-efficient, tightly coupled computing and storage, which is the way forward for big data and data-intensive analyses.”
Since their initial development, Scalable's storage systems have consistently demonstrated best-of-breed performance at aggressive price points. Customers in markets as diverse as Financial Services, Next Generation Sequencing, Medical and Geospatial Imaging, Scientific Computing, Engineering, Media Serving, Animation, and Rendering have relied upon Scalable Informatics' solutions to advance their mission objectives while reducing costs.
Scalable is offering these Extreme Density units with 60x top-mounted 2.5/3.5-inch SATA, SAS, or SSD drives: up to 240 TB of spinning disk or 57.6 TB of SSD raw capacity in 4U of rack space. All units are usable in Scalable's siCluster product offering and as part of cluster file systems such as Ceph.
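The capacity and core-count figures quoted in this release are mutually consistent; a minimal sketch of the arithmetic, assuming 4 TB spinning drives (implied by 240 TB across 60 bays), 16 cores per EnergyCard (implied by 192 cores across 12 cards), and 10 chassis per rack (implied by the per-rack totals), none of which are stated explicitly in the release:

```python
# Arithmetic behind the JackRabbit-EC figures quoted above.
# Assumed (not stated in the release): 4 TB spinning drives,
# 16 cores per EnergyCard, 10 chassis per rack.
BAYS_PER_CHASSIS = 60
SPINNING_DRIVE_TB = 4        # 240 TB / 60 bays implies 4 TB drives
SSD_GB = 960
CARDS_PER_CHASSIS = 12
CORES_PER_CARD = 16          # 192 cores / 12 cards
CHASSIS_PER_RACK = 10        # 1920 cores / 192 cores per chassis

spinning_tb = BAYS_PER_CHASSIS * SPINNING_DRIVE_TB     # 240 TB per chassis
ssd_tb = BAYS_PER_CHASSIS * SSD_GB / 1000              # 57.6 TB per chassis
cores = CARDS_PER_CHASSIS * CORES_PER_CARD             # 192 cores per chassis

rack_cores = cores * CHASSIS_PER_RACK                  # 1920 cores per rack
rack_pb = spinning_tb * CHASSIS_PER_RACK / 1000        # 2.4 PB per rack

print(spinning_tb, ssd_tb, cores, rack_cores, rack_pb)
```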
About Calxeda
Founded in January 2008, Calxeda brings new performance density to the datacenter with revolutionary server-on-a-chip technology. Calxeda currently employs 100 professionals in Austin, Texas, and the Silicon Valley area. Calxeda is funded by a unique syndicate comprising industry-leading venture capital firms and semiconductor innovators, including ARM Holdings, Advanced Technology Investment Company, Austin Ventures, Battery Ventures, Flybridge Capital Partners, Highland Capital Partners, and Vulcan Capital.
About Scalable Informatics
Scalable Informatics, Inc. is a proven high-performance storage and computing solutions company, delivering pragmatic solutions for computation- and data-intensive problems. A privately owned company, Scalable designs, builds, and supports some of the highest-performing storage and computing systems and cluster/cloud hardware available on the market. Scalable Informatics provides support, consulting, and development services as well as on-demand computing and storage capability.
Source: Scalable Informatics