November 13, 2012
SALT LAKE CITY, Nov. 13 – Scalable Informatics, Inc. (Scalable), an innovative provider of tightly coupled high performance storage and computing solutions for HPC, big data, and data-intensive customers, today announced a new member of the JackRabbit product line, designed for very large scale data storage and processing tasks.
“Scalable Informatics is very enthusiastic about ARM-based systems,” said Dr. Joseph Landman, CEO of Scalable Informatics. “By leveraging power-efficient computing platforms based upon the Calxeda EnergyCore product, the new JackRabbit-EC has set a high bar in computational and storage density, with 240 TeraBytes (TB) raw capacity in a 4U chassis connected to up to 192 processor cores on 12 EnergyCards. Additionally, this same unit is capable of leveraging 60x 960GB SSD devices, offering even greater energy savings while simultaneously providing tremendous performance for big data and storage applications. These units work well within our siCluster systems, providing up to 1920 processor cores and 2.4 PetaBytes (PB) per rack.”
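The quoted figures are internally consistent, which a quick back-of-the-envelope check confirms. A minimal sketch follows; the per-drive capacity of 4 TB and the 16 cores per EnergyCard (four quad-core EnergyCore SoCs) are assumptions inferred from the totals, not stated directly in the release:

```python
# Sanity-check the density figures quoted in the release.
# Assumed (derived, not stated): 4 TB per spinning drive,
# 16 EnergyCore cores per EnergyCard, 10 units per rack.

DRIVES_PER_4U = 60
HDD_TB = 4.0             # assumed per-drive capacity
SSD_GB = 960             # stated SSD capacity per device

CARDS_PER_4U = 12
CORES_PER_CARD = 16      # assumed: 4 quad-core SoCs per card

raw_hdd_tb = DRIVES_PER_4U * HDD_TB            # 240.0 TB raw per 4U
raw_ssd_tb = DRIVES_PER_4U * SSD_GB / 1000     # 57.6 TB SSD per 4U
cores_per_4u = CARDS_PER_4U * CORES_PER_CARD   # 192 cores per 4U

UNITS_PER_RACK = 10      # assumed, implied by the per-rack totals
rack_cores = UNITS_PER_RACK * cores_per_4u     # 1920 cores per rack
rack_pb = UNITS_PER_RACK * raw_hdd_tb / 1000   # 2.4 PB per rack

print(raw_hdd_tb, raw_ssd_tb, cores_per_4u, rack_cores, rack_pb)
```

Each computed value matches the corresponding number in the quote above, so the per-rack totals follow directly from ten 4U units per rack.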
“Calxeda is excited to partner with Scalable Informatics, whose expertise in data-intensive solutions is well known in the industry,” said Karl Freund, VP Marketing of Calxeda. “This is a great example of where ARM-based solutions in the datacenter can deliver dramatically more throughput per watt and per dollar when lashed together with Calxeda's innovative EnergyCore Fabric.”
“We are providing 3x the processor core density and 4x the bandwidth compared to other systems, and doing so at a small fraction of the overall power consumption of other solutions,” said Dr. Landman. “This is paving the way for ultra-dense, extremely power-efficient, tightly coupled computing and storage, which is the way forward for big data and data-intensive analyses.”
Since their initial development, Scalable's storage systems have consistently demonstrated best-of-breed performance at aggressive price points. Customers in markets as diverse as Financial Services, Next Generation Sequencing, Medical and Geospatial Imaging, Scientific Computing, Engineering, Media Serving, Animation, and Rendering have relied upon Scalable Informatics' solutions to successfully advance their mission objectives while simultaneously reducing costs.
Scalable is offering these Extreme Density units with 60 top-mounted 2.5/3.5-inch SATA, SAS, or SSD drives, providing up to 240 TB of spinning disk or 57.6 TB of SSD raw capacity in 4U of rack space. All units are usable in Scalable's siCluster product offering and as part of cluster file systems such as Ceph.
About Calxeda
Founded in January 2008, Calxeda brings new performance density to the datacenter with revolutionary server-on-a-chip technology. Calxeda currently employs 100 professionals in Austin, Texas and the Silicon Valley area. Calxeda is funded by a unique syndicate comprising industry-leading venture capital firms and semiconductor innovators, including ARM Holdings, Advanced Technology Investment Company, Austin Ventures, Battery Ventures, Flybridge Capital Partners, Highland Capital Partners and Vulcan Capital.
About Scalable Informatics
Scalable Informatics, Inc. is a proven high performance storage and computing solutions company, delivering pragmatic solutions for computation- and data-intensive problems. A privately owned company, Scalable designs, builds, and supports some of the highest-performing storage and computing systems, and cluster/cloud hardware, available in the market. Scalable Informatics provides support, consulting, and development services as well as on-demand computing and storage capability.
Source: Scalable Informatics