November 21, 2012
Last week at SC12 in Salt Lake City, Convey pulled the lid off its MX big data-driven architecture, designed to shine on the graph analytics problems that were at the heart of the show’s unmistakable data-intensive computing thrust this year. The new MX line is designed to exploit massive degrees of parallelism while efficiently handling hard-to-partition big data applications. Read more…
April 24, 2012
Convey Computer has launched its newest x86-FPGA "hybrid-core" server. Dubbed the HC-2, it represents the first major upgrade of the system since the company introduced the HC-1 product back in 2008. The new offering promises much better performance, but in a similar price range to the original system. Read more…
April 2, 2012
Advanced architectures based on reconfigurable computing can reduce application run times from hours to minutes and address problem sizes unattainable with commodity servers. The Convey hybrid-core computer systems combine the ease-of-deployment of a commodity server with the acceleration possible with reconfigurable, application-specific hardware. The resulting acceleration greatly reduces cost of ownership (by reducing many racks of commodity systems to just a few), and fundamentally improves research quality by allowing more accurate, previously impractical approaches. Read more…
October 7, 2011
Convey recently noted that HPC is “no longer just numerically intensive, it’s now data-intensive—with more and different demands on HPC system architectures.” The company claims that the “whole new HPC” gathered under the banner of data-intensive computing possesses a number of unique characteristics, and it sees unique opportunities both in the data itself and in new memory and co-processor architectures. Read more…
Whether an organization chooses a cloud for general business needs or a highly tailored workload, the spectrum of offerings and configurations can be overwhelming. To help you navigate the various cloud options available today, we're breaking down your options, exploring pros and cons, and sharing ways to keep your options open and your business agile as you execute your cloud strategy.
Researchers in academic labs and commercial R&D groups continue to need more compute capacity, which means leveraging the latest innovations in HPC technologies as well as an assortment of resources to meet the unique needs of different workloads. Increasingly, systems based on Arm processors are stepping into that role, offering low power consumption and strategic advantages for HPC workloads.
From scale-out clusters on commodity hardware, to flash-based storage with data temperature tiering, cloud-based object storage, and even tape, there are myriad considerations when architecting the right enterprise storage solution. In this round-table webinar, we examine case studies covering a variety of storage requirements. We’ll discuss when and where to use various storage media in accordance with use cases, and we’ll look at security challenges and emerging storage technology coming online.
© HPCwire. All Rights Reserved. A Tabor Communications Publication
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.