May 14, 2012
ARMONK, N.Y., May 14 -- IBM today announced powerful new server solutions aimed at helping x86 clients accelerate the journey to Smarter Computing. Designed to support the requirements of fast-growing businesses, the new products give data center managers the speed, performance and flexibility they need to implement and manage new and existing workloads.
The new products round out one of the industry's most comprehensive portfolios of x86 offerings. This lineup includes an energy-efficient blade server with breakthrough networking flexibility and a compact, affordable rack system that fills the price and performance gap between traditional two-processor servers and four-processor systems for handling high-performance computing and database-intensive applications. IBM also announced several new entry systems for small-to-mid-size infrastructure workloads.
Roland Hagan, vice president and business line executive, IBM System x business, said: "Unlike many of our competitors, IBM offers clients a range of solutions from economical infrastructure, to performance-optimized solutions, to expert integrated systems to help any size enterprise address its top business challenges."
IBM's newest blade server is the IBM BladeCenter HS23E, an affordable, energy-efficient platform for small to mid-sized organizations, offering breakthrough networking flexibility with built-in support for multiple networking technologies. With up to eight Ethernet ports and support for other advanced networking protocols, the HS23E can deliver up to 42 percent better compute performance than previous-generation servers. In addition, a free IBM FastSetup download automates server setup, reducing hands-on server deployments from days to minutes.
IBM also unveiled a new category of rack server, the IBM System x3750, a streamlined, entry-level four-processor offering created for technical computing and other floating-point-intensive workloads. The x3750 offers IBM's exclusive eXFlash storage technology and innovative "pay-as-you-grow" compute capability for clients who are outgrowing their dual-processor systems and need faster database performance and the ability to manage huge data volumes to gain business insights in seconds rather than hours. IBM has engineered the System x3750 to provide up to 25 percent more memory performance than comparable systems.
IBM rounded out its new portfolio of entry-to-mid level x86 offerings with new value rack servers that include the storage-dense IBM System x3630 M4, designed for cloud and department-level virtualization, virtual storage, and database workloads, and the IBM System x3530 M4, a new cost-optimized, dense, dual-processor system that is ideal for financial applications, web services, retail point-of-sale, and network infrastructure workloads. These new rack servers extend the IBM portfolio to new customers by lowering the cost of entry for two-processor workloads.
IBM also announced the new IBM Flex System x220, an entry-level compute node that further expands the flexibility and choice of compute options for the recently announced IBM PureSystems family of expert integrated systems. Performance of the IBM Flex System x220 is tuned for entry virtualization applications and infrastructure workloads such as office email and collaboration. Its advanced management capabilities give clients real-time system and workload management out of the box, from day one.
For additional information about IBM System x and BladeCenter solutions, visit: http://www.ibm.com/systems/x/news/2012_2Q_announce/index.html