January 16, 2013
SANTA CLARA, Calif., Jan. 16 – AMD today is launching the AMD Open 3.0 platform (formerly codenamed "Roadrunner"), a radical rethinking of the server motherboard designed to the standards developed by the Open Compute Project. AMD Open 3.0 enables substantial gains in computing flexibility, efficiency and operating cost by simplifying motherboard design with a single base product to address multiple enterprise workloads, including high-performance computing, cloud infrastructure and storage. This innovative design is optimized to eliminate features typically over-provisioned in traditional server offerings.
Today's servers are designed with a "one size fits most" approach incorporating many features and capabilities that inefficiently utilize space and power, increasing cost. Mega data centers have engineers developing optimized platforms with the minimum set of components for specific workloads. The result is a tailored solution with the ideal combination of power, space and cost. The AMD Open 3.0 platform is designed to easily enable IT professionals to "right size" the server to meet specific compute requirements. It is currently being evaluated by Fidelity Investments.
"We became involved with the Open Compute Project very early as we saw a pervasive demand for simplified, energy efficient servers," said Suresh Gopalakrishnan, corporate vice president and general manager, Server, AMD. "Our goal is to reduce data center power consumption and cost yet increase performance and flexibility – we believe that AMD Open 3.0 achieves this."
"This is a realization of the Open Compute Project's mission of 'hacking conventional computing infrastructure,'" said Frank Frankovsky, Chairman of the Open Compute Foundation and VP of Hardware Design and Supply Chain at Facebook. "What's really exciting for me here is the way the Open Compute Project inspired AMD and specific consumers to collaboratively bring our 'vanity-free' design philosophy to a motherboard that suited their exact needs."
AMD Open 3.0, powered by the recently announced AMD Opteron™ 6300 Series processors, can be installed without modification in all standard 19" rack environments as well as Open Rack environments. The AMD Open 3.0 motherboard is a 16" x 16.5" board designed to fit into 1U, 1.5U, 2U or 3U rack-height servers. It features two AMD Opteron 6300 Series processors, each with 12 memory sockets (four channels with three DIMMs each), six Serial ATA (SATA) connections per board, one dual-channel gigabit Ethernet NIC with integrated management, up to four PCI Express expansion slots, a mezzanine connector for custom module solutions, two serial ports and two USB ports. Specific PCI Express card support depends on the use case and chassis height.
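The memory topology described above (two sockets, four channels per socket, three DIMMs per channel) can be tallied with a few lines of arithmetic. The sketch below is illustrative only; the DIMM capacity passed to `max_memory_gb` is an assumed example value, not a figure from the announcement.

```python
# Sketch of the AMD Open 3.0 memory topology from the specs above:
# 2 sockets x 4 channels per socket x 3 DIMMs per channel.
SOCKETS = 2
CHANNELS_PER_SOCKET = 4
DIMMS_PER_CHANNEL = 3

dimm_slots = SOCKETS * CHANNELS_PER_SOCKET * DIMMS_PER_CHANNEL  # 24 slots total


def max_memory_gb(dimm_size_gb):
    """Total capacity if every slot holds a DIMM of the given size (assumed value)."""
    return dimm_slots * dimm_size_gb


print(dimm_slots)         # 24
print(max_memory_gb(16))  # 384
```

Populating all 24 slots with (hypothetical) 16 GB DIMMs would yield 384 GB per board; actual supported capacities depend on the DIMM types validated for the platform.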
Pre-production AMD Open 3.0 systems are currently available to select customers. Production systems from Tyan and Quanta Computer are expected to be available through Avnet Electronics Marketing, Penguin Computing and other system integrators before the end of Q1.
"We have eagerly awaited the AMD Open 3.0 platform, as it brings the benefits and spirit of the Open Compute Project to a much wider set of customers," said Charles Wuischpard, CEO, Penguin Computing. "As we deliver a new line of Penguin servers based on AMD Open 3.0 and AMD Opteron 6300 processors, our high performance computing, cloud and enterprise customers can now deploy application-specific systems using the same core building blocks that are cost, performance and energy optimized and, perhaps most important, consistent. We think this initiative eliminates unnecessary complexity and brings new levels of supportability and reliability to the modern data center."
Gopalakrishnan will discuss AMD's latest Open Compute development and its impact on the industry at the Open Compute Summit today in a session titled "An Introduction to the Open Compute 3.0 Modular Server" at 3:30 p.m. Static demonstrations of the AMD Open 3.0 platform will also be shown in AMD's booth B4.
AMD is a semiconductor design innovator leading the next era of vivid digital experiences with its ground-breaking AMD Accelerated Processing Units (APUs) that power a wide range of computing devices. AMD's server computing products are focused on driving industry-leading cloud computing and virtualization environments. AMD's graphics technologies are found in a variety of solutions ranging from game consoles and PCs to supercomputers.