December 01, 2006
Mercury Computer Systems Inc. has signed an agreement with Mentor Graphics to jointly develop and deliver a fully integrated electronic design automation (EDA) platform. The platform is based on the Cell Broadband Engine (BE) processor developed originally by IBM, Sony, and Toshiba for the Sony PlayStation 3 video game console and other consumer electronics devices.
"Our partnership with Mercury combines two companies' strengths that bring unprecedented software and hardware innovation to the EDA industry," said Walden Rhines, CEO, Mentor Graphics, Inc. "Increasing complexity across the entire IC design flow puts tremendous pressure on hardware platforms. Mercury's Cell BE compute cluster holds great promise to relieve this pressure while protecting the existing investment in hardware."
Mercury has worked closely with Mentor, a supplier of EDA solutions, to migrate its Calibre OPC product to Mercury's Cell BE processor-based high-performance compute cluster. Using Mercury's experience in optimizing application performance, and the Mercury MultiCore Plus SDK (Software Development Kit), the companies have achieved more than an order of magnitude performance improvement for the new-generation platform.
"Working with innovative market leaders like Mentor Graphics is an important part of Mercury's strategy to expand our market presence," said Jay Bertelli, President and CEO of Mercury Computer Systems. "Mercury's demonstrated leadership in architecting ultimate performance computing solutions that are optimized for specialized applications, together with Mentor's EDA market leadership, is an ideal combination to drive value for our mutual customers."
Hardware acceleration techniques are increasingly implemented in specialized applications that require more processing power than mainstream solutions can offer. The new platform combines a standard compute cluster with the Cell BE processor to deliver up to a 20x increase in the compute capacity now required for resolution enhancement technique (RET) applications at and below 45nm.
The amount of computation required to complete the RET flow at 45nm and below has increased dramatically compared to the 65nm node. Larger devices, larger optical diameters, more model kernels, through-process simulation, and more compute-intensive process modeling have pushed computation requirements for the 45nm node to between 5 and 20 times those of the 65nm node. Even if critical-layer jobs can complete on standard compute farms, the turn-around time (TAT) is unacceptable, and the required numbers of CPUs and licenses are too costly. In addition, companies moving to 45nm want to preserve their current hardware investment.
Mentor turned to Mercury as a partner to research and develop the new architecture. The partnership takes advantage of Mentor's expertise in computational lithography and Mercury's advanced HPC systems to construct the architecture for next-generation EDA applications.
The Cell processor clusters accelerate the image processing components of Mentor's optical proximity correction tool, Calibre nmOPC, enabling 4x to 10x improvements in run time with little to no increase in general-purpose computing requirements over the 65nm node. According to Mentor, this application of the Cell processor to computational lithography will lower the cost of ownership for the industry, in line with customer requirements for cost mitigation.
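To make the pattern concrete: OPC-style image processing is naturally data-parallel, so a layout can be split into tiles and a compute-heavy kernel applied to each tile on a separate core, which is the general approach the Cell BE's SPE cores are suited to. The sketch below is purely illustrative and is not Mentor's Calibre code; the toy 3x3 smoothing kernel stands in for a real optical-model convolution, and a real system would dispatch tiles to accelerator cores rather than Python threads.

```python
# Illustrative sketch only -- NOT Mentor's Calibre implementation.
# Shows the tile-parallel pattern described in the article: partition an
# image into tiles, run a compute-heavy kernel on each tile concurrently,
# and gather the results.
from concurrent.futures import ThreadPoolExecutor

def smooth_tile(tile):
    """Toy stand-in for an optical-model convolution on one tile:
    a 3x3 neighborhood average, clamped at the tile edges."""
    h, w = len(tile), len(tile[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += tile[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def process_layout(tiles):
    """Fan tiles out to workers and collect results in tile order.
    On a Cell BE cluster the per-tile work would run on SPE cores."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(smooth_tile, tiles))
```

Because each tile is processed independently, throughput scales with the number of worker cores until data movement (DMA into each SPE's local store, on Cell) becomes the bottleneck.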
"Cell BE's order of magnitude performance advantage for many types of image-based computing can make it a great fit for semiconductor-related applications like Mentor's new dense imaging software technology," said Anthony Yu, IBM's vice president, Semiconductor Industry Sales, Technology Collaboration Solutions. "Cell BE's deployment by an EDA company like Mentor Graphics can put Cell BE at the forefront of enabling advanced semiconductor processing."