November 18, 2008
Sun's open, petascale computing environment integrates high-density compute, networking, storage and software to deliver massive scalability and performance; next-generation Sun Constellation System doubles nodes, cores and bandwidth
AUSTIN, Texas, Nov. 18 -- SC08 -- Sun Microsystems, Inc. today announced that Sandia National Laboratories, the Forschungszentrum Jülich and RWTH Aachen University will power their next-generation compute clusters with the Sun Constellation System and Lustre parallel file system, in addition to other Sun systems, storage and software. The extreme scalability and performance of the Sun Constellation System and Lustre will enable Forschungszentrum Jülich and RWTH Aachen University to hit a peak performance of more than 200 teraflops each in the first phase of deployment.
"The Sun Constellation System is a petascale powerhouse and a prime example of the compute, Open Storage and networking innovations Sun is delivering to help customers tackle the most demanding HPC workloads," said John Fowler, executive vice president of the Systems Platforms Group at Sun Microsystems. "First deployed in TACC's Ranger supercomputer, today's announcement proves the Sun Constellation System is the solution of choice for leading-edge HPC customers in health, science, national security and engineering."
Sandia National Laboratories
Sandia National Laboratories has chosen Sun Microsystems' Sun Blade 6048 Modular System, which will include a CPU blade based on the next-generation Intel Xeon processor (codenamed Nehalem), for its next-generation compute cluster. For data storage, the cluster will use Open Storage products from Sun Microsystems, including the Lustre parallel file system and Sun Storage J4400 arrays. This system will provide a foundation for the laboratory's future scientific and engineering capacity needs as it furthers its national security mission.
Forschungszentrum Jülich
Forschungszentrum Jülich, Germany's largest HPC center, will deploy a 207-teraflop supercomputer early in 2009 based on Sun Blade servers and Bull NovaScale servers powered by the next-generation Intel Xeon processor, along with a high-performance input/output (I/O) system based on Solaris ZFS and the Lustre file system, which guarantees end-to-end data integrity. In addition to next-generation blade servers and the Lustre file system, the HPC solution will include next-generation Sun Fire servers and Sun Storage J4400 arrays. Sun will also install the complete network, based on the newest Sun InfiniBand Quad Data Rate (QDR) switches.
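The end-to-end data integrity mentioned above comes from ZFS storing a checksum with every block at write time and verifying it on every read, so silent on-disk corruption is detected rather than returned to the application. The following is a toy Python sketch of that idea only, not ZFS's actual implementation; the class name `ChecksummedStore` is illustrative.

```python
import hashlib

class ChecksummedStore:
    """Toy model of ZFS-style end-to-end integrity: each block is
    stored alongside a checksum computed at write time, and the
    checksum is re-verified on every read."""

    def __init__(self):
        self._blocks = {}  # key -> (sha256 hex digest, data)

    def write(self, key, data: bytes):
        # Compute the checksum once, at write time, and store it
        # with the block.
        self._blocks[key] = (hashlib.sha256(data).hexdigest(), data)

    def read(self, key) -> bytes:
        digest, data = self._blocks[key]
        # Re-verify on read: any silent corruption of the data is
        # caught here instead of being passed to the caller.
        if hashlib.sha256(data).hexdigest() != digest:
            raise IOError(f"checksum mismatch on block {key!r}")
        return data

store = ChecksummedStore()
store.write("b0", b"payload")
assert store.read("b0") == b"payload"

# Simulate silent on-disk corruption: the data changes but the
# stored checksum does not.
digest, _ = store._blocks["b0"]
store._blocks["b0"] = (digest, b"bitrot!")
try:
    store.read("b0")
except IOError:
    print("corruption detected")
```

In real ZFS the checksums live in the block pointers of a Merkle-like tree, which is what makes the verification end-to-end rather than per-device.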
The new system will be used for advanced research projects, such as energy management, nanoscience and atmospheric research.
The new Forschungszentrum Jülich supercomputer is part of the "Jülich Research on Petaflops Architectures" or JUROPA project, which was set up by the Forschungszentrum Jülich to investigate emerging cluster technologies and create a new class of cost-efficient supercomputers for petascale computing. Intel, Partec and Sun are contributors to the project, with Bull taking on overall responsibility as prime contractor for the design, delivery and maintenance of the supercomputer. Sun Professional Services will help with installation beginning in 2009, after the next-generation offerings are available.
Forschungszentrum Jülich pursues government-funded research in the fields of health, energy, the environment and information technology. With a staff of about 4,400, Jülich is one of the largest research centers in Europe.
RWTH Aachen University
After a comprehensive evaluation and tender process, RWTH Aachen University has again chosen an HPC system from Sun Microsystems. Sun plans to install the 200-teraflop cluster at the university in two phases, scheduled for completion at the end of 2010. Based on the next-generation Intel Xeon processor and the Sun Constellation System, the new supercomputer will feature state-of-the-art blade technology and SMP systems, plus Sun-developed QDR InfiniBand switches. Compared with products from other manufacturers, the new Sun QDR switches offer superior density, port availability and cabling. Data flow between the storage nodes and the supercomputer will be managed by the Lustre parallel file system.
Sun at Supercomputing 2008
Sun is previewing a range of HPC technologies at the Supercomputing 2008 show (Sun booth #1021), including the next-generation Sun Constellation System -- with double the storage capacity, double the cores and double the compute nodes of the existing Sun Constellation System -- as well as the "Genesis" storage array, new "Magnum" switch solutions, the "Glacier" cooling door and flash storage arrays. For more information on the HPC technologies Sun is showcasing at Supercomputing 2008, visit http://www.sun.com/hpc, or stop by Sun booth #1021 for live demonstrations. Sun's Supercomputing 2008 online press kit can be found at http://www.sun.com/aboutsun/media/presskits/2008-1114/.
About Sun Microsystems, Inc.
Sun Microsystems (NASDAQ:JAVA) develops the technologies that power the global marketplace. Guided by a singular vision -- "The Network Is The Computer" -- Sun drives network participation through shared innovation, community development and open source leadership. Sun can be found in more than 100 countries and on the Web at http://sun.com.
Source: Sun Microsystems, Inc.