June 23, 2009
HAMBURG, Germany, June 23 -- Sun Microsystems, Inc. today announced new products that enhance the Sun Constellation System, including a new InfiniBand (IB) Quad Data Rate (QDR) switch for optimal cluster interconnect performance, new Sun HPC Software Linux Edition 2.0 for faster and simpler cluster deployment, and Sun Grid Engine 6.2 Update 3 for ease of management. Sun is also demonstrating its storage leadership for HPC, with new enhancements to the Lustre file system, which manages data on nearly two-thirds of the Top 50 supercomputers and seven of the Top 10 supercomputers on the just-released Top500 list. For more information on Sun's HPC solutions, please visit: http://www.sun.com/hpc.
New Open Networking, Software and Sun Storage Bolster Power of Sun Constellation System
Sun is announcing new products today that demonstrate continued innovation and expansion of the Sun Constellation System. Designed using Sun's Open Network Systems architecture, the Sun Constellation System is one of the most integrated and balanced HPC systems available today. Highlights include:
Sun Constellation System Powers Two of the Top 10 Systems on Top500 List
The Sun Constellation System is expected to power some of the largest HPC systems in the world, with more than two PetaFLOPS of performance already installed or ordered. According to the latest Top500 list released today, Sun increased its overall presence on the list, including two supercomputers in the Top 10 powered by the Sun Constellation System: the Ranger supercomputer at the Texas Advanced Computing Center (TACC) at #8, and the JuRoPa supercomputer at Forschungszentrum Juelich in Germany at #10 -- which is also the most efficient system among the Top 10 supercomputers as measured by the LINPACK benchmark. In addition, nine of the top 10 supercomputers are using Sun Storage technologies, including Lustre and tape storage.
Sun is also announcing new HPC customers today, including:
Toshiba Research Europe recently replaced five racks of obsolete white-box systems with two Sun Blade 6000 chassis filled with Sun Blade X6450 server modules, powered by Intel Xeon six-core processors. The company is also deploying Sun HPC Software with Linux and Sun Grid Engine software. Toshiba Research Europe is using the new Sun solution for advanced fundamental research in the field of wireless telecommunications.
The University of North Carolina at Chapel Hill (UNC-CH) turned to an HPC solution from Sun to speed its biomedical research and image analysis. The UNC-CH grid -- called the Biomedical Analysis and Simulation Supercomputer (BASS) system -- consists of 17 Sun Fire X4600 M2 servers, each with 16 2.8GHz Quad-Core AMD Opteron processor cores. UNC-CH is also deploying a Sun Storage 6140 array, Sun Storage SL500 modular library system, 45 Sun Ultra 40 M2 workstations and Sun Grid Engine software.
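As a rough sanity check (not a figure from the release), the theoretical peak of a cluster like BASS can be estimated as cores × clock × FLOPs per cycle. The 4 double-precision FLOPs/cycle rate assumed below is typical for quad-core Opterons of that generation, but is an assumption for illustration only:

```python
# Back-of-the-envelope peak for the UNC-CH BASS cluster described above.
# Assumption (not stated in the release): 4 DP FLOPs/cycle per Opteron core.
servers = 17
cores_per_server = 16
clock_ghz = 2.8
flops_per_cycle = 4  # assumed for quad-core Opteron

total_cores = servers * cores_per_server                 # 272 cores
peak_gflops = total_cores * clock_ghz * flops_per_cycle  # ~3046 GFLOPS
print(f"{total_cores} cores, ~{peak_gflops / 1000:.2f} TFLOPS peak")
```

This puts the cluster's theoretical peak at roughly 3 TFLOPS, before accounting for the Sun Ultra 40 M2 workstations in the grid.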
Sun at ISC 2009
Sun is previewing a range of HPC technologies at the ISC 2009 show (Sun booth #410), such as "Project M2," a future addition to its Sun Datacenter Switch family that will be up to six times more space-efficient than competitive chassis switch solutions while delivering up to twice the system bandwidth of competitive DDR chassis switch solutions. Sun is also demonstrating a next-generation, high-end Sun Storage 7000 Unified Storage System, in addition to the upcoming Sun Blade X6240 and Sun Blade X6440 server modules powered by the new Six-Core AMD Opteron processor code-named "Istanbul." The four-socket Sun Blade X6440 server module will deliver up to twice the I/O capacity (142 Gbps) of competing blade and rackmount servers, and up to 12 TeraFLOPS of peak performance when a full set of blades is installed in the Sun Blade 6048 chassis.
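The 12 TFLOPS chassis figure can be roughly reconciled with per-core arithmetic. The Sun Blade 6048 chassis provides 48 blade slots; the implied per-core rate below, and the clock and FLOPs-per-cycle values used to explain it, are assumptions for illustration rather than figures from the release:

```python
# Rough reconciliation of the 12 TFLOPS chassis figure (illustrative assumptions).
blades = 48            # Sun Blade 6048 slot count
sockets_per_blade = 4  # four-socket Sun Blade X6440
cores_per_socket = 6   # six-core "Istanbul" Opteron

total_cores = blades * sockets_per_blade * cores_per_socket  # 1152 cores
gflops_per_core = 12000 / total_cores                        # ~10.4 GFLOPS/core
# ~10.4 GFLOPS/core is consistent with roughly 2.6 GHz x 4 DP FLOPs/cycle
# (assumed clock; Istanbul shipped at several clock grades).
print(total_cores, round(gflops_per_core, 1))
```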
Sun Constellation System Sets Records on HPC Benchmarks
The Sun Constellation System delivers top performance on a wide range of compute-intensive, memory-intensive, communication-intensive and I/O-intensive applications, such as weather forecasting (WRF), seismic processing (Reverse Time Migration), molecular modeling (NAMD), and subatomic physics (MILC). The Sun Constellation System achieved scalability efficiencies of nearly 90 percent across these workloads, enabling customers to realize performance benefits at every level. Moreover, the availability of QDR IB infrastructure offers up to an 80 percent performance boost versus DDR IB on communication-intensive suites in HPCC, such as FFT and PTRANS. Further highlighting the Sun Constellation System's compute capabilities, Sun is announcing four new ground-breaking results on the SPEC CPU2006 benchmark, which is used to gauge a computer's processor, memory architecture and compilers on a variety of real-world compute-intensive workloads:
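The "scalability efficiency" figure refers to parallel efficiency, conventionally defined as speedup divided by process count. A minimal sketch of the calculation, using hypothetical timings rather than measurements from the release:

```python
# Parallel efficiency = (T_baseline / T_parallel) / N = speedup / N.
# Timings below are hypothetical, for illustration only.
def parallel_efficiency(t_base, t_n, n):
    """Efficiency of an n-way run relative to a baseline run."""
    speedup = t_base / t_n
    return speedup / n

# e.g. a job taking 1000 s on 1 process and 2.2 s on 512 processes
eff = parallel_efficiency(1000.0, 2.2, 512)
print(f"{eff:.1%}")  # ~88.8%, i.e. "nearly 90 percent" scaling
```

An efficiency near 1.0 means adding nodes yields nearly proportional speedup, which is the property the benchmark results above are claiming across WRF, NAMD, and MILC-class workloads.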
To see all HPC benchmark results on Sun's Open Network Systems, please visit: http://www.sun.com/servers/hpc/benchmarks.jsp
HPC Software Solutions
Sun is also announcing a variety of open source software solutions and upgrades to simplify HPC deployments. HPC software announced today includes:
For more information on the innovative HPC technologies Sun is showcasing at ISC 2009, visit: http://www.sun.com/hpc or the Sun booth (#410) for live demonstrations. Sun's ISC 2009 online press kit can be found at: http://www.sun.com/aboutsun/media/presskits/2009-0623/index.jsp.
About Sun Microsystems
Sun Microsystems (NASDAQ: JAVA) develops the technologies that power the global marketplace. Guided by a singular vision -- "The Network is the Computer" -- Sun drives network participation through shared innovation, community development and open source leadership. Sun can be found in more than 100 countries and on the Web at http://sun.com.
Source: Sun Microsystems