November 18, 2008
Open storage solutions and next-gen Sun Constellation system headline Sun's technology demos at Supercomputing 2008, simplifying and accelerating HPC deployments for customers
AUSTIN, Texas, Nov. 18 -- SC08 -- Sun Microsystems, Inc. today announced new products and technologies that cement its leadership in the HPC storage space, radically simplify and accelerate HPC deployments, and deliver more powerful and dense clusters to more customers. At Supercomputing 2008 in Austin, Sun is showcasing its Open Storage solutions -- including the new Sun Storage 7000 "Amber Road" family and the Lustre parallel file system -- demonstrating Sun's ongoing storage leadership in HPC. Sun is also previewing the next-generation Sun Constellation System -- with double the storage capacity, double the cores and double the compute nodes of the original Sun Constellation System -- in addition to other innovative technologies that will be incorporated into future Sun products. Furthermore, Sun is announcing the Sun Storage Cluster, Sun Compute Cluster and HPC software solutions, which are designed to simplify and accelerate divisional and departmental HPC deployments. For more information on Sun's HPC solutions, visit http://www.sun.com/hpc.
"Sun has been challenging the HPC status quo for more than two decades. The Sun Constellation System reinvented the HPC cluster and is now deployed by customers across the globe," said John Fowler, executive vice president of Systems Platforms Group at Sun Microsystems. "Today we're applying that trademark innovation to new segments of the HPC market with new Open Storage and compute clusters. These solutions are just the beginning -- look for our Open Storage products to radically change the economics of the HPC market."
Today's news follows Sun's announcement last week of the Sun Storage 7000 "Amber Road" family, the world's first Open Storage appliances that offer breakthrough analytic capabilities, significant performance increases, one-quarter of the energy consumption, installation in under five minutes and up to 75 percent cost savings -- all compared to competing storage systems. The Sun Fire X4500 and Sun Fire X4540 storage servers, in addition to other Open Storage solutions, also figure prominently in many of Sun's HPC customer deals.
Sun at Supercomputing 2008
Sun is previewing a range of HPC technologies at the Supercomputing 2008 show (Sun booth #1021), including the next-generation Sun Constellation System -- with double the storage capacity, double the cores and double the compute nodes of the original -- the "Genesis" storage array, new "Magnum" switch solutions, the "Glacier" cooling door and storage flash arrays. Sun's newest blade server, available by the end of the year -- the Sun Blade X6440 server module powered by the latest Quad-Core AMD Opteron processors code-named "Shanghai" -- posted the best x86 16-thread result on the prominent SPEC OMPM2001 benchmark, which is often used to compare the performance of shared-memory servers executing compute-intensive scientific applications. In addition, solutions announced today include:
Sun Storage Cluster
The Sun Storage Cluster combines high-performance Sun Fire servers and hybrid data servers with the Lustre file system and a high-speed interconnect to maximize performance, scalability and productivity. The solution will enable customers to scale capacity from 48 terabytes to multiple petabytes, and to scale I/O rates from 1 GB per second to more than 100 GB per second. In addition, custom configuration and delivery by the Sun Customer Ready Program make the solution ready to deploy and easy to manage.
Sun Compute Cluster
Ideal for divisional and departmental customers that run compute-intensive applications, such as structural analysis, signal processing, and financial trading, the Sun Compute Cluster is a pre-configured, scalable HPC cluster including Sun Fire rackmount servers or Sun Blade servers, pre-loaded open source software, and an InfiniBand or Ethernet high-bandwidth interconnect. The solution is designed to scale up to eight racks of compute nodes and will be offered in specific configurations for the mechanical computer-aided engineering and financial services industries, in addition to a wide range of configurations for a variety of HPC customers.
HPC Software Solutions
Sun is also announcing a variety of open source software solutions and upgrades designed to simplify HPC deployments.
For more information on the innovative HPC technologies Sun is showcasing at Supercomputing 2008, visit: http://www.sun.com/hpc or the Sun booth (#1021) for live demonstrations. Sun's Supercomputing 2008 online press kit can be found at http://www.sun.com/aboutsun/media/presskits/2008-1114/.
About Sun Microsystems, Inc.
Sun Microsystems (NASDAQ: JAVA) develops the technologies that power the global marketplace. Guided by a singular vision -- "The Network is the Computer" -- Sun drives network participation through shared innovation, community development and open source leadership. Sun can be found in more than 100 countries and on the Web at http://sun.com.
Source: Sun Microsystems, Inc.