December 16, 2005
Alcatel has deployed its 40 Gbit/s solution connecting the data centers of the University of Stuttgart and the University of Karlsruhe in Germany, a distance of over 105 km. Providing a direct link between two supercomputers, this is the fastest data line in Germany. Together, the two centers deliver a computing power of over 20 teraflops for business and educational applications.
The deployment is part of the "Baden-Württemberg Extended LAN" scientific network (BelWü), which is funded by the state of Baden-Württemberg and integrated into the European research association GÉANT. The network connects nine universities, 25 technical colleges, eight cooperative academies and other scientific institutions in southwestern Germany.
The new 40 Gbit/s link enables the two supercomputers to bind their computing and data resources into a unified environment for commercial and educational applications where fast networking rates and high computing horsepower are key. For business users, this means performing detailed 3D simulations, such as car-crash tests and process simulations, in real time. For research institutes and students, this groundbreaking per-channel speed enables new ways of sharing and processing huge amounts of information for advanced and complex research programs.
The Alcatel 40 Gbit/s solution, available within its WDM systems including the Alcatel 1626 Light Manager (LM), enables customers to move huge amounts of data between supercomputer clusters efficiently. For this project, Alcatel interconnected multiple 10 Gigabit Ethernet interfaces of the two university supercomputers through a single WDM link exhibiting remarkably low latency and high transparency. Four bidirectional 10 Gigabit Ethernet interfaces are aggregated onto one 40 Gbit/s wavelength that is transported over an optical fiber without repeaters. The new line is designed to support multiple 40 Gbit/s wavelengths to cope with future traffic growth.
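To put the link's figures in perspective, a back-of-envelope calculation is useful. The 105 km distance and 40 Gbit/s rate come from the article; the fiber group index of roughly 1.47 is a typical value for silica fiber, assumed here for illustration:

```python
# Illustrative figures for a 105 km, 40 Gbit/s fiber link.
# Distance and rate are from the article; the group index of
# silica fiber (~1.47) is an assumed typical value.

C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47            # typical group index of silica fiber
LINK_KM = 105                 # Stuttgart-Karlsruhe distance
RATE_GBPS = 40                # aggregate wavelength rate

# One-way propagation delay through the fiber
delay_ms = LINK_KM / (C_VACUUM_KM_S / FIBER_INDEX) * 1000
print(f"one-way propagation delay: {delay_ms:.3f} ms")    # ~0.515 ms

# Time to move a 1 TB dataset at the full 40 Gbit/s line rate
transfer_s = (1e12 * 8) / (RATE_GBPS * 1e9)
print(f"1 TB transfer at 40 Gbit/s: {transfer_s:.0f} s")  # 200 s
```

At roughly half a millisecond one way, propagation delay over the fiber itself is small enough that the link can behave like an extended LAN, which is exactly the premise of the BelWü deployment.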
"Our users must benefit from easily accessible, highly reliable computing capabilities, and this project represents an enormous boost for the Baden-Württemberg area as a technology center," explained Wolfgang Peters of the Ministry of Science, Research and the Arts of Baden-Württemberg. "Alcatel's solution guarantees that the BelWü network is in a leading position in Europe and worldwide to successfully carry out international grid computing projects."
"Alcatel's technology is at the forefront of innovation and this project positions us among the first research networks able to achieve 40 Gbit/s data rates over such distances in a real operational environment," said Prof. Horst Hippler, Rector of the University of Karlsruhe. "The deployment has been achieved in a smooth way enabling our existing supercomputer to continue its operation without disruptions."
"The Baden-Württemberg network, and more generally scientific communities, have growing requirements for high-performance computing power, which has become critical for research developments," said Romano Valussi, president of Alcatel's optical networking activities. "Alcatel's 40 Gbit/s technology makes it possible to deploy new and reliable clusters empowering research activities in many different scientific disciplines and industries."
The new supercomputer of the High Performance Computing Center of the University of Stuttgart (HLRS) is equipped with 576 processors and is around 5,000 times as fast as a desktop PC. The system has a peak performance of 12.7 teraflops and a main memory of 9.2 TB, making it one of the fastest vector systems in Europe, ranking 6th in Europe and 27th worldwide.
A cluster is currently being installed in several stages at the Scientific Supercomputing Center of the University of Karlsruhe (SSCK). At its peak capacity, the system will feature over 1,200 processors with a performance of around 11 teraflops and a main memory of 7 TB. The different architectures of the computers complement one another for a broad range of applications.