Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

June 22, 2007

CSC Flagship — The Cray XT4 Comes On Stream

by Christopher Lazou

At the beginning of April, phase one of the Cray XT4, purchased by the Centre for Scientific Computing (CSC) in Finland, became operational. While the first phase provides 10.6 teraflops, the final Cray XT4 configuration, planned for 2008, is to deliver over 70 teraflops of compute power to CSC's HPC users.

Of course CSC is no stranger to Cray supercomputer systems. In 1989, Finland purchased a Cray X-MP. I remember this well, since as Vice President of the Cray User Group at the time, I organized a welcoming wine reception for Olli Serimaa and his colleagues from CSC as newcomers. They were one of nine sites that joined the Cray User Group at its meeting in Trondheim, Norway that fall. Like many other centres, CSC adopted clusters in the late 1990s, but it is now returning to the capability supercomputing fold.

Over the last twenty years, a number of parallel applications have been developed in Finland and a strong HPC user community has been established. Thanks to this groundwork, the extreme capability computing resources of the Cray XT4 can be used efficiently, making the investment in HPC capacity worthwhile.

The Cray XT4 will be the new flagship system for the Finnish scientific research community, replacing a five-year-old IBM cluster system that can no longer keep pace with a community whose computing usage is currently doubling every 14 months.
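To put that growth rate in perspective, here is an illustrative back-of-the-envelope calculation (the five-year horizon and the exponential-growth assumption are mine, not from the article): if usage doubles every 14 months, demand grows almost twentyfold over a typical five-year system lifetime.

```python
# Illustrative only: projected demand multiplier under exponential
# growth with a 14-month doubling period.
def growth_factor(months: float, doubling_period: float = 14.0) -> float:
    """Demand multiplier after `months`, assuming steady doubling."""
    return 2.0 ** (months / doubling_period)

print(round(growth_factor(60), 1))  # 5 years ~ 60 months -> 19.5
```

This kind of projection is why a centre replacing a five-year-old system typically needs an order-of-magnitude jump in capacity, not an incremental one.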

As Kimmo Koski, managing director of CSC recently said: “We selected the Cray XT4 supercomputer after an extensive acquisition process that involved surveying 35 different research groups, closely analysing the available technologies and benchmarking competing systems. Our goal was to procure the most powerful system for the funds that we had available. The new Cray supercomputer will provide the capability required by our diverse research groups and bring Finland back to the leading edge in Europe.”

To remind readers, CSC is a modern supercomputing facility with heterogeneous hardware systems providing IT infrastructure consisting of capability computing and capacity clusters, skills and specialist services for a diverse user community in universities, polytechnic colleges, research institutions and companies across Finland. It also collaborates with various research institutions worldwide. The new Cray XT4 system will be used for research requiring capability computing in areas such as physics, chemistry, nano-technology, linguistics, bioscience, applied mathematics and engineering.

To give a flavour of applications they are focusing on, their ten largest projects in terms of CPU-time are as follows:

In nano-science they are studying ion irradiation induced defects in nano-materials, semiconductors, metals; nano-catalysis on metal surfaces; multi-scale modelling of surfaces and surface-reactions; electronic, magnetic, optical and chemical properties for nano-particles. The physicists are studying lattice simulations of relativistic theories and numerical modelling of plasma and fusion physics. The chemists are engaged in computational studies of NMR and EPR parameters; theoretical study of dynamic properties of interacting molecules; new inorganic molecules; and computer modelling of weak chemical interactions.

The computer acquisition project started in 2005, based on the budget proposal of Finland's Ministry of Education and the Council of State. It lasted about one year and was run according to EU procurement rules. It culminated in the purchase of the 70 teraflops Cray XT4 system for capability computing and an HP Opteron cluster with over 2,000 processors and InfiniBand for capacity computing. “In our opinion we need to provide solutions for Finnish scientists with diverse needs: capability computing to those who need it and cost-efficient capacity computing to others,” said Kimmo Koski.

During the procurement exercise, CSC ran an extensive benchmark set built around the main applications of Finnish scientists. The Cray system performed well in the benchmarks, and Cray's proposal best matched CSC's needs for several reasons, including its attractiveness, timing, collaboration possibilities and Cray's ability to provide professional services for demanding HPC users.

Experience had shown that the scalability of the previous IBM system at CSC had limitations, and highly parallel codes were not running well. The new Cray system combines high-performance processors with an extremely efficient low-latency communication network, and can provide capability computing that solves scientific problems not previously possible.

As Steve Scott (of Cray) tells me, the Cray XT4 supercomputer is a massively parallel processing (MPP) system designed to efficiently scale to a peak performance of more than one petaflop. The system is currently equipped with dual-core processors that can easily be upgraded to future native AMD quad-core processors.

Unlike typical cluster architectures, in which many microprocessors share one communications interface, each AMD Opteron processor in the Cray XT4 system is coupled with its own interconnect chip. Providing six links in three dimensions, the unique Cray SeaStar2 chip uses its embedded routing capability to take advantage of HyperTransport technology and significantly accelerate communications among the processors.
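The "six links in three dimensions" describe a 3-D torus: each node connects to its +x/-x, +y/-y and +z/-z neighbours, with the edges of the machine wrapping around. The sketch below illustrates the neighbour relationship; the grid dimensions and coordinates are illustrative, not a real XT4 configuration.

```python
# Hedged sketch: neighbour addressing in a 3-D torus, the topology
# behind the SeaStar2's six links per node. Wrap-around is modelled
# with modular arithmetic.
def torus_neighbours(coord, dims):
    """Return the six neighbour coordinates of `coord` in a 3-D torus."""
    x, y, z = coord
    X, Y, Z = dims
    return [
        ((x + 1) % X, y, z), ((x - 1) % X, y, z),  # +x / -x links
        (x, (y + 1) % Y, z), (x, (y - 1) % Y, z),  # +y / -y links
        (x, y, (z + 1) % Z), (x, y, (z - 1) % Z),  # +z / -z links
    ]

# A corner node in a 4x4x4 torus still has six neighbours,
# because the links wrap around the edges:
print(torus_neighbours((0, 0, 0), (4, 4, 4)))
```

The wrap-around links are what keep the maximum hop count low as the machine grows, which matters directly for the communication latency discussed above.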

In a highly competitive world, innovation is critical for achieving economic success, and capability supercomputer systems are an essential research and development tool for enterprise and industry. So let us look at some of the key research areas in which the Cray XT4 system will be used.

According to the CSC website and staff I contacted, the new resources will have a major impact on computational research in Finland. Foremost among the beneficiaries are the nano-scientists (see list above), the biggest users of CSC's resources in terms of CPU time, but other large groups, including environment researchers, chemists, bio-scientists and physicists, will also benefit from the large increase in computing power. Currently, half of the centres of excellence in research nominated by the Academy of Finland are CSC customers, and they use one third of the computing capacity.

One of the most rapidly growing areas of research and product development today is nano-science and technology, which utilizes atom-level scientific understanding to build up new kinds of functional materials and devices. Nano-science thus relies on understanding complicated atomic interactions, and the best way to obtain that is using massive supercomputing capability, according to professor Kai Nordlund from the University of Helsinki. He continues: “The new capability will enable, for instance, studying dynamic processes in entire nano-objects at the quantum level, something which very few research groups can presently do, anywhere in the world.”

Climate system models supply Finnish society with information on climate change. These models describe the atmosphere, oceans and biosphere with all their mutual interactions, making them computationally very demanding. Computational resource requirements increase in line with the higher model resolution, which is necessary for modelling local and short-term weather extremes, says research professor Heikki Järvinen from the Finnish Meteorological Institute (FMI).

Professor Järvinen emphasizes that the new supercomputer capability at CSC will facilitate climate research at FMI and in the universities, to support preparation of national climate policy, and to evaluate human impact on climate.

Looking ahead, CSC users are gearing up to tackle some of the current Grand Challenges: a global ocean model addressing the future of the Gulf Stream, which is of vital interest to Scandinavia; coupled models of forests and nano-scale aerosols as factors in Finland's future climate; the functioning of cell membranes and the development of more efficient drugs against, for example, cancer; and the development of new environmentally friendly pulp-bleaching chemicals and new types of solar cells.

Other areas include the study of quantum dots and wells as nano-electronics solutions, computational modelling of fusion reactors, the accurate quantification of the age and composition of the universe using satellite observations, and the development of better, faster, cheaper engineering products through computational fluid dynamics.

CSC users are already seeing benefits from phase one of the Cray XT4 system. For example, a new parallel scheme implemented in a development version of Gromacs led to breaking the one teraflop sustained performance barrier on this code at CSC for the first time.

On the previous CSC cluster, Gromacs was the fastest molecular dynamics code when run in serial or in parallel on some tens of processors. This was due to its highly optimised code; in particular, the inner force loops are written in assembly language using SSE instructions. On a modern supercomputer such as the Cray XT4, however, equipped with a very fast interconnect (the Cray SeaStar2), Gromacs also scales to hundreds of processors.

At a recent workshop, Gromacs achieved a sustained performance of 1.1 teraflops using 384 cores, which under these conditions corresponds to a throughput of 48 ns/day. The benchmark system was a box of 108,000 SPC water molecules, with long-range interactions handled by a reaction field for electrostatics with a cut-off distance of 1.2 nm.

In some cases, using cut-offs for electrostatics is considered an unsuitable approximation. However, the Particle Mesh Ewald (PME) scheme, which accounts for electrostatics accurately, overcomes that objection, as it too scales to hundreds of processors on the Cray XT4. This was demonstrated with a lipid bi-layer system of 4096 lipids, which together with the water molecules totals 487,424 atoms (four times the benchmark DPPC system).

Electrostatics were treated with PME using a cut-off of 1.8 nm, with 1.0 nm for van der Waals interactions. Using 1056 cores, this system achieved 1.15 teraflops, equivalent to 23 ns/day of simulated time.
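A quick sanity check on the two quoted Gromacs runs: dividing the sustained teraflop figures by the core counts gives the per-core rate, which shows the reaction-field run extracting more flops per core than the larger PME run, as one would expect when scaling a fixed-size problem out to more cores. The arithmetic below uses only the numbers quoted above.

```python
# Per-core sustained performance from the quoted benchmark figures.
def gflops_per_core(total_tflops: float, cores: int) -> float:
    """Convert an aggregate teraflop figure to gigaflops per core."""
    return total_tflops * 1e3 / cores

print(round(gflops_per_core(1.10, 384), 2))   # reaction-field run -> 2.86
print(round(gflops_per_core(1.15, 1056), 2))  # PME lipid bi-layer run -> 1.09
```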

CSC is also running a project called FinHPC to optimise parallel codes. One of the project's target codes is Elmfire, a charged and polarized particle simulation code. Elmfire can be used to simulate phenomena inside a fusion reactor and was developed by staff from the Technical Research Centre of Finland (VTT) and Helsinki University of Technology (TKK).

The code has been ported to both PC clusters and the new Cray XT4 system at CSC. The original code has been made portable in the FinHPC project by replacing all proprietary numerical libraries with equivalent open-source libraries (GNU Scientific Library and the Portable Extensible Toolkit for Scientific Computation, PETSc).

Researchers can now achieve previously unattainable numerical results on plasma behaviour for higher particle densities.

Parts of the code have been rewritten in order to simulate large systems. For example, the data structures used for storing information about the particles in the simulation were replaced by similar, more compact and efficient data structures (a hash table instead of a large sparse matrix). This has reduced the memory requirements considerably.
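The data-structure change described above can be sketched as follows. A minimal illustration, assuming nothing about Elmfire's actual internals: when live particle IDs are sparse in a huge ID space, a hash table (a Python dict here) keyed by particle ID stores only the live records, whereas a dense slot-per-ID array pays for every possible ID. Field names and sizes are purely illustrative.

```python
# Hedged sketch: dense slot-per-ID storage vs. hash-table storage
# for sparsely populated particle data.

def dense_storage(max_id: int, particles: dict) -> list:
    """Sparse-matrix-style storage: one slot per possible ID."""
    table = [None] * max_id          # memory grows with max_id
    for pid, record in particles.items():
        table[pid] = record
    return table

def hashed_storage(particles: dict) -> dict:
    """Hash-table storage: memory grows only with live particles."""
    return dict(particles)           # keyed by particle ID

# Two live particles scattered across a million-slot ID space:
live = {7: (0.1, 0.2), 1_000_000: (0.3, 0.4)}
dense = dense_storage(1_000_001, live)   # ~a million empty slots
compact = hashed_storage(live)           # just two entries
print(len(compact), sum(r is not None for r in dense))  # prints "2 2"
```

Both structures answer the same lookups, but the hash table's memory footprint tracks the number of live particles rather than the size of the ID space, which is the reduction the FinHPC work achieved.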

Currently, Elmfire scales to hundreds of processors on the Cray XT4, but this is not enough. In future Grand Challenge applications, the code needs to scale to thousands of processors. Further development of this code is in progress.

CSC is also a member of the Distributed European Infrastructure for Supercomputing Applications (DEISA), which started in 2004 with eleven partners.

DEISA is a consortium of leading national supercomputing centres that currently deploys and operates a persistent, production quality, distributed supercomputing environment with continental scope. The purpose of this sixth Framework Programme (FP6) funded research infrastructure is to enable scientific discovery across a broad spectrum of science and technology, by enhancing and reinforcing European capabilities in the area of high performance computing. This becomes possible through a deep integration of existing national high-end platforms, tightly coupled by a dedicated network and supported by innovative system and grid software. The European supercomputing service is built on top of the existing national services. In fact, dedicated network infrastructures and Grid technologies are used to integrate the national supercomputing facilities into a European network.

The DEISA training session was organized at CSC from May 30 to June 1, 2007. Scientists from all European countries and members of industrial organizations involved in high performance computing were invited to attend. The purpose of the training was to develop quickly the user skills and know-how needed for efficient utilisation of the DEISA infrastructure. The first part gave a global description and introduction to the usage of the DEISA infrastructure; the second part was dedicated to heterogeneous environments (the Cray XT4 at CSC, SGI at LRZ) and optimisation issues.

CSC also hosted the Cray Technical workshop early this month and is hosting the Cray User Group (CUG) in 2008.

At the end of March this year, a University Grant Program supported by Cray and AMD was inaugurated by CSC. This program will give students and young researchers in Finland access to the Cray XT4 at CSC. Grant recipients will be able to leverage the immense computing power provided by the 70 teraflops system to develop new computational methods, software and tools that can be used to solve novel research problems.

“CSC is delighted to join with Cray and AMD in offering these grants to deserving young people at Finnish universities and polytechnic institutes,” said Juha Haataja, director for science support at CSC. “This program offers them a great opportunity to take advantage of one of the most powerful systems in Europe to carry out work that has the potential to push the boundaries of computational science. With the Cray XT4 supercomputer's exceptional speed and scalability, grant recipients will be able to develop and test advanced algorithms, tools and techniques that could not be implemented on less powerful systems.”

The grant selection process will be closely monitored by CSC, which will announce the grant winners at a special seminar later this year. Grants of between 5,000 and 25,000 euros will be awarded based on applications reviewed by CSC's resource allocation committee, with the final selection made by the CSC management group. The organization will support the grant projects with resident computer science experts and other resources.

The program is based on a close three-way partnership among Cray, AMD and CSC and is an excellent opportunity to give researchers early access to future developments within both AMD and Cray. The University Grant Program will help to strengthen computational science in Finland and it will grow the number of potential researchers across all scientific disciplines using high performance computing technology.

CSC is involved at the heart of HPC policy developments in Europe. For example, Kimmo Koski chaired the HPC in Europe Taskforce (HET), which made the following recommendations:

  1. The development and operation of a “top end” infrastructure, by establishing a small number of European HPC facilities to provide extreme computing power (exceeding 1 petaflop capability) for the most demanding computational tasks.

  2. An increased emphasis on the development of the full HPC ecosystem, including the local infrastructure, national and regional facilities, top-level European computing capabilities and the interoperability of their services.

  3. Support for the development of novel software architectures, by starting a range of activities aimed at addressing the key issues in building software that allows exploiting the performance potential of petascale machines in a coherent, efficient, scalable and sustainable manner.

CSC is now involved, together with other European partners, in making proposals for funds to implement the above HET recommendations. For example, Partnership for Advanced Computing in Europe (PACE) is a European FP7 project proposal for the preparatory phase in building the European petaflops computing centres, based on the HPC in Europe Taskforce (HET) work.

Note that CSC is the largest national IT facility in northern Europe. Its supercomputing environment will consist of an over 70 teraflops Cray capability system installed in 2007-2008, a 10.6 teraflops HP capacity cluster and other systems.

It has a staff complement of 150 with a wide variation of competencies in multi-disciplinary computational science, networks, information management and software development. Scientific software development includes in-house products from various projects (modelling, workflows, user interfaces, etc.).

CSC is hosting over 200 commercial scientific applications and over 60 databases, the Finnish University and Research Network (FUNET) and the computing systems of the Finnish scientific libraries.

International collaboration includes: PACE, DEISA, EGEE II, Embrace, HET chair, e-IRG chair, ESFRI-roadmap and other projects.

To put all the above in context, Finland has one of the highest per capita incomes, and for good reason. As the article on European innovation published in the magazine of the European Commission in March 2007 states: “Now in its sixth edition, the European innovation scoreboard paints a picture of how countries perform according to an index of innovation criteria developed under the European Commission's Trend Chart scheme. Topping the 2006 list are Sweden, Finland and Denmark….” Thus, Finland is at the leading edge of European innovation and naturally wishes to remain there in order to sustain the standard of living it currently enjoys.

Brands and names are the property of their respective owners. Copyright (c) Christopher Lazou, HiPerCom Consultants, Ltd., UK. June 2007.