CSC Flagship — The Cray XT4 Comes On Stream

By Christopher Lazou

June 22, 2007

At the beginning of April, phase one of the Cray XT4, purchased by the Centre for Scientific Computing (CSC) in Finland, became operational. While the first phase provides 10.6 teraflops, the final Cray XT4 configuration, planned for 2008, is to deliver over 70 teraflops of compute power to CSC's HPC users.

Of course, CSC is no stranger to Cray supercomputer systems. In 1989, Finland purchased a Cray X-MP. I remember this well: as Vice President of the Cray User Group at the time, I organized a welcoming wine reception for Olli Serimaa and his colleagues from CSC. They were one of nine sites that joined the Cray User Group at its meeting in Trondheim, Norway that fall. Like many other centres, CSC adopted clusters in the late 1990s, but it is now returning to the capability supercomputing fold.

During the last twenty years, a number of parallel applications have been developed in Finland and a strong HPC user community has been established. Thanks to this development, the extreme capability computing resources of the Cray XT4 can be used efficiently, and investments in HPC capacity pay off.

The Cray XT4 will be the new flagship system for the Finnish scientific research community, replacing a five-year-old IBM cluster that can no longer keep pace with performance needs: CSC's computing usage is currently doubling every 14 months.
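A quick back-of-the-envelope calculation shows why a 14-month doubling time forces regular replacement. The five-year horizon below is simply the age of the outgoing IBM system, not a figure from CSC:

```python
# Demand doubling every 14 months, compounded over a five-year system
# lifetime (the age of the IBM cluster being replaced).
months = 5 * 12
doubling_period = 14
growth = 2 ** (months / doubling_period)
print(f"demand growth over {months} months: ~{growth:.0f}x")  # ~20x
```

In other words, a machine sized for today's workload is roughly a factor of twenty short of demand by the time it is retired.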

As Kimmo Koski, managing director of CSC recently said: “We selected the Cray XT4 supercomputer after an extensive acquisition process that involved surveying 35 different research groups, closely analysing the available technologies and benchmarking competing systems. Our goal was to procure the most powerful system for the funds that we had available. The new Cray supercomputer will provide the capability required by our diverse research groups and bring Finland back to the leading edge in Europe.”

To remind readers, CSC is a modern supercomputing facility with heterogeneous hardware systems providing IT infrastructure consisting of capability computing and capacity clusters, skills and specialist services for a diverse user community in universities, polytechnic colleges, research institutions and companies across Finland. It also collaborates with various research institutions worldwide. The new Cray XT4 system will be used for research requiring capability computing in areas such as physics, chemistry, nano-technology, linguistics, bioscience, applied mathematics and engineering.

To give a flavour of the applications in focus, the ten largest projects in terms of CPU time cover the following areas:

In nano-science they are studying ion irradiation induced defects in nano-materials, semiconductors, metals; nano-catalysis on metal surfaces; multi-scale modelling of surfaces and surface-reactions; electronic, magnetic, optical and chemical properties for nano-particles. The physicists are studying lattice simulations of relativistic theories and numerical modelling of plasma and fusion physics. The chemists are engaged in computational studies of NMR and EPR parameters; theoretical study of dynamic properties of interacting molecules; new inorganic molecules; and computer modelling of weak chemical interactions.

The computer acquisition project started in 2005, based on the budget proposal of Finland's Ministry of Education and the Council of State. It lasted about one year and was run according to EU procurement rules. It culminated in the purchase of the 70 teraflops Cray XT4 system for capability computing and an HP Opteron cluster with over 2,000 processors and InfiniBand for capacity computing. “In our opinion we need to provide solutions for Finnish scientists with diverse needs; capability computing to those who need it and cost-efficient capacity computing to others,” said Kimmo Koski.

During the procurement exercise, CSC ran an extensive benchmark set built from the main applications of Finnish scientists. The Cray system performed well in the benchmarks, and Cray's proposal turned out to match CSC's needs best for several reasons: an attractive offer, timing, collaboration possibilities and Cray's ability to provide professional services for demanding HPC users.

Experience had shown that the scalability of the previous IBM system at CSC had some limitations, and highly parallel codes were not running well. The new Cray system has an extremely efficient low-latency communication network in addition to high-performance processors, and can provide capability computing to solve scientific problems that were not previously possible.

As Steve Scott (of Cray) tells me, the Cray XT4 supercomputer is a massively parallel processing (MPP) system designed to efficiently scale to a peak performance of more than one petaflop. The system is currently equipped with dual-core processors that can easily be upgraded to future native AMD quad-core processors.

Unlike typical cluster architectures, in which many microprocessors share one communications interface, each AMD Opteron processor in the Cray XT4 system is coupled with its own interconnect chip. Providing six links in three dimensions, the Cray SeaStar2 chip uses its embedded routing capability to take advantage of HyperTransport technology and significantly accelerate communication among the processors.
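The six links per node correspond to a 3-D torus topology. As an illustrative sketch (not Cray's actual routing code), the neighbours of a node on such a torus, with wrap-around in each dimension, can be enumerated as:

```python
def torus_neighbors(coord, dims):
    """Neighbours of `coord` on a 3-D torus of shape `dims`:
    one link per direction in each of the three dimensions."""
    x, y, z = coord
    nx, ny, nz = dims
    return [
        ((x + 1) % nx, y, z), ((x - 1) % nx, y, z),  # +X / -X links
        (x, (y + 1) % ny, z), (x, (y - 1) % ny, z),  # +Y / -Y links
        (x, y, (z + 1) % nz), (x, y, (z - 1) % nz),  # +Z / -Z links
    ]

# Every node has exactly six neighbours, with wrap-around at the edges:
print(torus_neighbors((0, 0, 0), (8, 8, 8)))
```

The wrap-around links keep every node topologically equivalent, so nearest-neighbour communication patterns behave the same everywhere in the machine.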

In a highly competitive world, innovation is critical for achieving economic success. Capability supercomputer systems are an essential research and development tool for enterprise and industry. So let us look at illustrations in some of the key research areas the Cray XT4 system is to be used for.

According to the CSC website and the staff I contacted, the new resources will have a major impact on computational research in Finland. Foremost are the nano-scientists (see list above), the biggest users of CSC's resources in terms of CPU time, but other large groups, including environmental researchers, chemists, bio-scientists and physicists, will also benefit from the large increase in computing power. Currently, half of the centres of excellence in research nominated by the Academy of Finland are CSC customers, and they use one third of the computing capacity.

One of the most rapidly growing areas of research and product development today is nano-science and technology, which utilizes atom-level scientific understanding to build up new kinds of functional materials and devices. Nano-science thus relies on understanding complicated atomic interactions, and the best way to obtain that is using massive supercomputing capability, according to Professor Kai Nordlund from the University of Helsinki. He continues: “The new capability will enable, for instance, studying dynamic processes in entire nano-objects at the quantum level, something which very few research groups can presently do, anywhere in the world.”

Climate system models supply Finnish society with information on climate change. These models describe the atmosphere, oceans and biosphere with all their mutual interactions, making them computationally very demanding. Computational resource requirements increase in line with the higher model resolution, which is necessary for modelling local and short-term weather extremes, says research professor Heikki Järvinen from the Finnish Meteorological Institute (FMI).

Professor Järvinen emphasizes that the new supercomputer capability at CSC will facilitate climate research at FMI and in the universities, to support preparation of national climate policy, and to evaluate human impact on climate.

Looking ahead, CSC users are gearing up to tackle some of the current Grand Challenges: a global ocean model addressing the future of the Gulf Stream, which is of vital interest to Scandinavia; coupled models of forests and nano-scale aerosols as factors in Finland's future climate; the functioning of cell membranes and the development of more effective drugs against, for example, cancer; and new environmentally friendly pulp-bleaching chemicals and new types of solar cells.

Other areas include the study of quantum dots and wells as nano-electronics solutions, computational modelling of fusion reactors, accurate quantification of the age and composition of the universe using satellite observations, and the development of better, faster, cheaper engineering products using computational fluid dynamics.

CSC users are already seeing benefits from phase one of the Cray XT4 system. For example, a new parallel scheme implemented in a development version of Gromacs led to breaking the one teraflop sustained performance barrier on this code at CSC for the first time.

On the previous CSC cluster, Gromacs was the fastest molecular dynamics code when run serially or in parallel on some tens of processors. This was due to highly optimised code; in particular, the inner force loops are coded in assembly language using SSE instructions. On a modern supercomputer such as the Cray XT4, equipped with a very fast interconnect (the Cray SeaStar2), Gromacs also scales to hundreds of processors.

At a recent workshop, Gromacs achieved a sustained performance of 1.1 teraflops using 384 cores. Gromacs throughput under these conditions amounts to 48 ns/day. The benchmark system was a box of 108,000 SPC water molecules, and the long-range interactions were handled with a reaction field for electrostatics, with a cut-off distance of 1.2 nm.

In some cases, using cut-offs for electrostatics is considered an unsuitable approximation. However, the Particle Mesh Ewald (PME) scheme for accurately accounting for electrostatics overcomes that objection, as it also scales to hundreds of processors on the Cray XT4. This was demonstrated with a lipid bi-layer system of 4,096 lipids, which together with the water molecules totals 487,424 atoms (four times the benchmark DPPC system).

Electrostatics were treated with PME using a cut-off of 1.8 nm (1.0 nm for van der Waals interactions). Using 1056 cores, this system achieved 1.15 teraflops, equivalent to 23 ns/day of simulation.
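From the figures quoted above, the sustained performance per core can be derived. Note that the two runs simulate different systems, so this is not a strict scaling comparison, but it does suggest per-core efficiency drops as the core count rises:

```python
# Per-core sustained performance derived from the quoted Gromacs runs.
runs = {
    "water box (384 cores)":      (1.10e12, 384),   # flops, cores
    "lipid bilayer (1056 cores)": (1.15e12, 1056),
}
for name, (flops, cores) in runs.items():
    print(f"{name}: {flops / cores / 1e9:.2f} GF/s sustained per core")
# water box (384 cores): 2.86 GF/s sustained per core
# lipid bilayer (1056 cores): 1.09 GF/s sustained per core
```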

CSC is also running a project called FinHPC to optimise parallel codes. One of the target codes of the project is Elmfire, a charged and polarized particle simulation code. Elmfire can be used for simulating phenomena inside a fusion reactor and was developed by staff from the Technical Research Centre of Finland (VTT) and Helsinki University of Technology (TKK).

The code has been ported to both PC clusters and the new Cray XT4 system at CSC. The original code has been made portable in the FinHPC project by replacing all proprietary numerical libraries with equivalent open-source libraries (GNU Scientific Library and the Portable Extensible Toolkit for Scientific Computation, PETSc).

Researchers can now achieve previously unattainable numerical results on plasma behaviour for higher particle densities.

Parts of the code have been rewritten in order to simulate large systems. For example, the data structures used for storing information about the particles in the simulation were replaced by similar, more compact and efficient data structures (a hash table instead of a large sparse matrix). This has reduced the memory requirements considerably.
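The data-structure change can be sketched in a few lines. This is a schematic illustration with hypothetical sizes, not Elmfire's actual code: when only a small fraction of cells hold particle data, a hash table pays for entries that exist, while dense storage pays for every cell:

```python
# Schematic contrast between dense (sparse-matrix-like) storage and a
# hash table for particle bookkeeping (hypothetical sizes).
GRID_CELLS = 1_000_000   # logical cells in the simulation grid
OCCUPIED = 10_000        # cells that actually hold particle data

# Dense storage: one slot per cell, whether it is used or not.
dense_slots = GRID_CELLS

# Hash-table storage: one entry per occupied cell only.
table = {cell: 1.0 for cell in range(OCCUPIED)}  # cell index -> payload

print(f"dense slots: {dense_slots:,}  hash entries: {len(table):,}")
# At 1% occupancy the hash table stores 100x fewer entries.
```

The same trade-off appears whenever a logically large but sparsely populated index space is stored: memory scales with occupancy rather than with the full grid.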

Currently, Elmfire scales to hundreds of processors on the Cray XT4, but this is not enough. In future Grand Challenge applications, the code needs to scale to thousands of processors. Further development of this code is in progress.

CSC is also a member of the Distributed European Infrastructure for Supercomputing Applications (DEISA), which started in 2004 with eleven partners.

DEISA is a consortium of leading national supercomputing centres that currently deploys and operates a persistent, production-quality, distributed supercomputing environment with continental scope. The purpose of this Sixth Framework Programme (FP6) funded research infrastructure is to enable scientific discovery across a broad spectrum of science and technology by enhancing and reinforcing European capabilities in high performance computing. This becomes possible through a deep integration of existing national high-end platforms, tightly coupled by a dedicated network and supported by innovative system and grid software. The European supercomputing service is thus built on top of the existing national services, with dedicated network infrastructure and Grid technologies integrating the national facilities into a European whole.

The DEISA training session was organized at CSC from May 30 to June 1, 2007. Scientists from all European countries and members of industrial organizations involved in high performance computing were invited to attend. The purpose of the training is to enable fast development of the user skills and know-how needed for efficient utilisation of the DEISA infrastructure. The first part of the training gave a global description and introduction to the usage of the DEISA infrastructure. The second part was dedicated to heterogeneous environments (Cray XT4 at CSC, SGI at LRZ) and optimisation issues.

CSC also hosted the Cray Technical workshop early this month and is hosting the Cray User Group (CUG) in 2008.

At the end of March this year, a University Grant Program supported by Cray and AMD was inaugurated by CSC. This program will give students and young researchers in Finland access to the Cray XT4 at CSC. Grant recipients will be able to leverage the immense computing power provided by the 70 teraflops system to develop new computational methods, software and tools that can be used to solve novel research problems.

“CSC is delighted to join with Cray and AMD in offering these grants to deserving young people at Finnish universities and polytechnic institutes,” said Juha Haataja, director for science support at CSC. “This program offers them a great opportunity to take advantage of one of the most powerful systems in Europe to carry out work that has the potential to push the boundaries of computational science. With the Cray XT4 supercomputer's exceptional speed and scalability, grant recipients will be able to develop and test advanced algorithms, tools and techniques that could not be implemented on less powerful systems.”

The grant selection process will be closely monitored by CSC, which will announce grant winners at a special seminar later this year. Grants of between 5,000 and 25,000 euros will be awarded based on applications reviewed by CSC's resource allocation committee, with the final selection made by the CSC management group. The organization will support the grant projects with resident computer science experts and other resources.

The program is based on a close three-way partnership among Cray, AMD and CSC and is an excellent opportunity to give researchers early access to future developments within both AMD and Cray. The University Grant Program will help to strengthen computational science in Finland and it will grow the number of potential researchers across all scientific disciplines using high performance computing technology.

CSC is involved at the heart of HPC policy developments in Europe. For example, Kimmo Koski chaired the HPC in Europe Taskforce (HET), which made the following recommendations:

  1. The development and operation of a “top end” infrastructure, by establishing a small number of European HPC facilities to provide extreme computing power (exceeding 1 petaflop capability) for the most demanding computational tasks.
  2. An increased emphasis on the development of the full HPC ecosystem, including the local infrastructure, national and regional facilities, top-level European computing capabilities and the interoperability of their services.
  3. Support for the development of novel software architectures, by starting a range of activities aimed at addressing the key issues in building software that allows exploiting the performance potential of petascale machines in a coherent, efficient, scalable and sustainable manner.

CSC is now involved, together with other European partners, in making proposals for funds to implement the above HET recommendations. For example, Partnership for Advanced Computing in Europe (PACE) is a European FP7 project proposal for the preparatory phase in building the European petaflops computing centres, based on the HPC in Europe Taskforce (HET) work.

Note that CSC is the largest national IT facility in northern Europe. Its supercomputing environment will consist of a more than 70 teraflops Cray capability system installed in 2007-2008, a 10.6 teraflops HP capacity cluster and other systems.

It has a staff complement of 150 with a wide variation of competencies in multi-disciplinary computational science, networks, information management and software development. Scientific software development includes in-house products from various projects (modelling, workflows, user interfaces, etc.).

CSC is hosting over 200 commercial scientific applications and over 60 databases, the Finnish University and Research Network (FUNET) and the computing systems of the Finnish scientific libraries.

International collaboration includes: PACE, DEISA, EGEE II, Embrace, HET chair, e-IRG chair, ESFRI-roadmap and other projects.

To put all the above in context, Finland has one of the highest per capita incomes, and for good reason. As an article on European innovation published in the European Commission's magazine in March 2007 states: “Now in its sixth edition, the European innovation scoreboard paints a picture of how countries perform according to an index of innovation criteria developed under the European Commission's trend chart scheme. Topping the 2006 list are Sweden, Finland and Denmark….” Thus, Finland is at the leading edge of European innovation, and naturally it wishes to remain there in order to sustain the standard of living it currently enjoys.



Brands and names are the property of their respective owners. Copyright (c) Christopher Lazou, HiPerCom Consultants, Ltd., UK. June 2007.
