July 23, 2012
Professor Stephen Hawking has launched the most powerful shared-memory supercomputer in Europe. Professor Hawking anticipates that the COSMOS supercomputer, manufactured by SGI and the first system of its kind, will open up new windows on our Universe.
During the launch, which is part of the Numerical Cosmology 2012 workshop at the Centre for Mathematical Sciences at the University of Cambridge, Professor Hawking said: “We have made spectacular advances in cosmology and particle physics recently. Cosmology is now a precision science, so we need machines like COSMOS to reach out and touch the real universe, to investigate whether our mathematical models are correct.”
The COSMOS supercomputer is part of the Science and Technology Facilities Council DiRAC High Performance Computing facility, a national service for UK cosmologists, astronomers and particle physicists, as well as non-academic users.
The Numerical Cosmology 2012 workshop, supported by Intel, has drawn together leaders in computational cosmology with technological innovators. Professor Peter Haynes, head of the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, said: “We are excited that such a diverse group of participants could make it – people who normally do not attend the same conferences – and the aim is to have a genuine cross-fertilisation of emerging applications and technologies”.
Professor Hawking added: “I hope that we will soon find an ultimate theory which, in principle, would enable us to predict everything in the Universe. However, participants at this workshop will be pleased to learn that this will not end our quest for a complete understanding. Even if we do find the ultimate theory, we will still need supercomputers to describe how something as big and complex as the Universe evolves, let alone why humans behave the way they do!”
The COSMOS consortium’s current programme of research aims to advance our understanding of the origin and structure of our Universe, primarily through the scientific exploitation of the cosmic microwave sky.
Dr Jeremy Yates, the Project Director for DiRAC, said: “The COSMOS supercomputer is an essential and vital part of the DiRAC Facility. DiRAC now offers five leading systems to UK researchers, two of which are in Cambridge. It allows the UK cosmology and extra-solar planet research communities to take a leading role in understanding how structure was formed in the very early Universe and the composition of the atmospheres of extra-solar planets. These activities will deepen our understanding of the origins of the cosmos and life, and make a vital contribution to the knowledge economy.”
“This flexible shared-memory system will enhance researchers’ capabilities at institutions across the UK and will ensure they remain at the forefront of cosmological research internationally,” concluded Professor Paul Shellard, Director of the Centre for Theoretical Cosmology.
Source: University of Cambridge