December 18, 2009
GENEVA, Dec. 18 -- At its 153rd session today, the CERN Council heard that the Large Hadron Collider ended its first full period of operation in style on Wednesday 16 December. Collisions at 2.36 TeV recorded since last weekend have set a new world record and brought to a close a successful first run for the world's most powerful particle accelerator. The LHC has now been put into standby mode, and will restart in February 2010 following a short technical stop to prepare for higher energy collisions and the start of the main research programme.
The LHC circulated its first beams of 2009 on Nov. 20, ushering in a remarkably rapid beam-commissioning phase. The first collisions were recorded on Nov. 23, and a world-record beam energy was established on Nov. 30. Following those milestones, a systematic phase of LHC commissioning led to an extended data-taking period to provide data for the experiments. Over the last two weeks, the six LHC experiments have recorded over a million particle collisions, which have been distributed smoothly for analysis around the world on the LHC computing grid.
"Council is extremely pleased and impressed by the way the LHC, the experiments and the computing Grid have operated this year," said President of Council Torsten Åkesson. "The laboratory set itself an ambitious but realistic programme at its February planning meeting. The fact that all the objectives set back then have been achieved is a ringing endorsement of the step-by-step approach adopted by the CERN management."
A technical stop is needed to prepare the LHC for higher energy running in 2010. Before the 2009 running period began, all the necessary preparations to run up to a collision energy of 2.36 TeV had been carried out. Running at higher energy requires higher electrical currents in the LHC magnet circuits. This places more exacting demands on the new machine protection systems, which need to be readied for the task. Commissioning work for higher energies will be carried out in January, along with adaptations to the hardware and software of the protection systems that the 2009 run showed to be necessary. Taking advantage of the stop, the CMS experiment will upgrade part of its water cooling system.
"So far, it is all systems go for the LHC," said CERN Director General Rolf Heuer. "This first running period has served its purpose fully: testing all the LHC's systems, providing calibration data for the experiments and showing what needs to be done to prepare the machine for a sustained period of running at higher energy. We could not have asked for a better way to bring 2009 to a close."
Among other Council business was the question of geographic enlargement of CERN. Council heard from a working group established in 2008 to examine this question, and accepted a series of guiding principles concerning the geographic enlargement of CERN, with a possible associate status involving balanced benefits and obligations being developed. In parallel, CERN has received five applications for membership over the past 12 months. Council decided to establish a working group to undertake the tasks of technical verification and fact-finding relating to these applications.
This was the last Council meeting to be chaired by Professor Åkesson, who hands over the Council's Presidency to Professor Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (CNRS/IN2P3).
"It has been a privilege to preside over the CERN Council during this crucial phase in the history of CERN and of particle physics," said Professor Åkesson, "and I am very pleased to be handing over to my friend and colleague Michel Spiro on such a high note."
"I am greatly honoured to have been elected President of the CERN Council," said Professor Spiro. "I will be the Council's 20th President, and it is with humility that I take up the mantle of my illustrious predecessors, not least Professor Åkesson, who has made significant progress with the Organization over the term of his mandate. With the first results from the LHC eagerly anticipated, the period ahead promises to be a golden era: it is these results that will shape the future of particle physics and of CERN."
Full details of the 153rd meeting of the CERN Council are available on the CERN Council Web site at cern.ch/council/.
CERN, the European Organization for Nuclear Research, is the world's leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. India, Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission and UNESCO have Observer status.
In quieter times, the call to fund big science with big systems resonates further than when ears are already burning with sour economic and national security news. For exascale's future, however, the time could be ripe to instill a sense of urgency...
In a recent solicitation, the NSF laid out its needs for furthering scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the NSF's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls 'Climate in a Box,' a system it describes as a desktop supercomputer.
May 22, 2013
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today. That could be made possible through recent advancements made with the Raspberry Pi computers.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud, benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 15, 2013
Supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of atomic states, the optimization of chemical catalysts, and now the modeling of popping bubbles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that deliver affordability, security, and scalability.
04/15/2013 | Bull | "50% of HPC users say their largest jobs scale to 120 cores or less." How about yours? Are your codes ready to take advantage of today's and tomorrow's ultra-parallel HPC systems? Download this White Paper by analyst firm Intersect360 Research to see what Bull and Intel's Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms featuring the latest processor and network technologies, and supports a wide range of datacenter cooling requirements.