May 11, 2010
OSU physicist plays key role in world's largest physics experiment
COLUMBUS, Ohio, May 11 -- Using the world's most powerful particle accelerator and mass data storage sites, such as the Ohio Supercomputer Center, more than 1,000 international physicists, engineers and technicians -- including Ohio State University Professor Thomas J. Humanic -- now have begun the process to unravel questions about the first moments of the universe.
In April, physicists working on the ALICE project (short for A Large Ion Collider Experiment) began recording data from collisions within the Large Hadron Collider, operated by the European Organization for Nuclear Research (CERN) near Geneva, Switzerland. The researchers hope to find answers to fundamental questions about the birth of the universe, matter vs. antimatter, the nature of dark matter and maybe even the existence of other dimensions.
The ALICE scientists employ a series of nearly 9,600 powerful magnets to carefully propel and collide opposing beams of protons, and beams of lead nuclei later this year, at nearly the speed of light around a 17-mile underground loop. The proton-proton collisions were conducted at seven tera-electron volts (TeV, a unit of energy used in high-energy physics). These are the highest-energy proton collisions ever produced in a laboratory -- 3.5 times higher than the previous record, set at the Tevatron particle collider, located near Chicago at the Department of Energy's Fermilab.
The ALICE collisions expel hundreds to thousands of small particles, including quarks -- which make up the protons and neutrons of the atomic nuclei -- and gluons -- which bind the quarks together. For a fraction of a second, these particles form a fiery-hot plasma that hasn't existed since the first moments after the Big Bang, about 14 billion years ago.
Within the massive 52-foot ALICE detector, 18 sensitive sub-detectors measure the behavior of the expelled particles, recording up to approximately 1.25 gigabytes of data per second -- six times the contents of Encyclopedia Britannica every second.
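As a rough back-of-the-envelope check (these daily totals are not in the original article, only the 1.25 gigabytes-per-second figure is), that sustained rate adds up quickly:

```python
# Rough scale of the ALICE data stream, assuming the quoted
# peak rate of 1.25 GB/s is sustained for a full day.
GB_PER_SEC = 1.25
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

tb_per_day = GB_PER_SEC * SECONDS_PER_DAY / 1000  # decimal terabytes
print(f"~{tb_per_day:.0f} TB of detector data per day at peak rate")
```

At that rate the detector would produce roughly 108 TB per day, which is why distributed storage sites such as OSC are essential.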
The massive data sets are now being collected and distributed to researchers around the world through high-speed connections to the LHC Computing Grid (LCG), a network of computer clusters at scientific institutions, including the Ohio Supercomputer Center. The network employs the connectivity of private fiber-optic cable links, as well as existing portions of the public Internet.
The LCG is composed of more than 100,000 processors at 130 organizations across 34 countries and is organized into four levels, or 'tiers.' Tier 0 is CERN's central computer, which distributes data to the eleven Tier 1 sites around the world. The Tier 1 sites, in turn, coordinate and send data to Tier 2 sites, which are centers that provide storage capacity and computational analysis for specific tasks.
Scientists access the stored data through Tier 3 sites -- individual computers operated at research facilities.
"Traditionally, researchers would do much, if not all, of their computing at one central computing center. This cannot be done with the ALICE experiments because of the large data volumes," said Humanic. "OSC has been contributing computing resources to the project from the very beginning of ALICE's distributed computing efforts, starting in 2000."
Construction of the LHC began in 1995, when much of the necessary computational and networking technologies didn't yet exist. The long-term plan for the project loosely relied upon a concept referred to as Moore's law, which describes the trend of computer processing power doubling every two years.
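Moore's law can be sketched as a simple exponential. The short helper below (an illustration, not part of the project's actual planning tools) projects the growth factor implied by doubling every two years over the LHC's construction period:

```python
# Projected growth in compute capacity under Moore's law:
# capacity doubles roughly every `doubling_period` years.
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return the multiplicative growth in compute capacity
    after `years`, assuming exponential doubling."""
    return 2 ** (years / doubling_period)

# From the start of LHC construction (1995) to first ALICE
# collisions (2010) -- a 15-year span:
factor = moores_law_factor(2010 - 1995)
print(f"Projected capacity growth, 1995-2010: ~{factor:.0f}x")
```

Under that assumption, computing capacity would grow by a factor of roughly 180 between groundbreaking and first data, which is the bet the project's long-term plan rested on.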
"If CERN had brought the LHC online much sooner, computing centers would have had a problem meeting the challenges," said Doug Johnson, a senior systems developer at OSC. "For quite some time, OSC has been moving to meet the needs of a different mode of research, where computers analyze the huge amounts of otherwise raw data collected from instruments, such as satellites, microscopes, sequencing machines and particle colliders."
A prime example of the development of new technologies to meet the demands of the ALICE project is the emergence of grid computing. "OSC was one of the first adopters of the ALICE-developed AliEn (ALICE Environment) grid infrastructure," said Johnson.
As a Tier-2 site on the LCG, OSC this year has committed, through its normal allocations process, 30 terabytes of data storage and one million processor hours -- equal to about 115 home computers running 24 hours a day for a year, according to Johnson.
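The "115 home computers" comparison is easy to verify (a sanity check added here, not from the article): one million processor-hours divided by a year of continuous 24/7 operation gives the equivalent machine count.

```python
# Sanity check of the processor-hours comparison: one million
# processor-hours spread over a year of round-the-clock operation.
HOURS_PER_YEAR = 24 * 365          # 8,760 hours in a non-leap year
committed_hours = 1_000_000        # OSC's annual commitment

equivalent_machines = committed_hours / HOURS_PER_YEAR
print(f"Equivalent single-processor machines: ~{equivalent_machines:.0f}")
```

The result is about 114 machines, consistent with the article's rounded figure of 115.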
"This data will be accessed by Dr. Humanic and his OSU colleagues, as well as researchers anywhere in the world, for downloading, reconstruction and analysis," Johnson said. "Researchers can sit at their laptops, write small programs or macros, submit the programs through the AliEn system, find the necessary ALICE data on AliEn servers and then run their jobs through centers such as OSC."
Beyond serving as a storage and analysis resource for researchers working on the project, "OSC also has been critical in the development and testing of a computing model to analyze the ALICE data," Humanic said. Prior to the actual experiments at the LHC, OSC provided 300,000 CPU hours for data simulations.
Ohio State is the only institution in the U.S. collaborating on three of the LHC's four major detectors. Beyond ALICE, researchers within the OSU Department of Physics also are collaborating on two other large-scale LHC experiments: the ATLAS (A Toroidal LHC Apparatus) and CMS (Compact Muon Solenoid) projects. Each of the three projects uses general-purpose detectors to analyze particles produced by collisions in the accelerator. The OSU-OSC efforts are jointly funded by the National Science Foundation and the Department of Energy.
About the Ohio Supercomputer Center
The Ohio Supercomputer Center (OSC) is a catalytic partner of Ohio universities and industries, providing a reliable high performance computing and high performance networking infrastructure for a diverse statewide/regional community including education, academic research, industry, and state government. Funded by the Ohio Board of Regents, OSC promotes and stimulates computational research and education in order to act as a key enabler for the state's aspirations in advanced technology, information systems, and advanced industries. For more, visit http://www.osc.edu.
Source: Ohio Supercomputer Center