September 07, 2012
Sept. 6 -- The University of Texas at Arlington is teaming with the U.S. Department of Energy's Brookhaven and Argonne national laboratories to develop a universal version of PanDA, a workload management system built to process huge volumes of data from experiments at the Large Hadron Collider in Geneva, Switzerland.
The new project will bolster science and engineering research that relies on "big data," a priority recently promoted by the White House. The U.S. Department of Energy's Office of Advanced Scientific Computing Research has awarded a combined $1.7 million to Brookhaven and UT Arlington to fund the PanDA work over the next three years.
"PanDA has been an extremely useful piece of software. We could not have found the Higgs without it," said Kaushik De, a physics professor and director of UT Arlington's Center of Excellence for High Energy Physics. "It's been used by thousands of physicists around the world. We thought, 'Wouldn't it be nice if we repackaged it so others could use it too?' So, we proposed generalizing PanDA as a meta-application."
UT Arlington and Brookhaven developed PanDA for use by the ATLAS collaboration, a particle physics experiment at the Large Hadron Collider at CERN. ATLAS includes 3,000 physicists from UT Arlington and more than 170 other institutions, 40 of which are in the U.S. In July, it was one of the groups that made headlines by announcing the discovery of a new particle that scientists said could be the Higgs boson, also known as the "God particle."
The computing hardware associated with ATLAS is located at 100 computing centers around the world that manage more than 50 petabytes, or 50 million gigabytes, of data. PanDA links the computing centers and allows scientists to efficiently analyze the tens of millions of particle collisions taking place at the Large Hadron Collider each day. In addition to De's work developing PanDA, UT Arlington is home to the ATLAS Southwest Tier 2 grid computing center.
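PanDA is widely described as a pilot-based workload management system: a central server holds the queue of analysis tasks, and lightweight "pilot" processes running at each computing center pull work whenever slots free up. The Python sketch below illustrates that pattern only in the abstract; the names used here (Task, central_queue, run_pilot) and the site labels are illustrative assumptions, not the actual PanDA interfaces.

```python
# Minimal, hypothetical sketch of the pilot-job pattern: a central queue of
# analysis tasks, and one pilot per computing center pulling work from it.
import queue
import threading
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class Task:
    task_id: int
    dataset: str               # e.g. a block of recorded collision events
    site_hint: Optional[str]   # optional preference for where the data lives


# Central task queue standing in for the PanDA server's job table.
central_queue: "queue.Queue[Task]" = queue.Queue()


def run_pilot(site_name: str) -> None:
    """A pilot at one computing center: pull tasks until the queue is empty."""
    while True:
        try:
            task = central_queue.get(timeout=1)
        except queue.Empty:
            return  # no more queued work; this pilot exits
        time.sleep(0.01)  # stand-in for staging data and running the payload
        print(f"[{site_name}] processed task {task.task_id} on {task.dataset}")
        central_queue.task_done()


if __name__ == "__main__":
    # Enqueue a handful of analysis tasks.
    for i in range(12):
        central_queue.put(Task(task_id=i, dataset=f"collisions-block-{i}", site_hint=None))

    # One pilot thread per (pretend) computing center, all pulling from the same queue.
    sites = ["UTA_SWT2", "BNL", "CERN"]
    pilots = [threading.Thread(target=run_pilot, args=(site,)) for site in sites]
    for p in pilots:
        p.start()
    for p in pilots:
        p.join()
```

Because every pilot draws from the same central queue, work flows automatically to whichever center has free capacity, which is the property that lets a single system keep 100 computing centers busy.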
"The collaboration between UT Arlington and Brookhaven National Laboratory was established a long time ago and it is a very successful story," said Alexei Klimentov, Physics Applications Software Group Leader at Brookhaven. "We are physicists, and we developed this software to be used by a high-energy physics experiment. But very soon we realized that it could be used in other areas, such as astro-particle physics, biomedicine and others."
As an example of the possible applications, officials at the Alpha Magnetic Spectrometer, or AMS, are considering adopting PanDA software, De said. AMS is an International Space Station experiment searching for dark matter in the universe.
In March, the Obama administration announced the "Big Data Research and Development Initiative," a $200 million investment in tools to handle huge volumes of digital data needed to spur science and engineering discoveries. At that announcement, White House officials highlighted the Energy Department's Office of Advanced Scientific Computing Research as key to that ongoing effort.
PanDA – which was created with funding from the National Science Foundation and the Department of Energy's Office of High Energy Physics – was cited as an example of successful technology already in place.
Klimentov is the principal investigator on the new computing grants. De and Gergely Zaruba, an associate professor of computer science and engineering at UT Arlington, are co-principal investigators for UT Arlington's portion, which totals $704,488. The grant to Brookhaven will total $997,000 over three years. Other key scientists include Alexandre Vaniachine of Argonne National Laboratory and Torre Wenaus, Dantong Yu and Sergei Panitkin, all of Brookhaven.
Klimentov said one of the major challenges of the new project would be extending the grid-based PanDA system to a cloud-computing platform. He and De agree that the work will undoubtedly improve the capacity of the current PanDA system as it is used by ATLAS.
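To make the cloud extension concrete, the toy calculation below sketches the "bursting" idea under stated assumptions: grid sites absorb the baseline load, and extra cloud worker nodes are provisioned only for the overflow. The function names and the numbers in the example are hypothetical; a real deployment would call a cloud provider's API rather than the print placeholder shown here.

```python
# Hypothetical sketch of cloud bursting: provision cloud workers only for the
# part of the backlog that the grid cannot absorb on its own.

def workers_to_provision(queued_jobs: int,
                         grid_slots_free: int,
                         jobs_per_cloud_worker: int = 8) -> int:
    """Return how many cloud worker nodes to start for the current backlog."""
    overflow = queued_jobs - grid_slots_free
    if overflow <= 0:
        return 0  # the grid alone can handle the load
    # Round up so every overflow job has a slot.
    return -(-overflow // jobs_per_cloud_worker)


def provision_cloud_workers(count: int) -> None:
    """Placeholder for an actual cloud provisioning call."""
    print(f"starting {count} cloud worker node(s), each running a standard pilot")


if __name__ == "__main__":
    # Example: 500 queued jobs, 260 free grid slots, 8 job slots per cloud node.
    n = workers_to_provision(queued_jobs=500, grid_slots_free=260)
    provision_cloud_workers(n)  # -> starting 30 cloud worker node(s) ...
```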
The "big data" project is an example of innovation under way at UT Arlington, a comprehensive research institution of nearly 33,500 students in the heart of North Texas. Visit www.uta.edu to learn more.
Source: University of Texas at Arlington