HPC Matters is a joint blog in which contributors from the Tabor Communications team share their observations and insights on HPC matters.
July 10, 2008
If you haven't heard yet, the world as we know it is about to end. Preparations are being made now. Don't bother getting your affairs in order -- that'll do you no good. To what can we attribute this impending doom? The good folks at CERN, who have engineered the Large Hadron Collider (LHC), which is set to go online this August. According to their latest press release, they're ready to break the seventh seal and open the gates to hell as soon as they get all of their sectors cold enough to simulate the void we call "space."
Forgive my facetiousness, but claims of impending doom always have a way of raising my doubts. I don't remember what I was doing at the turn of the millennium, but it wasn't sitting in a basement with a long-term supply of freeze-dried food and a shotgun in hand to protect myself from the Y2K bug. But maybe this time, I should be concerned.
To get those who aren't aware up to speed, CERN has built what it calls the Large Hadron Collider. The LHC is a very sophisticated machine -- in fact, the most sophisticated machine ever built -- designed to simulate conditions at the time of the "Big Bang." This machine, a particle accelerator, will recreate the conditions of space and then smash electrically charged particles into each other so that scientists can observe all the cool stuff that happens when particles collide at cosmic speeds.
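For the curious, "cosmic speeds" is barely an exaggeration. Here's a rough back-of-envelope sketch in Python -- assuming the LHC's published design energy of 7 TeV per proton and a proton rest energy of about 0.938 GeV -- of just how close to the speed of light those particles get:

```python
import math

# Back-of-envelope: how fast do LHC protons actually go?
# Assumed figures: 7 TeV design beam energy per proton (CERN's published
# spec) and a proton rest energy of roughly 0.938 GeV.
beam_energy_gev = 7000.0        # design energy per proton, in GeV
proton_rest_gev = 0.938         # proton rest energy, in GeV
c = 299_792_458.0               # speed of light, in m/s

gamma = beam_energy_gev / proton_rest_gev      # Lorentz factor, about 7460
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)       # speed as a fraction of c

print(f"Lorentz factor: {gamma:.0f}")
print(f"Speed: {beta:.9f} c, i.e. {(1.0 - beta) * c:.1f} m/s slower than light")
```

In other words, at full design energy each proton would be travelling within about three meters per second of the speed of light.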
It sounds harmless enough -- and if you read the CERN press release, you'll note that it downplays the theoretical dangers this very expensive experiment poses -- but not everyone is convinced. Enter the ironically named LHC Legal Defense Fund, whose purpose is to try to stop the experiment from happening. Its backers argue that the risks of the experiment outweigh the benefits, and that we could be facing such things as the creation of a miniature black hole that doesn't dissipate (as CERN researchers expect it would), but instead expands at an exponential rate, sucking in the Earth and -- well, you get the picture. Communities have cropped up to discuss the subject and air concerns about the experiment. Their number one question seems to be: do the benefits of this experiment outweigh the theoretical risks?
It's actually not a bad question.
"There is no cause for concern," says CERN, citing a newly updated report that has been reviewed by themselves and the 20 member Scientific Policy Committee (SPC), who unanimously concluded that the new particles produced by the LHC will pose no danger.
Safe or not, it appears that science is marching forward (on the presumption, of course, that they've got everything under control). And besides, how bad could being sucked into a black hole really be?
Posted by Isaac Lopez - July 09, 2008 @ 9:00 PM, Pacific Daylight Time
Isaac Lopez is the Marketing Director for Tabor Communications.