December 10, 2009
Dec. 9 -- Dinosaurs have been virtually put through their paces by a new supercomputer, allowing scientists to get closer to understanding how they once moved. The team -- from The University of Manchester, the University of Oregon and Yale -- set up the "dinosaur dressage" with the help of HECToR, the UK Research Councils' supercomputer, currently the 20th fastest in the world.
They found that hopping hadrosaurs were fastest but that -- for safety reasons -- a two-legged running gait was most likely: just as almost anyone can muster a John Cleese "silly walk," very few can keep it up.
In addition, the team, funded by National Geographic and the Natural Environment Research Council, has shown how further research could reveal how large, fast animals move, both living and extinct.
In the meantime, Jurassic fanatics can simulate their very own dinosaur as the software (Windows, Mac, Linux) and models are freely available to download from http://www.animalsimulation.org.
Team leader Dr Bill Sellers, whose results are published in Palaeontologia Electronica this week, explains: "Everyone knows that dinosaurs come in all shapes and sizes. Most don't look like anything that's alive today and some are just plain bizarre. One group that fits this description well is the duck-billed dinosaurs, also known as hadrosaurs. Along with the strange appearance -- the eponymous duck-bill, peculiar skull ornaments, and long, slender forelimbs -- scientists have argued about how they might have moved. Did they walk on four limbs, two limbs, or a combination of both depending on the speed? It has even been suggested that some may have hopped like a kangaroo!
"Previously we have used computer simulation to calculate the top running speed of a range of two-legged dinosaurs. It turns out that looking at all the possible locomotor options in a single four-legged dinosaur is actually much trickier. Much like a dressage trained horse, the basic body shape is capable of using a whole range of possible gaits even though it is likely that it would much prefer some over others."
His colleague Dr Phil Manning, a palaeontologist at the University of Manchester, says: "Hadrosaurs are a much overlooked group of dinosaurs, often coming in second place to the predators, such as their contemporary T. rex. However, in this running race, it seems that the hadrosaurs had the edge on the predators...not surprising if they wanted to survive in a landscape shared by the largest predator on Earth.
"Fortunately for us Hector had just come online and could provide sufficient computational power for the job. We gave the computer simulation a completely free rein to come up with whatever form of locomotion it could. And indeed from a completely random set of starting conditions the model generated a full range of possible gaits: bipedal running and hopping as well as quadrupedal trotting, pacing and galloping.
"The big surprise was that hopping gait came out as fastest at 61 km/h, followed by quadrupedal galloping (58 km/h), and bipedal running (50 km/h)."
Dr Sellers, based at Manchester's Faculty of Life Sciences, adds: "We also looked at how these different ways of moving would apply forces to the skeleton. Here hopping came out the worst: if the hadrosaur had moved like that it would have destroyed its own skeleton.
"In the end bipedal running came out as probably the best compromise between performance and skeletal loading. But the really clear message is that there is still more research to be done, particularly looking at how large and fast animals can move within the margins of safety required by their skeletons."
A copy of the paper "Virtual palaeontology: gait reconstruction of extinct vertebrates using high performance computing" is available at http://palaeo-electronica.org/2009_3/180/index.html.
More information and simulation software and models are available at http://www.animalsimulation.org/index.php?option=com_content&view=article&id=50.
Source: University of Manchester