August 14, 2012
Scientists at the University of Texas at Austin have embarked on a research project to understand the unique communications of an exotic rodent. The cloud forests of Costa Rica are home to the singing mouse, which interacts socially through vocalizations. UT researcher Steven Phelps, along with his peers, hopes to reveal the genetic characteristics that explain the species’ use of these “songs.” To complete their research, the group is tapping the computational power of supercomputers at the Texas Advanced Computing Center (TACC). Last week, TACC profiled the project on their website.
The singing mouse is certainly not the only rodent to make vocalizations, but it relies heavily on this ability to communicate over long distances. Two videos below exhibit the mouse’s song, a series of chirps not unlike those of a bird. The first video plays in real time; the second, filmed with a high-speed camera, is played back 70 percent slower.
Phelps’ team believes the cause of this behavior may be related to the FOXP2 gene, which is present in both mouse and human genomes. The gene has special significance because it’s believed to be connected to human speech problems. As a result, Phelps believes studying the singing mice might eventually help researchers understand the origins of these disorders.
"We ask two things, whether there are sequence changes in the DNA that are associated with the elaboration of the song and whether particular elements seem to be interacting with FOXP2 more," said Phelps. "That gives us leads into what role FOXP2 might play into the elaboration of vocalization."
To perform the genomics analysis, the researchers employed two TACC-resident supercomputers, Lonestar and Ranger. Currently ranked number 40 on the TOP500 list, Ranger is an AMD-powered system with 123TB of memory. Not far behind in 67th place is Lonestar, running Xeon X5680 CPUs and 44TB of memory.
The systems were tasked with assembling small, overlapping fragments of DNA into a complete sequence. The assembly software processes large amounts of genomic data and requires high-memory nodes to hold all the information. Using the supercomputers, the team was able to run their applications in two hours, a vast improvement over the three days it takes to execute the same code on a desktop machine.
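To illustrate the idea behind merging overlapping fragments into a full sequence, here is a minimal, greedy overlap-assembly sketch in Python. This is not the team's actual pipeline or software (the article does not name it); the function names, the toy reads, and the greedy strategy are illustrative assumptions only.

```python
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `a` matching a prefix of `b`,
    provided it is at least `min_len` characters; otherwise 0."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)  # candidate suffix start
        if start == -1:
            return 0
        if b.startswith(a[start:]):         # full suffix matches b's prefix
            return len(a) - start
        start += 1

def greedy_assemble(reads: list[str]) -> str:
    """Repeatedly merge the pair of reads with the largest overlap
    until one sequence remains (a toy greedy assembler)."""
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b)
                    if olen > best_len:
                        best_len, best_i, best_j = olen, i, j
        if best_i is None:                  # no overlaps left: concatenate
            return "".join(reads)
        merged = reads[best_i] + reads[best_j][best_len:]
        reads = [r for k, r in enumerate(reads) if k not in (best_i, best_j)]
        reads.append(merged)
    return reads[0]

# Toy example: three overlapping fragments of a short sequence.
fragments = ["AGACCTG", "CCTGCCG", "ATTAGAC"]
print(greedy_assemble(fragments))  # ATTAGACCTGCCG
```

Real assemblers handle sequencing errors, repeats, and billions of reads, which is why the actual analysis needed high-memory supercomputer nodes rather than a desktop.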
Full story at TACC