December 03, 2012
LEXINGTON, Ky. Dec. 3 — The University of Kentucky commemorated 25 years of academic supercomputing with the announcement of the most powerful supercomputer in the university's history and the award of a $1 million "cyber infrastructure" grant from the National Science Foundation. The announcement was part of a cyber infrastructure symposium today, sponsored by UK Information Technology.
With the most recent upgrade, UK deployed a new, $2.6 million, high-performance computing cluster in partnership with Dell Inc. This cluster is more than three times as fast as the one it replaced, with a theoretical maximum of just over 140 teraflops (140 trillion mathematical calculations per second). The cluster contains nearly 5,000 central processing units and 48 high-performance graphics processing units.
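A cluster's theoretical maximum is conventionally computed as cores × clock rate × floating-point operations issued per cycle, summed over CPUs and GPUs. The sketch below illustrates the arithmetic; the hardware figures in it are hypothetical assumptions for demonstration, not the actual specifications of the UK cluster.

```python
# Illustrative calculation of theoretical peak performance.
# The core count, clock rate, and FLOPs-per-cycle below are
# assumed values, NOT the actual UK/Dell cluster specification.

def peak_teraflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak = cores * clock (GHz) * FLOPs per cycle, in TFLOPS."""
    return cores * clock_ghz * flops_per_cycle / 1000.0  # GFLOPS -> TFLOPS

# e.g. ~4,800 CPU cores at an assumed 2.6 GHz, issuing 8
# double-precision FLOPs per cycle (AVX-class vector units):
cpu_peak = peak_teraflops(4800, 2.6, 8)
print(f"CPU contribution to peak: {cpu_peak:.1f} TFLOPS")
```

Under these assumed figures the CPUs alone would contribute roughly 100 TFLOPS; the 48 GPUs would supply the remainder of a ~140 TFLOPS peak, since a single high-performance GPU of that era delivered on the order of 1 TFLOPS of double-precision throughput.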
"We live in an increasingly hyper-connected, technology-infused global community where our competitive edge is predicated upon the utilization of cutting-edge resources and effective deployment of our intellectual capital," said UK President Eli Capilouto. "Our new supercomputer and cyber infrastructure position us to recruit and retain world-class research scientists who can connect with colleagues across the globe and attract competitive funding to support our growing enterprise as a nationally ranked public research university."
In addition to this investment, the university has received a $1 million competitive "cyber infrastructure" grant from the National Science Foundation, to advance research through software-defined networking.
Software-defined networking changes the way very large data sets are shared by giving researchers and their applications more direct, dynamic control over how data flows between computing resources and collaborators. This enables better utilization of remote computing resources, improves data integrity and security, and expedites research overall.
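The core idea of software-defined networking is that a central controller installs match/action "flow rules" into switches, so an application can programmatically steer its own traffic. The toy sketch below illustrates that concept only; the class, rule, and link names are hypothetical and do not correspond to any real SDN API such as OpenFlow.

```python
# Conceptual sketch of SDN flow control: a controller installs
# match/action rules into a switch's flow table, so traffic can be
# steered per-application. Purely illustrative -- not a real SDN API.

class Switch:
    def __init__(self):
        self.flow_table = []  # ordered list of (match_fn, action) rules

    def install_rule(self, match_fn, action):
        """Controller-side call: add a match/action rule to the table."""
        self.flow_table.append((match_fn, action))

    def forward(self, packet):
        """Apply the first matching rule; drop if nothing matches."""
        for match_fn, action in self.flow_table:
            if match_fn(packet):
                return action
        return "drop"

# A researcher's application (acting as controller) programs the path:
sw = Switch()
# Hypothetical rule: route bulk transfers to a collaborator over a
# dedicated research link instead of the general-purpose network.
sw.install_rule(lambda p: p.get("dst") == "collaborator-site",
                "out:dedicated-research-link")
print(sw.forward({"dst": "collaborator-site"}))  # out:dedicated-research-link
print(sw.forward({"dst": "unknown-host"}))       # drop
```

The point of the sketch is the separation of concerns: the forwarding hardware stays simple, while routing decisions move into software that the application itself can drive.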
"We are competing with some of the top institutions across the country," said Vince Kellen, UK's senior vice provost for academic planning, analytics and technologies, and its chief information officer. "This new high-performance cluster, combined with the cyber infrastructure grant, will really help to keep us in the top group of 20 or 30 universities."
Speakers at the symposium — including Capilouto, Kellen, Computer Science Professor James Griffioen and several researchers who utilize UK's cyber infrastructure in their work — will review UK's advancements in computing over the past quarter-century, explore how cyber infrastructure enhances current research, and offer a glimpse into the future of research computing.
Researchers at UK use cyber infrastructure across a wide range of disciplines, including drug design, materials genomics, land use management, nanoscale materials, and the biochemistry of renewable fuels.
Tom Mueller, professor of plant and soil sciences, uses the supercomputer in his research, which is focused on developing techniques to identify where eroded waterways are likely to occur across agricultural fields.
"To calculate terrain attributes on my PC for one county could take a week," Mueller said. "On the supercomputer, it takes only minutes. We plan to do these analyses for the entire state of Kentucky."
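Terrain-attribute analysis parallelizes almost perfectly, which is why the speedup Mueller describes is possible: each tile of an elevation grid can be processed independently, so thousands of cores can each take a tile. The sketch below uses a standard finite-difference slope formula on a tiny synthetic grid; the data and cell size are illustrative assumptions, not Mueller's actual dataset or method.

```python
# Why terrain analysis scales on a cluster: each grid tile is
# independent, so tiles map one-to-one onto cores. The slope formula
# is a standard central-difference approximation; the elevation data
# below is synthetic, purely for illustration.
import math

def slope_deg(grid, r, c, cell=30.0):
    """Slope in degrees at interior cell (r, c), with `cell` metres spacing."""
    dz_dx = (grid[r][c + 1] - grid[r][c - 1]) / (2 * cell)
    dz_dy = (grid[r + 1][c] - grid[r - 1][c]) / (2 * cell)
    return math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))

# Tiny synthetic elevation grid (metres). A county-scale analysis
# covers millions of cells, split into independent tiles per core.
elev = [[100, 101, 102],
        [103, 105, 107],
        [106, 109, 112]]
print(f"slope at centre cell: {slope_deg(elev, 1, 1):.1f} degrees")
```

Because no tile depends on results from any other tile (beyond a small halo of boundary cells), a week of single-PC work divides nearly evenly across the cluster's thousands of cores.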
Gary Ferland, professor of theoretical astrophysics, is investigating how the chemical elements found on Earth were first formed in stellar furnaces billions of years ago. Ferland developed a large computer program, Cloudy, which is today one of the most widely used in theoretical astrophysics.
"We can build a star, the Orion Nebula, the Crab Nebula, a quasar, or conditions like what happened just after the Big Bang — in the computer, carefully simulate the exact physical processes that occur — and predict the light that is produced," Ferland said. "We can then compare this with what we observe with our telescopes and work backwards to understand what happened, out there and then."
Christina Payne, professor of chemical engineering, is looking into the fundamental nature of proteins, specifically enzymes, and how they interact with their environments. Industrially, enzymes have applications in fields such as green energy, where they are used to convert plant material into biofuels.
"Our molecular simulations are incredibly computationally demanding and require access to high-performance computing resources," Payne said. "The recent hardware upgrade greatly expands our ability to search for molecular-level insights that will inform future protein engineering efforts for biotechnology development."
Source: University of Kentucky