August 31, 2009
BLACKSBURG, Va., Aug. 31 -- Wu Feng, associate professor of computer science and electrical and computer engineering at Virginia Tech, is one of 38 professors worldwide to receive the NVIDIA Professor Partnership Program Award.
NVIDIA Research explores challenging topics on the frontiers of visual, parallel, and mobile computing.
Feng received the award to pursue his research on accelerating the performance of key biological applications on graphics processing units (GPUs). The award consisted of an unrestricted cash gift and equipment donations for research and teaching.
Feng uses GPUs to conduct research on genetic sequence alignment as well as temporal data mining to assist in reverse-engineering the brain. He offers an experimental course on accelerator-based parallel computing.
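As a rough illustration of the kind of data-parallel work such GPU acceleration involves, the sketch below scores many fixed-length query/reference sequence pairs in a single CUDA kernel, one thread per pair. The kernel name, fixed sequence length, and match/mismatch scores are assumptions chosen for brevity; the example is not drawn from Feng's actual codes (such as mpiBLAST or GPU-accelerated sequence alignment), which use far more sophisticated dynamic-programming and memory-layout techniques.

// Illustrative sketch only: a trivially data-parallel scoring kernel in the
// spirit of GPU-accelerated sequence comparison. SEQ_LEN, NUM_PAIRS, and the
// +2/-1 match/mismatch scores are assumptions made for this example.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define SEQ_LEN 64          // assumed fixed sequence length
#define NUM_PAIRS 1024      // number of query/reference pairs to score

// Each thread scores one query against one reference: +2 per match, -1 per mismatch.
__global__ void scorePairs(const char *queries, const char *refs, int *scores, int numPairs)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPairs) return;

    const char *q = queries + i * SEQ_LEN;
    const char *r = refs    + i * SEQ_LEN;
    int s = 0;
    for (int j = 0; j < SEQ_LEN; ++j)
        s += (q[j] == r[j]) ? 2 : -1;
    scores[i] = s;
}

int main()
{
    size_t seqBytes = (size_t)NUM_PAIRS * SEQ_LEN;
    char *hQ = (char *)malloc(seqBytes), *hR = (char *)malloc(seqBytes);
    int  *hS = (int *)malloc(NUM_PAIRS * sizeof(int));

    // Fill with synthetic DNA-like data for demonstration purposes.
    const char bases[4] = {'A', 'C', 'G', 'T'};
    for (size_t k = 0; k < seqBytes; ++k) {
        hQ[k] = bases[k % 4];
        hR[k] = bases[(k / 3) % 4];
    }

    char *dQ, *dR; int *dS;
    cudaMalloc(&dQ, seqBytes);
    cudaMalloc(&dR, seqBytes);
    cudaMalloc(&dS, NUM_PAIRS * sizeof(int));
    cudaMemcpy(dQ, hQ, seqBytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dR, hR, seqBytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (NUM_PAIRS + threads - 1) / threads;
    scorePairs<<<blocks, threads>>>(dQ, dR, dS, NUM_PAIRS);
    cudaMemcpy(hS, dS, NUM_PAIRS * sizeof(int), cudaMemcpyDeviceToHost);

    printf("score[0] = %d\n", hS[0]);

    cudaFree(dQ); cudaFree(dR); cudaFree(dS);
    free(hQ); free(hR); free(hS);
    return 0;
}

Production alignment codes replace the simple per-position comparison with dynamic-programming recurrences, but the overall structure of copying data to the device, launching a massively parallel kernel, and copying scores back is the same.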
Feng's past honors include winning the first annual Southeastern Universities Research Association (SURA) Intellectual Property to Market (IP2M) competition in 2008 with his Ph.D. student Song Huang. They developed software called EcoDaemon that can save data centers millions of dollars in energy costs. Beyond the energy savings, the software improves the reliability and useful life of data-center computers by reducing core temperatures, providing an opportunity to significantly lower the cost and environmental impact of data centers and many other computing devices.
Feng is also the co-developer of the Green500 List, a ranking of environmentally friendly, low-energy supercomputers that complements the TOP500 List. He was named to HPCwire's Top People to Watch List in 2004.
Feng directs Virginia Tech's Synergy Laboratory. Previous professional stints include The Ohio State University, Purdue University, University of Illinois at Urbana-Champaign, Orion Multisystems, Vosaic, IBM T.J. Watson Research Center, NASA Ames Research Center, and most recently, Los Alamos National Laboratory.
His research interests encompass usable and accessible high-end computing from the perspective of systems software, middleware, and applications. As such, his research often bridges multiple disciplines: networking, monitoring and measurement, green computing, and large-scale data mining and bioinformatics, most notably mpiBLAST. He has more than 150 peer-reviewed technical publications, and his work has been featured in media outlets such as The New York Times, CNN, and BBC News.
He received a Bachelor of Science degree in electrical and computer engineering and in music (honors) in 1988 and a Master of Science degree in computer engineering from The Pennsylvania State University in 1990. He earned a Ph.D. in computer science from the University of Illinois at Urbana-Champaign in 1996. He is a senior member of the IEEE.
NVIDIA's Research and university teams are dedicated to building relationships and collaborating with professors at key universities worldwide. Through these partnerships, NVIDIA aims to inspire cutting-edge technological innovation through advanced research and to find new ways of enhancing the teaching and learning experience.
About the College of Engineering
The College of Engineering at Virginia Tech is internationally recognized for its excellence in 14 engineering disciplines and computer science. The college's 6,000 undergraduates benefit from an innovative curriculum that provides a "hands-on, minds-on" approach to engineering education, complementing classroom instruction with two unique design-and-build facilities and a strong Cooperative Education Program. With more than 50 research centers and numerous laboratories, the college offers its 2,000 graduate students opportunities in advanced fields of study such as biomedical engineering, state-of-the-art microelectronics, and nanotechnology. Virginia Tech, the most comprehensive university in Virginia, is dedicated to quality, innovation, and results for the commonwealth, the nation, and the world.
Source: Lynn Nystrom, Virginia Tech