December 10, 2009
Dec. 9 -- University of Illinois computer science professor Bill Gropp and computer science affiliate professor Nitin Vaidya (ECE) have been named IEEE Fellows for the class of 2010. Gropp was selected for his contributions to high performance computing and message passing, and Vaidya was selected for his contributions to wireless networking protocols and mobile communications.
The grade of Fellow recognizes unusual distinction in the profession and is conferred by the IEEE Board of Directors upon those with an extraordinary record of accomplishments in any of the IEEE fields of interest. The accomplishments that are honored have contributed importantly to the advancement or application of engineering, science and technology, bringing the realization of significant value to society.
"This well warranted recognition of two of our faculty members by one of the most prestigious professional societies in our field affirms the consistent record of excellence they have exhibited in their research, as well as the great respect they command among their peers," said Michael Heath, interim head of department and Fulton Watson Copp Chair in computer science.
Professor Gropp's research interests are in parallel computing, software for scientific computing, and numerical methods for partial differential equations. His work investigates methods for combining numerical analysis techniques with parallel processing techniques to form solutions appropriate for execution on modern computing systems. His research also addresses issues such as scalability and hierarchical memory models in parallel computers.
Gropp played a major role in creating MPI, the Message Passing Interface, the standard interprocessor communication interface for large-scale parallel computers. Gropp is also a co-author of MPICH, one of the most influential MPI implementations to date, and co-wrote two books on the standard: Using MPI and Using MPI-2. He also co-authored the Portable, Extensible Toolkit for Scientific Computation (PETSc), one of the leading packages for scientific computing on highly parallel computers.
Among his other accomplishments, Gropp developed adaptive mesh refinement and domain decomposition methods with a focus on scalable parallel algorithms, and discussed these algorithms and their application in Parallel Multilevel Methods for Elliptic Partial Differential Equations.
Gropp serves as co-principal investigator for Blue Waters, a project at the National Center for Supercomputing Applications to build the first sustained-petascale resource for open scientific computing. Gropp also serves as deputy director for research at the Institute for Advanced Computing Applications and Technology at the University of Illinois.
Gropp is a fellow of the ACM, has received the IEEE Computer Society Sidney Fernbach Award honoring innovative uses of high performance computing in problem solving, and was recently named the inaugural HPC Community Leader by insideHPC.com.
Computer science affiliate faculty member and electrical and computer engineering professor Nitin Vaidya's research interests span networking and systems topics, including communications networks, wireless networks, and distributed systems.
His work is currently focused on theory and protocols for multi-channel wireless networks, secure multi-hop wireless networks, and rate and power control for wireless networks. His group is also developing a multi-channel, multi-interface wireless mesh testbed known as Net-X.
Net-X provides support for exploiting various forms of diversity available in a wireless network, such as multiple channels, multiple interfaces, and multiple transmission rates and power levels. The goal of the Net-X project is to develop generic OS support for utilizing these interface capabilities, cleanly integrated into the network stack.
Source: University of Illinois at Urbana-Champaign