December 13, 2010
Morgan Kaufmann and NVIDIA collaborate to issue first volume of "GPU Computing Gems" in February 2011
BURLINGTON, Mass., Dec. 13 -- Data computation has been called the "third pillar of science," standing alongside the ancient pillars of logic and observation as a foundation upon which future scientific breakthroughs will rest. Graphics processing units (GPUs) have revolutionized data computation and are playing a key role in enabling leading researchers and academics to drive the next wave of scientific discovery.
Morgan Kaufmann, a global leader in cutting-edge computing content, has collaborated with NVIDIA Corporation, a leader in GPU computing technologies, to produce a new series of books that will demonstrate how GPUs and advanced parallel computing techniques can be harnessed within different domains to enable new scientific breakthroughs. Each GPU Computing Gems volume will provide practical techniques and real-world examples straight from the leading minds in general purpose GPU research.
Computational scientists increasingly turn to GPUs for computationally intensive applications, achieving dramatic gains in processing power and efficiency while reducing power consumption. The challenge for developers in this new arena of scientific research is learning how to program systems that use these concurrent processors effectively, and GPU Computing Gems was created to provide real-world tips and guidance to researchers. Each chapter presents techniques used in leading research, written to be accessible to readers in other fields and disciplines, allowing knowledge to cross-pollinate across the GPU computing spectrum.
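The shift those chapters teach, from one-element-at-a-time loops to thousands of independent operations executing at once, can be sketched in miniature. The example below is a hypothetical illustration (not an excerpt from the book) using SAXPY (y = a·x + y), a per-element operation that maps naturally to one GPU thread per element; NumPy's whole-array arithmetic stands in here for the parallel hardware.

```python
# Illustrative sketch only: a scalar loop vs. a data-parallel formulation of
# SAXPY (y = a*x + y). Each element is independent of the others, which is
# exactly the property that lets a GPU assign one thread per element.
import numpy as np

def saxpy_serial(a, x, y):
    # One element at a time, as a conventional CPU loop would compute it.
    out = y.copy()
    for i in range(len(x)):
        out[i] = a * x[i] + out[i]
    return out

def saxpy_parallel(a, x, y):
    # The whole array at once: NumPy evaluates every element's a*x[i] + y[i]
    # in a single vectorized operation, mirroring the GPU's thread-per-element
    # execution model.
    return a * x + y

x = np.arange(4, dtype=np.float64)   # [0, 1, 2, 3]
y = np.ones(4)                       # [1, 1, 1, 1]
assert np.allclose(saxpy_serial(2.0, x, y), saxpy_parallel(2.0, x, y))
```

Both formulations compute the same result; the data-parallel one simply exposes the independence of each element so that many processors can work simultaneously.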
GPU Computing Gems: Emerald Edition is the first volume in this new series, focusing on how GPU computing can be applied across a range of scientific domains.
Editor-in-Chief Wen-mei W. Hwu, the Walter J. ("Jerry") Sanders III-Advanced Micro Devices Endowed Chair in Electrical and Computer Engineering in the Coordinated Science Laboratory of the University of Illinois at Urbana-Champaign, has assembled leading researchers in parallel programming and gathered their solutions and experiences in one volume, with each section shaped under the guidance of expert editors.
The second volume, titled GPU Computing Gems: Jade Edition, will also be edited by Wen-mei W. Hwu and will gather experts from eight critical GPU computing domains.
The Jade Edition will be published in June 2011. Both titles are part of Morgan Kaufmann's Applications of GPU Computing series. Additional books will follow in 2011 and beyond to help researchers and developers leverage GPUs to improve application speed and efficiency.
About the Editor-in-Chief
Wen-mei W. Hwu is the Walter J. ("Jerry") Sanders III-Advanced Micro Devices Endowed Chair in Electrical and Computer Engineering in the Coordinated Science Laboratory of the University of Illinois at Urbana-Champaign. From 1997 to 1999, Dr. Hwu served as the chairman of the Computer Engineering Program at the University of Illinois. Dr. Hwu received his Ph.D. degree in Computer Science from the University of California, Berkeley. His research interests are in the areas of architecture, implementation, and software for high-performance computer systems. He is the director of the OpenIMPACT project, which has delivered new compiler and computer architecture technologies to the computer industry since 1987. He also serves as the Soft Systems Theme leader of the MARCO/DARPA Gigascale Silicon Research Center (GSRC) and on the Executive Committees of both the GSRC and the MARCO/DARPA Center for Circuit and System Solutions.
Coming in February 2011
GPU Computing Gems: Emerald Edition
Wen-mei W. Hwu, Editor-in-Chief
ISBN: 9780123849885; e-ISBN: 9780123849892
February 14th, 2011 | Hardcover | 900 pp
EUR 53.95/USD 74.95/GBP 45.99
Coming in June 2011
GPU Computing Gems: Jade Edition
Wen-mei W. Hwu, Editor-in-Chief
ISBN: 9780123859631; e-ISBN: 9780123859648
June 21st, 2011 | Hardcover | 900 pp
EUR 53.95/USD 74.95/GBP 45.99
About Morgan Kaufmann
Morgan Kaufmann has been bringing the knowledge of experts to the computing community since 1984. Its goal is to provide timely yet timeless content to research and development professionals, business leaders and IT managers, everyday practitioners, and academia. Morgan Kaufmann publishes textbooks and references in Artificial Intelligence, Computer Networking, Computer Architecture, Computer Graphics & Game Development, Data Management & Business Intelligence, Software Engineering, and User Experience & Human Computer Interaction. For more information, visit mkp.com.
About Elsevier Science & Technology Books
Elsevier Science & Technology Books has provided award-winning, leading-edge data and education resources to information professionals worldwide. By delivering world-class solutions both in print and online, Elsevier S&T Books is proud to play an essential role in some of the most distinguished scientific and technology communities in existence today. From economics and public health to microbiology and genetics, Elsevier has a wide variety of books and ebooks online for you to choose from.
About NVIDIA
NVIDIA (NASDAQ:NVDA) awakened the world to the power of computer graphics when it invented the GPU in 1999. Since then, it has consistently set new standards in visual computing with breathtaking, interactive graphics available on devices ranging from tablets and portable media players to notebooks and workstations. NVIDIA's expertise in programmable GPUs has led to breakthroughs in parallel processing which make supercomputing inexpensive and widely accessible. The company holds more than 1,600 patents worldwide, including ones covering designs and insights that are essential to modern computing. For more information, see www.nvidia.com.
Source: NVIDIA Corp.