November 18, 2008
URBANA, Ill., Nov. 18 -- A human-centric future for consumer computing -- where mobile devices, virtual environments, and anthropomorphic communication interfaces allow humans to move seamlessly between cyber- and physical spaces -- is possible with the power of multicore parallel computing. A major impediment to this vision is that parallel programming today is remarkably difficult and the domain of a few experts. The Universal Parallel Computing Research Center (UPCRC) at Illinois has released a white paper outlining its research agenda to bring parallel computing to mainstream consumer applications and make multicore parallel programming synonymous with programming. The paper discusses three primary research themes.
Focus on Disciplined Parallel Programming -- Sequential languages have evolved to support well-structured programming, and provide safety and modularity. Mechanisms for parallel control, synchronization, and communication have not yet undergone a similar evolution. The UPCRC/Illinois takes the optimistic view that parallelism can be tamed for all to use by providing disciplined parallel programming models, supported by sophisticated development and execution environments. The white paper lays out an agenda to bring to parallel programming the analogs of the tenets underlying modern sequential programming.
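As a concrete, simplified illustration of what "disciplined" parallelism means, the sketch below uses Python's standard `concurrent.futures` pool. It is not UPCRC code, but it exhibits the structured style the white paper advocates: parallelism confined to a single scope, no shared mutable state among tasks, and automatic cleanup.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def parallel_map(fn, items, workers=4):
    # Disciplined fork-join parallelism: all worker activity begins and
    # ends inside this scope, tasks share no mutable state, and the
    # "with" block guarantees the pool is shut down -- structural
    # properties that tools and compilers can reason about, unlike
    # ad-hoc thread creation with shared variables and manual joins.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

print(parallel_map(square, range(6)))  # [0, 1, 4, 9, 16, 25]
```

Because the parallel region is a well-delimited block with deterministic semantics, it can be checked, composed, and refactored much like a sequential loop -- the analog of structured control flow in sequential languages.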
Multi-Front Attack on Multicore Programming -- The UPCRC/Illinois is mounting an integrated, broad-based attack on parallelism at all levels of the system stack, from applications down to hardware, using every weapon in the arsenal to enable performance, scalability, and programmability. This includes investigating disciplined parallel languages, metaprogramming and autotuners, and domain-specific environments; developing a powerful translation environment that exploits information from multiple sources at different times in the life of a program; developing an adaptive runtime to handle heterogeneity and automate resource management; developing new hardware mechanisms to enhance performance, scalability, and programmability; and rethinking the customary division of labor among the layers of the system stack. Refactoring tools will help move existing code to new environments, and formal methods-based techniques and tools will help ensure correctness.
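Of the techniques listed above, autotuning is the easiest to show in miniature. The hypothetical sketch below (plain Python, not UPCRC code) captures the core idea behind autotuners such as ATLAS and FFTW: enumerate functionally equivalent implementation variants, time them empirically on the target machine, and select the fastest.

```python
import timeit

# Two functionally equivalent variants of the same kernel.
def sum_loop(data):
    total = 0
    for x in data:
        total += x
    return total

def sum_builtin(data):
    return sum(data)

def autotune(variants, data, number=100, repeat=3):
    # Time each candidate on representative input and return the one
    # with the best observed runtime. Real autotuners search far larger
    # spaces (tile sizes, unroll factors, generated code variants), but
    # the measure-and-select loop is the same.
    def cost(fn):
        return min(timeit.repeat(lambda: fn(data), number=number, repeat=repeat))
    return min(variants, key=cost)

data = list(range(1000))
best = autotune([sum_loop, sum_builtin], data)
print(best(data) == sum_loop(data))  # prints True: variants agree on the result
```

Which variant wins depends on the machine and input, which is precisely why the selection is done empirically at tuning time rather than fixed at design time.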
Human-Centric Vision of Future Consumer Applications -- Driving the agenda is a human-centric vision of future consumer applications, backed up by research on application technologies to enable quantum leaps in immersive visual realism, reliable natural-language processing, and robust telepresence. Investigating these applications reveals new parallel patterns and serves as a testbed for evaluating, refining, and ultimately proving UPCRC/Illinois ideas on multicore programming.
The complete paper is available online at www.upcrc.illinois.edu or www.parallel.illinois.edu. Parallel@Illinois will also distribute the paper from its SC08 booth (#2040), Nov. 17-20, 2008. The UPCRC welcomes feedback on the paper. Comments and suggestions can be forwarded to firstname.lastname@example.org or posted online at http://www.upcrc.illinois.edu/whitepaper.html.
About the Universal Parallel Computing Research Center (UPCRC)
The Universal Parallel Computing Research Center at the University of Illinois is a joint research endeavor of the Department of Computer Science, the Coordinated Science Laboratory, the Department of Electrical and Computer Engineering, and corporate partners Microsoft and Intel. The center builds on a history of Illinois innovation in parallel computing that spans four decades. For more information, visit www.upcrc.illinois.edu.
Parallel@Illinois (www.parallel.illinois.edu) is the collective representation of parallel computing research and education at the University of Illinois at Urbana-Champaign. Current efforts include:
-- Universal Parallel Computing Research Center
-- Blue Waters
-- Gigascale Systems Research Center
-- Cloud Computing Testbed
-- CUDA Center of Excellence
-- Institute for Advanced Computing Applications and Technologies
-- OpenSPARC Center of Excellence
About the Department of Computer Science
The Department of Computer Science at the University of Illinois is recognized throughout the world as a leader in computer science education and research and is consistently ranked among the top five programs in the nation. The department and its graduates have long been at the forefront of modern computing, beginning with ILLIAC in 1952 and continuing through the Internet era with YouTube and PayPal. For more information, visit www.cs.uiuc.edu.
About the Department of Electrical and Computer Engineering
The Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign is a world leader in education, research, and scholarship and is consistently ranked in the top five programs for undergraduate and graduate studies. ECE Illinois' 100-plus faculty members are experts in a broad range of areas and its more than 2,000 students are among the brightest of their generation. The department's 20,000 alumni are a driving force in the world of engineering and beyond. For more information, visit www.ece.illinois.edu.
About the Coordinated Science Laboratory
The Coordinated Science Laboratory at the University of Illinois is one of the nation's premier multidisciplinary research laboratories, focusing on information technology at the crossroads of computing, control, and communications. Created nearly 60 years ago, CSL continues to transform society by developing and deploying new technologies in areas such as defense, medicine, environmental sciences, robotics, life-enhancement for the disabled, and aeronautics. For more information, visit www.csl.illinois.edu.
Source: Department of Computer Science at the University of Illinois