November 20, 2009
The oft-contended but simplest statement of the case is that we need ubiquitous parallelism in the classroom. Once upon a time, only the lunatic fringe cared about parallelism, programming esoteric architectures squirreled away in very special corners of the globe. In the near future, most electronic devices will have multiple cores that benefit greatly from parallel programming. The low-hanging fruit is, of course, the student's laptop, and helping the student make full use of that laptop.
So how do we get there?
Our perception of next steps comes from close to a decade of collaboration pushing parallel and distributed computing education. This doesn't mean we are right, just that we have been walking the walk. Three of the four of us are computer scientists and Dave, our physicist, is essentially also one (of course he claims that we're all physicists). The bulk of our time together, outside of our respective day jobs teaching, is spent leading week-long workshops for faculty -- largely focused on the teaching of parallel and distributed programming and computational thinking. Our assertion is this: As computer architectures evolve from single core to multicore to manycore, the computer science curriculum must experience a commensurate single-course to multi-course to many-course evolution in terms of where parallelism is studied.
Thus, you're probably not surprised we're saying faculty education is the key way to get from here to there, using as many modes of conveyance as possible. Few of us CS educators learned what we need to teach parallelism in our courses from our own formal education. We possess a self-taught science/art, crafted via the hands-on, hard-knock cycles of design, debugging, and despair, which provided us with rich learning opportunities. This highlights the goals we have for our students: theory tightly coupled with the pragmatic skills of the practiced practitioner, learned via the cycles of design, debugging, and despair. Note that performance programming is wonderfully resurfacing in importance, for if you don't need performance, why bother with the complexity of a parallel solution? Just run on your friendly neighborhood SMP or NUMA architecture, which will suffice as a first-order solution for many problems. It was performance parallel programming that put the 'L' in lunatic fringe, and to raise 'L', we will ultimately need to look beyond the isolated graduate and undergraduate courses and weave the key components of parallelism into the fabric of all computer science courses, beginning at the earliest level.
So let's get specific on possibilities for the first courses at the undergraduate level. The core of CS1 typically starts with the nomenclature, theory, and components of a simple algorithm and a basic block of execution. Flow of control is the next extension: branches, loops, and functions. Parallelism is easily a natural next layer. When we introduce parallelism, we might demonstrate by conjuring with threads and shared memory, since the use of shared memory will not perturb the student's simple notion of array-like memory. Additionally, the most frequently used shared memory mechanism, OpenMP, allows a gradual move from pure von Neumann execution towards "pure" shared memory parallelism. This covers fine-grain parallelism. A hunger for a different course of studies leads to the coarse-grained approach of distributed memory parallelism with MPI. Larger-scale parallelism is naturally and necessarily discovered by students as the problems of interest continue to grow.
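To make that fine-grain, shared-memory step concrete, here is a minimal sketch in C using OpenMP. It is illustrative rather than drawn from any particular course: the array size, the work in the loops, and the final reduction are arbitrary choices, and the only OpenMP features assumed are the standard parallel for and reduction clauses.

```c
/* Minimal OpenMP sketch: fine-grain, shared-memory parallelism over an
 * ordinary array. Compile with, e.g., gcc -fopenmp openmp_sketch.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N];

int main(void) {
    double sum = 0.0;

    /* Loop iterations are divided among threads; the shared array
     * preserves the student's simple, array-like picture of memory. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 0.5 * i;

    /* The reduction clause quietly handles the shared-memory bookkeeping
     * for the running sum. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}
```

Removing the pragmas leaves a perfectly ordinary serial program, which is exactly the gradual move away from pure von Neumann execution described above.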
The legal battlefield of Amdahl and Gustafson is a good next stop, guiding us into the study of data structures and algorithms via a perilous path littered with algorithms that scale poorly. Unchecked and unplanned parallelism will lead us to throttled resources, whether the von Neumann bottleneck or the more insidious communication costs incurred when trying to tame a parallel algorithm. Students can learn of dwarvish parallel patterns and associated phenomena, such as a sequentially elegant quicksort quickly foundering in the presence of unamortized distributed memory costs.
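For reference, the two laws at the center of that battlefield can be stated compactly; the notation here, with P processors and f the serial fraction of the work, is ours rather than the article's:

```latex
% Speedup under the two laws, with P processors and serial fraction f.
\[
  S_{\mathrm{Amdahl}}(P) = \frac{1}{f + \frac{1-f}{P}},
  \qquad
  S_{\mathrm{Gustafson}}(P) = f + (1-f)\,P .
\]
```

Amdahl fixes the problem size, so the serial fraction caps the achievable speedup; Gustafson lets the problem grow with the machine, which is why the two camps can look at the same codes and reach such different conclusions.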
This is a good time to consider how to squeeze weeks and weeks of new material on parallelism into a semester. Something has to give and something will give, but this is not a new dilemma. It is something we each faced when first crafting what we will cover in a course. It is something we face to a greater or lesser extent every time we re-teach a course given the pace of change in our discipline.
Now it is time for an anecdote. Tom interviewed Dave Patterson as part of the "Teach Parallel" series of interviews. The interview ranged over many topics, one of which was Dave's fourth edition of "Computer Organization and Design," which gloriously has parallel topics woven into each chapter. This led to talking with Dave's publisher about targeting an adaptation of the book towards community colleges, such as Contra Costa College where Tom teaches. The publisher was surprised to learn that no dilution of the 703 pages was desired. Tom plans to cherry-pick the material for his Computer Architecture course, continuing an experiment he has been running in all his courses in which the entire book is covered, just at varying depths. It is important for Tom to convey how to be a good student, part of which is being able to self-learn from practitioners' resources. This raises a good point: more textbook support for parallelism is going to make this whole process a heck of a lot easier. Unfortunately, it takes a while to prime the curricular pump.
Computer architecture has traditionally incorporated elements of parallelism and concurrency: semaphores and atomic operations, pipelines and multiple functional units, SMP architectures, and instruction and data paths. It has always been the place where the key hardware issues of current architectures inform the software designed to run on them.
There are no easy answers, but there really are clear steps. We need to help students get to a place where they think of a single processing unit as just a special case of multiple processing units, much like they now learn to view a single variable as a special case of an array.
About the Authors
Thomas Murphy is a professor of Computer Science at Contra Costa College (CCC). He is chair of the CCC Computer Science program and is director of the CCC High Performance Computing Center, which has supported both the Linux cluster administration program and the computational science education program. Thomas has worked with the National Computational Science Institute (NCSI) since 2002. He is one of four members of the NCSI Parallel and Distributed Working group, which presents several three to seven day workshops each year, and helps develop the Bootable Cluster CD software platform, the LittleFe hardware platform, and the CSERD (Computational Science Education Reference Desk) curricular platform.
Paul Gray is an Associate Professor of Computer Science at the University of Northern Iowa. He created the Bootable Cluster CD project (http://bccd.net/) and provides instructional support for the National Computational Sciences Institute summer workshops on Cluster and Parallel Computing. He was SC08 Education Program Chair and serves on the executive committee for the SC07-11 Education Program.
Charlie Peck is the leader of the Cluster Computing Group (CCG) at Earlham College, a student/faculty research group in the Computer Science department. The CCG is the primary design and engineering team for LittleFe, developers of computational science software, e.g., Folding@Clusters, and technical contributors to Paul Gray's Bootable Cluster CD project. Additionally, Charlie is the primary developer on the LittleFe project.
Dave Joiner is an assistant professor of Computational Mathematics in the New Jersey Center for Science, Technology, and Mathematics Education. The NJCSTME focuses on the training of science and math teachers with an integrated view of modern math, science, and computing. Additionally, Dave has collaborated since 1999 with the efforts of the Shodor Education Foundation, Inc., and the National Computational Science Institute. He currently serves as a Co-PI on the Computational Science Education Reference Desk, the Pathway of the National Science Digital Library devoted to computational science education.