Intel Brings Parallel Computing to High School

By John E. West

July 30, 2009

Earlier this month Intel announced it was helping lead a parallel programming experience for high school students. The three-day “Clubhouse Parallel Universe Boot-Camp” was held at Brooklyn Technical High School (BTHS). The idea is consistent with Intel’s overall drive to develop the expertise that application developers — and ultimately users — need to get the most out of the company’s chips. There is a clear business driver here, but in this case it lines up well with the broader societal goal of enabling users and developers to do more with technology.

The project started with Jeffrey Birnbaum from the Bank of America. Birnbaum has lots of experience working on lock-free and parallel programming techniques for “low latency high throughput messaging systems” of the kind you find in finance. Birnbaum’s idea started with interactions he had with a high school student interested in parallel programming, and he saw an opportunity to start at the high school level to teach students to “think parallel.”

“If students start thinking parallel when they are first introduced to software development, it opens the door to new creative solutions that more experienced programmers might not attempt. In essence, the student minds have not been spoiled by old serial programming methodologies and experimentation when multi-core multi-socket systems did not exist,” says Birnbaum. “We are at the beginning of a new age in programming where the exploitation of advanced multi socket multicore systems to solve new and interesting problems requires developers who combine a “think parallel” mindset with the skill to execute.”

Birnbaum connected with Intel’s Bob Chesebrough, and together they turned the vision into a pilot program at BTHS. Fifteen students participated in the first workshop, which wrapped up last week, writing real code on hardware donated by Intel, IBM, and BLADE Network Technologies.

I think these kinds of efforts are tremendously important for the future of our community. Intel’s Head Software Evangelist, James Reinders, took part in the event and gave me some time over email to answer a few questions.

-----

HPCwire: Why target high schoolers? Parallel programming has traditionally been left to college. Are the students ready to grasp the concepts?

James Reinders: Because they are ready to learn it as they learn programming, there is no reason to wait. Of all the programmers, these are the ones that will be doing parallel programming their entire careers. Virtually every new computer is ready for parallel programming — multicore processors are everywhere now. So, programming is parallel programming — it is fundamental.

I’d say parallel programming in the past has been a graduate-level activity because it has been a niche. It was not graduate level because it is too hard — it was graduate level because the machines to program in parallel were scarce, and the topic affected only a minority of programmers. Now that it is fundamental, it is time to introduce it early on in learning about programming. It should be part of teaching computer programming, not tacked on afterwards.

My first languages were assembly and BASIC. Neither particularly taught structured programming or data structures. My professors whined about getting students who needed to be re-taught — some said it was worse than having us come in knowing nothing. I’m not sure I agree — but it’s absolutely true that teaching minds that are uncluttered has advantages.

I can tell you that the students we got were definitely ready for the material!

HPCwire: Regarding the involvement of Bank of America, it’s odd (but great) to see someone from the user side of the community so active in leading this effort. How did the concept develop? What made Intel want to team up with Birnbaum and BoA on this?

Reinders: Jeff Birnbaum is an energetic guy who’s hard to say “no” to! I really enjoy his enthusiasm — he knew we had an interest in teaching students, but we were focused on universities. About the time we were crossing the one-thousand-university mark in our teach-parallel program (we started with 40 universities in 2006), Jeff was the one who told us we should take this to a high school. For a few of us, he didn’t need to twist our arms, and we knew that actions speak louder than words.

Jeff pushed us to think about this seriously, secured equipment support from IBM and Blade Network Technologies, and taught part of day three himself — bringing the concepts together and applying them in depth by analyzing some real code he shared from his work. Jeff also encouraged us to come out to New York City to teach at a high school — and that seemed like a fine idea, especially after we had the great fortune to hook up with Randy Asher, the principal at Brooklyn Technical High School.

HPCwire: Is this a one-off event, or will there be others? How did you get hooked up with Brooklyn Technical High School?

Reinders: I’m sure we’ll do more, but I’m not sure of the exact form. We will be taking the student and teacher feedback and seeing what we can do. The small team of Intel engineers who taught this are top-notch experts with full-time jobs. We might be able to sneak out of our jobs a time or two without our management missing us too much, but we might be missed if we tried to do this a lot more! With the universities, we started small and learned how to scale. We learned what worked and how to teach others to teach by sharing what we learned and developed. That’s the partnership that worked with universities. Something like that might be in the future for high schools. I hope so. Lots of work lies ahead to make it happen.

We actually had no pre-existing relationship with Brooklyn Technical High School — we got a few contacts in the New York school system and made a few cold calls. Next thing you know, we were talking with Randy Asher. He’s the type of principal you want at a school — a huge advocate for challenging students and infectious in his commitment to making things happen. We were hooked. Randy’s already talking about doing this again, and for more time (so students could earn credit), etc. I know all of us from Intel found it rewarding, and with Jeff and Randy pushing now, well, we probably have to do this again!

HPCwire: The workshop looks like three full days of student time — how did the school respond to this? Do the students have to make up the time?

Reinders: We did it during the summer, so the students didn’t have to take time off school. Doing it during the school year might be an option next time. One way, we have to compete with summer jobs and summer vacations; the other way, we’d need to have them take time off from school for school (which seems a bit funny). I’m not sure — but I suspect one or two weeks in summer will tend to work out better until we figure out how to incorporate the content into regular computer science classes in high school. I think we’re learning things that will let us consider both.

HPCwire: Now that the workshop has concluded, can you give me some reactions from the teachers and students? Did they seem to get it? Enjoy it? Were they good at it? What did Intel learn about teaching parallel programming to this age group that will help shape your next event?

Reinders: We had 16 high school students plus five high school teachers in our class. We ultimately mixed teachers and students — and that worked very well. We did a short three-day course, but could easily have expanded it to a couple of weeks.

One student told us the first day was boring; another said it was the best thing ever. They both actually got something out of it, but had different expectations. Their direct feedback is helpful. There is a lot of “let’s just DO IT” energy in the room. They had great attention spans and were very engaged, but were constantly eager to work on the computers. More lab time would have been popular, I think; it was a little less than half the classroom time over the three days. We split the rest of the time between lectures and some hands-on exercises that simulate computer algorithms with activities — to help make things intuitive.

They all did very well. No student was “lost” by any means. Each exercise challenged each student in a different way — but ultimately they all understood the concepts and learned what we were hoping they would. By the third day, we had them changing our “Destroy the Castle” program (http://software.intel.com/en-us/articles/code-demo-destroy-the-castle/) — and we saw a lot of knowledge being used that they didn’t have on the first day of class. They added parallelism and improved the game a great deal (we gave it to them with the parallel programming removed from our downloadable version).
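[The students’ actual code isn’t reproduced here, but the basic move involved in adding parallelism to a program like this, splitting a loop of independent iterations across cores, can be sketched in a few lines of C++ with OpenMP. The Block struct and update functions below are hypothetical stand-ins, not the real “Destroy the Castle” sources.]

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a game's moving pieces -- not the demo's real types.
struct Block {
    float x, y;    // position
    float vx, vy;  // velocity
};

// Serial per-frame update: every iteration is independent of the others.
void update_serial(std::vector<Block>& blocks, float dt) {
    for (std::size_t i = 0; i < blocks.size(); ++i) {
        blocks[i].x += blocks[i].vx * dt;
        blocks[i].y += blocks[i].vy * dt;
    }
}

// Parallel version: because iterations don't share data, they can be split
// across cores with a single OpenMP pragma (compile with -fopenmp).
void update_parallel(std::vector<Block>& blocks, float dt) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(blocks.size()); ++i) {
        blocks[i].x += blocks[i].vx * dt;
        blocks[i].y += blocks[i].vy * dt;
    }
}
```

[The key property is that no iteration touches another iteration’s data, so the iterations can run in any order on any core; spotting that property is exactly the kind of reasoning the “think parallel” mindset is meant to build.]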

What did we learn? We validated that teaching at the high school level is appropriate. Those of us teaching got a better handle on pacing, and that will help us. We really reinforced the need to present basic concepts multiple ways (to drive home what a “data race” is, and what “task decomposition” means). We debated the relative merits, timing, etc. — I’m sure we have a better feel for it now. We also know that if we expand the class, we would make hands-on work a higher percentage of the time. I’m interested in talking more with the high school teachers we had in our class to see what further feedback they have. I think we have to digest the experience and feedback more — and we’ve learned a lot that will help us next time.
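[For readers who haven’t met the terms: a data race occurs when two threads access the same memory location at the same time and at least one access is a write. The sketch below is a generic C++ illustration of the problem and one standard fix; it is not material from the course.]

```cpp
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    const int kIters = 1000000;

    // Racy version: two threads increment a plain int. The increment is a
    // non-atomic read-modify-write, so one thread can overwrite the other's
    // update and the final total is unpredictable.
    int racy = 0;
    std::thread a([&] { for (int i = 0; i < kIters; ++i) ++racy; });
    std::thread b([&] { for (int i = 0; i < kIters; ++i) ++racy; });
    a.join();
    b.join();

    // Fixed version: an atomic counter makes each increment indivisible.
    std::atomic<int> safe{0};
    std::thread c([&] { for (int i = 0; i < kIters; ++i) ++safe; });
    std::thread d([&] { for (int i = 0; i < kIters; ++i) ++safe; });
    c.join();
    d.join();

    std::cout << "racy: " << racy              // usually less than 2000000
              << "  safe: " << safe.load()     // always exactly 2000000
              << '\n';
    return 0;
}
```

[Task decomposition, the other concept mentioned, is the complementary skill of splitting a problem into largely independent chunks of work, so that threads rarely need to share mutable state in the first place.]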

Bottom line: if someone else considers doing this, we have experience that we would share to help! Now, if we can find the time to write it down, we will. Hopefully soon.

HPCwire: Was the hardware taken on site? Were there logistical challenges (from the mundane, like cabbing a cluster across NY, to the specific, like was there enough power) and how did you address them?

Reinders: As funny as a 32-core cluster in a New York taxi would have been, we didn’t go that way. We used the power of the Internet. Blade Network Technologies loaned space and expertise in their Santa Clara facility, IBM provided the 32-core cluster, and Intel provided four 8-core machines; we used them remotely over the Internet from New York. In New York, we used BTHS machines (every student had a dual-core machine), and we brought along 16 dual-core laptops and an 8-core machine, which we shipped ahead to BTHS. One of the benefits of being at BTHS was that they had plenty of power and cooling capacity. That is often a concern as you look for a place to teach, but at BTHS it was not an issue — they have fantastic facilities, which made this easy. And their alumni association kicked in money to feed the students lunch, which was very much appreciated too.

HPCwire: I suspect this effort might inspire others to do something similar in their own community. Would you welcome others to adopt the curriculum in their own communities? What kind of support would there be for those wanting to take such a step?

Reinders: I’d love to be contacted by people with a serious interest in doing this at other high schools. They can drop a note to me at Intel — e-mail: [email protected].

HPCwire: I know that Intel has a big effort in education, so I’m going to do the natural thing and ask you to boil it down to a couple sentences so readers are more aware of the range of your efforts.

Reinders: We know that young people are the key to solving global challenges, and a solid math and science foundation coupled with skills such as critical thinking, collaboration, and digital literacy is crucial for their success. That is why we get directly involved in education programs, advocacy, and technology access to enable tomorrow’s innovators. Intel’s education outreach includes K-12 education, education competitions, higher education, and outside-the-classroom programs — see http://intel.com/education. Focusing on our outreach for computer science instruction in particular — we have our Intel Academic Program (“Teach Parallel”), which instructors can learn more about at http://intel.com/software/college.
