Software Carpentry for Scientists and Engineers

By Nicole Hemsoth

August 4, 2006

HPCwire recently had the opportunity to talk with Greg Wilson, author of the book “Data Crunching.” In this interview, he describes his “Software Carpentry” course, a primer for scientists and engineers who are not professional software developers. This audience often spends a lot of time developing, debugging and maintaining programs, but doesn't have the computer science background to do those tasks effectively or efficiently. Wilson teaches them about the software development process as well as basic proficiency in higher-level programming languages like MATLAB and Python. This allows scientists and engineers to increase quality and productivity, while at the same time taking advantage of the latest advancements in hardware.

HPCwire: What is the history behind the “Software Carpentry” course you've developed and who is the course targeted for?

Wilson: The idea for the course goes all the way back to work I did at the Edinburgh Parallel Computing Centre in the late 1980s and early 1990s. My job was to help parallelize scientific software, but after a while, I realized the real problem was that most scientists and engineers simply didn't know how to develop code efficiently: they'd never heard of version control, they didn't know how to test things, they did repetitive tasks by hand instead of writing little scripts for them, and so on.

Starting in about 1997, Brent Gorda (now at Lawrence Livermore National Laboratory) and I began teaching short courses on basic software development skills at Los Alamos National Laboratory. The response was mixed: some participants, particularly younger ones, ate it up, while others felt that what they'd been doing for the last twenty years was good enough. Over the next three or four years, we figured out which parts of standard industrial software development models actually made sense for HPC, and which didn't. I then taught the course at a few other places (most recently the University of Toronto), and later received a grant from an open source organization called the Python Software Foundation to tidy up the notes and make them freely available over the Web.

In its current form, the course is aimed at people who have backgrounds in science or engineering, but not computer science, and who now find themselves having to write or maintain complex programs. It covers basic tools, including the Unix shell, make, and version control; introduces students to the Python scripting language; and then goes on to look at data crunching, integration, web programming, security, best practices for small teams, and other practical issues. All the material is freely available at our website, http://swc.scipy.org, and like most open source projects, we'd welcome more contributions.

HPCwire: How much time would it take for the average non-professional programmer to take the entire course?

Wilson: It takes me about an hour to deliver each of the two dozen or so lectures, and I expect that students will spend two to three hours on exercises for each. When I teach the course again this coming winter, I'll be recording the lectures and posting the MP3s to the web, so people outside Toronto who want to follow along will be very welcome to join in. We had about 70 out-of-towners downloading the lectures and chipping in on the mailing list last fall.

HPCwire: What is the difference between “Software Carpentry” and software engineering?

Wilson: Scale. This course doesn't try to teach people how to build the electronic equivalent of the Channel Tunnel; it's more about drywalling the basement. If you're developing code on your own for your thesis, or working in a team of half a dozen people for a year or two, most of what you need to know is in this course.

HPCwire: As software developers, what do scientists and engineers seem to lack most?

Wilson: A mental picture of how they ought to be working. Once you've seen a project that has version control, bug tracking, and a few other odds and ends integrated together, and whose members are actually writing unit tests, it's obvious that you should be working that way. The problem is most scientists and engineers never see a project that's being run that way. There are a few exceptions — high-energy physics, for example, has a much better track record than most disciplines — but those are exceptions.

HPCwire: Can you give us some of the highlights of the course you've developed?

Wilson: The highlight for me is the last two lectures, which cover the development processes and tools that small teams should use. In a way, everything before that is plowing the field — those lectures are the seeds we really want to plant in students' minds.

What students seem to appreciate most, though, is the discussion of the interplay between quality and productivity. A lot of computational science these days isn't really science: few computational “experiments” are reproducible in any sense, and only a handful of scientists and engineers have any idea how reliable their programs are. I'm convinced that sooner or later this is going to lead to some sort of “computational thalidomide,” but as long as journal and conference editors don't ask questions about the software used to produce results, people who invest time in QA are at a disadvantage compared to people who don't. I've therefore learned to emphasize that investing in quality up front makes programmers more productive: if you do more testing, you'll have to do less debugging, and you'll get more results published per unit of time. That is something that grad students understand.
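
To make that idea concrete, here is a minimal sketch of the kind of unit test the course encourages, using Python's built-in unittest module. The mean() function and its tests are hypothetical illustrations, not taken from the course materials:

    import unittest


    def mean(values):
        """Return the arithmetic mean of a non-empty sequence of numbers."""
        if not values:
            raise ValueError("mean() requires at least one value")
        return sum(values) / float(len(values))


    class TestMean(unittest.TestCase):
        # Each test documents one piece of expected behaviour; if a later
        # change breaks it, the failure shows up immediately instead of
        # surfacing during a long debugging session.
        def test_typical_values(self):
            self.assertAlmostEqual(mean([1.0, 2.0, 3.0]), 2.0)

        def test_single_value(self):
            self.assertAlmostEqual(mean([42.0]), 42.0)

        def test_empty_input_is_an_error(self):
            self.assertRaises(ValueError, mean, [])


    if __name__ == "__main__":
        unittest.main()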

HPCwire: In the HPC space, there's a great deal of reliance on C and Fortran (with MPI). Yet most HPC end-users are not professional software developers and would probably be more productive by using higher-level programming environments like MATLAB, Python, etc. What's your perspective on this?

Wilson: I completely agree, and I think that people like Paul Dubois (formerly at LLNL) have demonstrated how much more science you can get done if you write most of your application in a high-level language, then rewrite the performance-critical bits in something like C once you're sure they're actually performance-critical.
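
A minimal sketch of that workflow in Python, using the standard library's cProfile module to check where the time actually goes before any rewriting begins; simulate() and inner_kernel() are hypothetical stand-ins, not code from any of the projects mentioned:

    import cProfile
    import pstats


    def inner_kernel(n):
        # Stand-in for a suspected numeric hot spot.
        total = 0.0
        for i in range(n):
            total += i * i
        return total


    def simulate():
        # Stand-in for the bulk of the application logic.
        return [inner_kernel(10000) for _ in range(100)]


    # Measure first: only rewrite inner_kernel() in C if the profile
    # shows it actually dominates the runtime.
    cProfile.run("simulate()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)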

You can actually put some math behind this if you want to. Everyone in HPC knows Amdahl's Law: if the serial fraction of your code is 's', and the sequential runtime is 't', then the runtime on 'p' processors is st + (1-s)t/p, and the speedup as 'p' goes to infinity is limited by '1/s'. What most people fail to recognize is that Amdahl was only thinking of a single run of the program — which isn't surprising, since he was a hardware engineer. In the real world, programs have to be developed, debugged and maintained, and none of these processes is sped up very much by parallel hardware.
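
In Python, that arithmetic looks like the following minimal sketch; the 5 percent serial fraction is an illustrative value, not a figure from the interview:

    def amdahl_runtime(s, t, p):
        # Runtime on p processors for serial fraction s, sequential runtime t.
        return s * t + (1.0 - s) * t / p


    def amdahl_speedup(s, p):
        # Speedup on p processors; tends to 1/s as p goes to infinity.
        return 1.0 / (s + (1.0 - s) / p)


    # Even a modest 5% serial fraction caps the speedup at 1/s = 20,
    # no matter how many processors you add.
    for p in (4, 16, 256, 10**9):
        print(p, round(amdahl_speedup(0.05, p), 2))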

So, if you let 'T' be the program's total runtime over its life, and 'D' be its development time, the total time on 'p' processors is sT + (1-s)T/p + D, and the lifetime speedup with infinite hardware is limited to 1/(s + D/T). This says that the more time you spend developing rather than running, the less advantage you're actually taking of all that hardware you spent so much money on. If students come out of the course understanding this, and knowing how to reduce 'D', I'll be happy.
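
Extending the sketch above with the lifetime formula; the values of T and D below are illustrative:

    def lifetime_speedup(s, T, D, p):
        # Speedup over the program's whole life: total sequential runtime T
        # divided by the parallel runtime plus the development time D.
        return T / (s * T + (1.0 - s) * T / p + D)


    # If development takes a tenth as long as the program's lifetime
    # runtime (D = T/10), even perfectly parallel code (s = 0) can never
    # deliver more than a 10x lifetime speedup: 1/(s + D/T) = 1/0.1 = 10.
    for p in (16, 256, 10**9):
        print(p, round(lifetime_speedup(0.0, 1000.0, 100.0, p), 2))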

HPCwire: How do you think computer hardware advances — especially the rise of multi-core processors — will affect programming tools and the way programming is done?

Wilson: As things stand, they're going to increase development time for all but embarrassingly parallel applications. For example, I've been writing parallel programs for twenty-one years now, and still haven't found a parallel debugger that actually helps. New debugging and analysis techniques, like those discussed in Andreas Zeller's recent book [1], are going to be essential, but unless scientists and engineers master the tools and techniques we already have, all we're going to see is more and more wasted cycles.

[1] Andreas Zeller, "Why Programs Fail: A Guide to Systematic Debugging," Morgan Kaufmann, 2005.

-----

Greg Wilson holds a Ph.D. in Computer Science from the University of Edinburgh, and has worked on high performance scientific computing, data visualization, and computer security. His most recent book is "Data Crunching" (Pragmatic Bookshelf, 2005), and he is now developing a course on basic software development skills for scientists and engineers. Greg is a freelance consultant, a contributing editor with "Dr. Dobb's Journal," and an adjunct professor in Computer Science at the University of Toronto.
