Software Carpentry for Scientists and Engineers

By Nicole Hemsoth

August 4, 2006

HPCwire recently had the opportunity to talk with Greg Wilson, author of the book “Data Crunching.” In this interview, he describes his “Software Carpentry” course, a primer for scientists and engineers who are not professional software developers. This audience often spends a lot of time developing, debugging and maintaining programs, but doesn't have the computer science background to do those tasks effectively or efficiently. Wilson teaches them about the software development process and helps them gain basic proficiency in higher-level programming languages like MATLAB and Python. This allows scientists and engineers to increase quality and productivity, while at the same time taking advantage of the latest advancements in hardware.

HPCwire: What is the history behind the “Software Carpentry” course you've developed and who is the course targeted for?

Wilson: The idea for the course goes all the way back to work I did at the Edinburgh Parallel Computing Centre in the late 1980s and early 1990s. My job was to help parallelize scientific software, but after a while, I realized the real problem was that most scientists and engineers simply didn't know how to develop code efficiently: they'd never heard of version control, they didn't know how to test things, they did repetitive tasks by hand instead of writing little scripts for them, and so on.

Starting in about 1997, Brent Gorda (now at Lawrence Livermore National Laboratory) and I began teaching short courses on basic software development skills at Los Alamos National Laboratory. Response was mixed: some participants, particularly younger ones, ate it up, while others felt that what they'd been doing for the last twenty years was good enough. Over the next three or four years, we figured out which parts of standard industrial software development models actually made sense for HPC, and which didn't. I then taught the course at a few other places (most recently the University of Toronto), and later received a grant from the Python Software Foundation, an open source organization, to tidy up the notes and make them freely available over the Web.

In its current form, the course is aimed at people who have backgrounds in science or engineering, but not computer science, and who now find themselves having to write or maintain complex programs. It covers basic tools, including the Unix shell, make and version control; introduces students to the Python scripting language; and then goes on to look at data crunching, integration, web programming, security, best practices for small teams, and other practical issues. All the material is freely available at our website, http://swc.scipy.org, and, as with most open source projects, we'd welcome more contributions.

HPCwire: How much time would it take for the average non-professional programmer to take the entire course?

Wilson: It takes me about an hour to deliver each of the two dozen or so lectures, and I expect that students will spend two to three hours on exercises for each. When I teach the course again this coming winter, I'll be recording the lectures and posting the MP3s to the web, so people outside Toronto who want to follow along will be very welcome to do so; we had about 70 out-of-towners downloading the lectures and chipping in on the mailing list last fall.

HPCwire: What is the difference between “Software Carpentry” and software engineering?

Wilson: Scale. This course doesn't try to teach people how to build the electronic equivalent of the Channel Tunnel; it's more about drywalling the basement. If you're developing code on your own for your thesis, or working in a team of half a dozen people for a year or two, most of what you need to know is in this course.

HPCwire: As software developers, what do scientists and engineers seem to lack most?

Wilson: A mental picture of how they ought to be working. Once you've seen a project that has version control, bug tracking, and a few other odds and ends integrated together, and whose members are actually writing unit tests, it's obvious that you should be working that way. The problem is that most scientists and engineers never see a project being run that way. There are a few exceptions — high-energy physics, for example, has a much better track record than most disciplines — but they are exceptions.

HPCwire: Can you give us some of the highlights of the course you've developed?

Wilson: The highlight for me is the last two lectures, which cover the development processes and tools that small teams should use. In a way, everything before that is plowing the field — those lectures are the seeds we really want to plant in students' minds.

What students seem to appreciate most, though, is the discussion of the interplay between quality and productivity. A lot of computational science these days isn't really science: few computational “experiments” are reproducible in any sense, and only a handful of scientists and engineers have any idea how reliable their programs are. I'm convinced that sooner or later this is going to lead to some sort of “computational thalidomide,” but as long as journal and conference editors don't ask questions about the software used to produce results, people who invest time in QA are at a disadvantage compared to people who don't. I've therefore learned to emphasize that investing in quality up front makes programmers more productive: if you do more testing, you'll have to do less debugging, and you'll get more results published per unit of time. That is something that grad students understand.
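[To make the testing argument concrete, here is a minimal sketch, using only Python's standard unittest module, of the kind of small test Wilson is describing. The routine under test is a made-up example, not something from the course materials.]

```python
# A minimal sketch of the "more testing, less debugging" idea, using only the
# Python standard library. The function and test names are hypothetical.
import unittest


def running_mean(values, window):
    """Return the mean of each consecutive 'window'-sized slice of 'values'."""
    if window <= 0:
        raise ValueError("window must be positive")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]


class TestRunningMean(unittest.TestCase):
    def test_simple_window(self):
        # Three overlapping windows of size 2 over four values.
        self.assertEqual(running_mean([1, 2, 3, 4], 2), [1.5, 2.5, 3.5])

    def test_window_equals_length(self):
        self.assertEqual(running_mean([2, 4, 6], 3), [4.0])

    def test_bad_window_rejected(self):
        with self.assertRaises(ValueError):
            running_mean([1, 2, 3], 0)


if __name__ == "__main__":
    unittest.main()
```

[A few minutes spent on tests like these tend to catch slicing and boundary mistakes long before they contaminate a full production run.]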

HPCwire: In the HPC space, there's a great deal of reliance on C and Fortran (with MPI). Yet most HPC end-users are not professional software developers and would probably be more productive by using higher-level programming environments like MATLAB, Python, etc. What's your perspective on this?

Wilson: I completely agree, and I think that people like Paul Dubois (formerly at LLNL) have demonstrated how much more science you can get done if you write most of your application in a high-level language, then rewrite the performance-critical bits in something like C once you're sure they're actually performance-critical.
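[A minimal sketch of the workflow Wilson describes, with NumPy's compiled routines standing in for hand-written C; the function names and data are invented for illustration. The idea is to prototype everything in plain Python, profile it, and only then replace the measured hot spot with compiled code.]

```python
# A sketch of the "high-level first" workflow: write it simply, measure it,
# then swap only the proven hot spot for a compiled implementation.
import cProfile

import numpy as np


def pairwise_distances_python(points):
    """Readable prototype: easy to write, test and debug, but slow in pure Python."""
    return [[sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in points]
            for p in points]


def pairwise_distances_numpy(points):
    """Drop-in replacement that pushes the inner loops down into compiled code."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    points = rng.random((200, 3)).tolist()
    # Profiling the prototype first shows where the time actually goes, so only
    # the genuinely performance-critical piece gets rewritten.
    cProfile.run("pairwise_distances_python(points)", sort="cumulative")
    cProfile.run("pairwise_distances_numpy(points)", sort="cumulative")
```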

You can actually put some math behind this if you want to. Everyone in HPC knows Amdahl's Law: if the serial fraction of your code is 's', and the sequential runtime is 't', then the runtime on 'p' processors is st + (1-s)t/p, and the speedup as 'p' goes to infinity is limited by '1/s'. What most people fail to recognize is that Amdahl was only thinking of a single run of the program — which isn't surprising, since he was a hardware engineer. In the real world, programs have to be developed, debugged and maintained, and none of these processes is sped up very much by parallel hardware.

So, if you let 'T' be the program's total runtime over its life, and 'D' be its development time, then the total time on 'p' processors is sT + (1-s)T/p + D, and the lifetime speedup with infinite hardware is limited to 1/(s + D/T). This says that the more time you spend developing rather than running, the less advantage you're actually taking of all that hardware you spent so much money on. If students come out of the course understanding this, and knowing how to reduce 'D', I'll be happy.
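[The arithmetic is easy to check with a few lines of Python; the serial fraction, runtime and development figures below are made up purely for illustration.]

```python
# A small illustration, with made-up numbers, of the two formulas above.

def amdahl_speedup(s, p):
    """Classic Amdahl's Law: speedup of a single run on p processors."""
    return 1.0 / (s + (1.0 - s) / p)


def lifetime_speedup(s, p, T, D):
    """Speedup over the program's life: serial lifetime runtime T divided by
    the parallel lifetime runtime plus the development time D."""
    return T / (s * T + (1.0 - s) * T / p + D)


if __name__ == "__main__":
    s = 0.05      # serial fraction of the code
    T = 1000.0    # hours of runtime over the program's life
    D = 500.0     # hours spent developing, debugging and maintaining
    for p in (16, 256, 1_000_000):
        print(f"p={p:>9}: Amdahl {amdahl_speedup(s, p):6.1f}x, "
              f"lifetime {lifetime_speedup(s, p, T, D):5.2f}x "
              f"(limit {1.0 / (s + D / T):.2f}x)")
```

[With these figures the hardware ceiling is a 20x speedup, but the lifetime ceiling is under 2x, which is exactly the argument for learning how to reduce 'D'.]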

HPCwire: How do you think computer hardware advances — especially the rise of multi-core processors — will affect programming tools and the way programming is done?

Wilson: As things stand, they're going to increase development time for all but embarrassingly parallel applications. For example, I've been writing parallel programs for twenty-one years now, and still haven't found a parallel debugger that actually helps. New debugging and analysis techniques, like those discussed in Andreas Zeller's recent book [1], are going to be essential, but unless scientists and engineers master the tools and techniques we already have, all we're going to see is more and more wasted cycles.

[1] Andreas Zeller: “Why Programs Fail”. Morgan Kaufmann, 2005.

-----

Greg Wilson holds a Ph.D. in Computer Science from the University of Edinburgh, and has worked on high performance scientific computing, data visualization, and computer security. His most recent book is “Data Crunching” (Pragmatic Bookshelf, 2005), and he is now developing a course on basic software development skills for scientists and engineers. Greg is a freelance consultant, a contributing editor with “Doctor Dobb's Journal” and an adjunct professor in Computer Science at the University of Toronto.
