Musings on the Nature of Software

By Michael Feldman

June 16, 2006

“Software bugs are part of the mathematical fabric of the universe. It is impossible with a capital 'I' to detect or anticipate all bugs.”

So says Ben Liblit, an assistant professor of computer sciences at the University of Wisconsin-Madison. The article describing his work appears in this week's issue of HPCwire.

Liblit's method for detecting software misbehavior enlists people with real applications to help attack bugs in their natural habitat. He does this by allowing users to define the nature of the bugs themselves (crashing, hanging, invalid output, and so on) and then instrumenting the application code accordingly so that it can capture the error condition as it occurs. The results are then gathered and analyzed to help identify the bugs and correct the code.
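The flavor of this approach can be conveyed in a few lines. The sketch below is a hypothetical illustration, not Liblit's actual implementation (his Cooperative Bug Isolation work instruments compiled binaries); the class and site names are invented for the example. The key idea it shows is sparse random sampling: each instrumentation site records whether a predicate held, but only on a small fraction of visits, so deployed users pay almost no overhead, and reports pooled from many runs reveal which predicates correlate with failures.

```python
import random

class BugReport:
    """Toy sketch of sampled predicate counting, in the spirit of
    cooperative bug isolation. Each site tallies how often a predicate
    was observed true vs. false, subject to random sampling."""

    def __init__(self, sampling_rate=0.01):
        self.rate = sampling_rate
        # site label -> (times observed true, times observed false)
        self.counts = {}

    def observe(self, site, predicate):
        """Record `predicate` at `site`, but only on a sampled fraction
        of visits, keeping runtime overhead low for end users."""
        if random.random() < self.rate:
            true_n, false_n = self.counts.get(site, (0, 0))
            if predicate:
                self.counts[site] = (true_n + 1, false_n)
            else:
                self.counts[site] = (true_n, false_n + 1)

# Usage inside instrumented application code (rate set to 1.0 here
# so the illustration is deterministic):
report = BugReport(sampling_rate=1.0)
for x in [3, -1, 4, -5]:
    report.observe("x < 0 before sqrt", x < 0)
```

After many user runs, predicates that are disproportionately true in failing executions point the developer toward the bug.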

Today Liblit's work is being used by the open source community as a way to do more rigorous post-deployment debugging on a variety of applications. Apparently it has also attracted the attention of IBM and Microsoft.

And me as well. I recently contacted Liblit to get his perspective on why software continues to be such a problematic piece of the information technology puzzle. In high performance computing, we tend to focus on the challenges of injecting parallelism into our code, but HPC also shares the larger problem of overall software quality. And as HPC applications become more complex in order to address multifaceted problems, the challenge to develop quality software will increase.

Liblit illustrates the basic limitation of software using the “halting problem,” which can be described as follows: Given a program and its initial input, determine whether the program ever halts or runs forever. Seventy years ago, Alan Turing proved mathematically that no algorithm can solve the halting problem. In essence, any program that claims to tell you whether other programs halt must itself be wrong about some program. This may seem like an inconvenient factoid for computer scientists, but it reveals a fundamental limitation for anyone who develops software.
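Turing's diagonal argument can be sketched in a few lines of Python. Given any claimed halting decider, we can mechanically build a program that does the opposite of whatever the decider predicts about it, so the decider must be wrong somewhere. (The function names here are illustrative, not from any real library.)

```python
def make_trouble(halts):
    """Given any claimed halting decider `halts(f)`, construct a program
    that the decider necessarily misjudges (Turing's diagonal argument)."""
    def trouble():
        if halts(trouble):
            while True:   # loop forever exactly when the decider says we halt
                pass
        # otherwise: halt immediately, contradicting the decider
    return trouble

# A toy "decider" that claims every program runs forever:
claims_never_halts = lambda f: False

trouble = make_trouble(claims_never_halts)
trouble()  # returns immediately, so the decider was wrong about this program
```

Had the toy decider instead claimed that `trouble` halts, `trouble` would loop forever. Either way the decider fails on the very program built from it, which is why no general-purpose halting detector can exist.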

“Mathematically it is impossible to take a non-trivial piece of code and prove that it never hangs,” says Liblit. “It's not that we haven't been smart enough to figure out how to do it; we're smart enough to have figured out that it can't be done!”

Liblit goes on to characterize software as a chaotic system, with extreme sensitivity to initial conditions. That means it's very hard to predict how it is going to behave during execution. And that's why, despite all sorts of software testing methodologies that are being used today, bugs continue to inhabit our production code.

This got me thinking about the nature of the hardware-software dichotomy, which seems especially noticeable in high performance computing but exists across the entire IT industry. And that leads to the question: Why is hardware advancing so rapidly while software is not? While processors increase in performance every year, the code running on them is not much better than it was ten years ago. There is no Moore's Law for software.

This is not to suggest that hardware doesn't fail. But hardware failures mostly involve physical breakdowns — crashing disks, dropping bits, etc. The Mean Time Between Failure (MTBF) characteristic is usually well accounted for during system design. For example, Google's cluster management software expects servers to malfunction on a regular basis and can reroute search engine processing rather transparently. These types of problems are manageable because they're predictable.

Hardware logic errors are more rare, but they do occur. For example, the famous Pentium floating-point-divide bug of 1994 precipitated a chip recall. But why aren't these types of problems seen more frequently? There may be a few things at work here. One is that there's so much more software logic than hardware logic in the world. For every microprocessor, like the Pentium, there are thousands or tens of thousands of applications. And the software developers that wrote those applications probably didn't perform the level of testing that Intel applied to its Pentium chip design.
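The FDIV bug is a case in point of how narrow a hardware logic error can be. The widely circulated test for the flaw was a single division using a specific operand pair; the snippet below reproduces that check. On a correct FPU the residue is effectively zero, while a flawed Pentium reportedly returned a value of roughly 256.

```python
# The classic FDIV test case: divide, multiply back, and check the residue.
x, y = 4195835.0, 3145727.0
residue = x - (x / y) * y
# Effectively zero on a correct FPU; roughly 256 on a flawed 1994 Pentium,
# because the chip's divider looked up wrong entries in its SRT table
# for certain operand bit patterns.
print(residue)
```

That the bug hid in a handful of bad lookup-table entries, and still slipped past Intel's validation, underlines how rare and subtle surviving hardware logic errors tend to be.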

Another difference is that many applications are more complex than a typical CPU, in some cases much more so. On my PC at work, the Windows XP OS and some of the associated applications are regularly updated with patches, presumably to fix software problems. To its credit, XP is much more stable than its predecessors in terms of crash frequency, but new bugs are being discovered weekly. This is not too surprising. XP along with the applications on a typical PC workstation represent tens of millions of lines of source code.

Don't make the mistake of thinking processors are getting more complex because the transistor count is going up. Today, the increase in transistors mostly has to do with adding cores and increasing cache size. These don't add logic complexity. The new “Montecito” Itanium microprocessor contains about 1.7 billion transistors, but only about 20 million or so are in the CPU logic. In fact, the move to multi-core should actually make the hardware simpler, since each core is expected to do proportionately less work.

Software is heading in the other direction. As users demand more features and functionality from their applications, the code gets ever more complex. Windows NT 3.1 had around 6 million lines of source code; Windows XP contains over 40 million lines. But as programs become more complex, they also become more susceptible to bugs. The public perception is that the hardware makers are heroes, while the software developers have let us down.

Even within the industry, there seems to be a perception that hardware and software are symmetrical elements of a computing system. The expectation is that both technologies should be able to advance in concert. But the symmetry is an illusion. Processors have become multi-core as part of a well-defined technology roadmap. Meanwhile, the corresponding move to application parallelism has become a crisis. Software seems to be much more resistant to engineering than hardware.

“I don't know that we're doing a very good job of communicating that to the public, and maybe to software engineers,” says Liblit. “I don't think software engineers appreciate the near impossibility of doing their job right.”

But it's not hopeless. Software is getting more robust. Again, just look at XP. Applications don't have to be perfect to be useful. The text editor program I'm using to compose this article occasionally goes a little nutty and adds a bunch of blank characters at the end of the file. I just delete them and go on.

But some users can't afford to be so forgiving. If your application is managing a stock portfolio for thousands of investors or controlling a nuclear warhead, losing track of data can have serious consequences. Code for mission-critical systems must be held to a higher standard, and safety-critical code to an even higher one. Productivity is one thing, but when someone's money or life is at stake, buggy software is not an option. Software engineering advancements are truly needed. Are any solutions emerging? The answer to that will have to wait for a future article.

—–

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
