October 13, 2011
Computer scientist Dennis Ritchie, who drove the design of the C programming language and the UNIX operating system, died over the weekend at the age of 70. While not a household name, Ritchie, along with cohorts Ken Thompson and Brian Kernighan, helped create the foundation for much of modern computing.
Ritchie is perhaps best known as the co-author of the book The C Programming Language, which has served as the bible for C programmers ever since its publication in 1978. Kernighan, his co-author, credited Ritchie with the design of the language itself, though.
While many criticized the admittedly dangerous features of C (dangling pointers, unbounded array copies, promiscuous data type casting, and so on), it became popular in part because it was a no-holds-barred language that gave the programmer free rein to pursue all sorts of mischief -- both good and bad. That reflected the US software culture of the times.
While UNIX and (especially) C remain prevalent in the current computing landscape, they are not nearly as dominant as they were in the last two decades of the 20th century. But the next generation of languages and operating systems, C++ and Linux in particular, traces a direct lineage back to those foundational technologies.
Even other languages, whether they resemble C syntax or not, usually come with C bindings so they can tap into the rich set of libraries developed over the last three decades. Today, it's hard to imagine a software stack in the computing industry without C and UNIX and the technologies they spawned.
Both C and UNIX also benefited from being born at the right time. The late 70s and early 80s marked the rise of the enterprise server build-out (not to mention the PC market). Importantly, these technologies enabled an open systems model for the industry. UNIX was one of the first major software projects whose source code circulated widely, which led to its adoption at universities and research centers.
Ritchie regarded C and UNIX as historical accidents though. From his perspective, the world was just ready to embrace these technologies because of their ease of distribution and openness. "Somehow both hit sweet spots," he said, in an interview with ITworld back in 2000. "The longevity is a bit remarkable."
The open nature of the software enabled companies like IBM, HP, Sun Microsystems, and other OEMs to build platforms on which applications could be compiled with C and run on various UNIX OS flavors (HP-UX, IBM AIX, Sun Solaris) more or less unmodified. I say more or less because, in truth, these commercial UNIX variants diverged in ways that often made it difficult to move applications freely from one vendor's system to another. With the emergence of Linux, UNIX's open source offspring, the technology became more standardized, and in the process, even more widely disseminated.
For all of Ritchie's influence on the industry, he never really became a pop icon in the manner of Steve Jobs or Bill Gates. Unlike Jobs and Gates, who led their respective companies to fame and fortune, Ritchie was the archetypal computer scientist -- the guy who came up with all the great ideas, upon which others built great empires.
To programmers of his time though, he was a hero. A Harvard grad with degrees in physics and applied math, Ritchie was one of the best and the brightest of the new breed of computer geeks making their way into the world of the late 60s. It was in 1969, while working at AT&T's Bell Labs, that he, along with colleagues Ken Thompson, Brian Kernighan, Douglas McIlroy, and Joe Ossanna, developed the UNIX operating system.
In 1983 he received the Turing Award. Five years later, he was elected to the National Academy of Engineering for the development of the C language and for co-developing UNIX. He was subsequently awarded the National Medal of Technology in 1998. In January of 2011, Ritchie, along with Ken Thompson, was awarded the Japan Prize for Information and Communications for their UNIX work.
Despite these accolades, Ritchie remained humble. For would-be language inventors he had this advice:
"Don't have any expectations that anyone will use it, unless you hook up with some sort of organization in a position to push it hard. It's a lottery, and some can buy a lot of the tickets. There are plenty of beautiful languages (more beautiful than C) that didn't catch on. But someone does win the lottery, and doing a language at least teaches you something."
Posted by Michael Feldman - October 13, 2011 @ 5:42 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.