February 19, 2010
A recent article in InfoWorld about the shrinking population of older IT workers hit me especially close to home. As a former programmer -- pardon me, software engineer -- who left the field in my mid-forties, I was interested in learning why the IT industry tends to shed its older, more experienced workers. According to the article's author, Lisa Schmeiser, the reasons for this phenomenon are not what you might think.
For example, while age discrimination is alive and well, older workers, in general, have lower unemployment rates and higher salaries compared to their younger counterparts. In fact, the more money you make, the less likely you are to be unemployed. (This is true throughout the labor pool, not just the IT sector.) This would suggest that the industry should be well-populated with middle-aged techies. But apparently that's not the case. Schmeiser writes:
A late-1990s study by the National Science Foundation and Census Bureau found that only 19 percent of computer science graduates are still working in programming once they're in their early 40s. This suggests serious attrition among what should be the dominant labor pool in IT.
The idea that IT shops are filled with gray-bearded Unix geeks is a relic of the past. Today those same organizations are more likely to be populated with twenty-something Linux programmers.
Schmeiser cites some possible reasons the industry is shifting to a younger workforce, including a changing IT culture, the perceived lower price-performance of older workers, the devaluation of technical experience and skills, and the changing nature of the IT job. In fact, all of these are related and have a lot to do with the shift from an engineering-focused culture to a business-focused culture as IT companies mature. In such an environment, tech workers become commodities, and the older ones tend toward obsolescence.
The attitude is summed up by this gem of a quote attributed to former Intel CEO Craig Barrett: "The half-life of an engineer, software or hardware, is only a few years." The implication is that years of experience with one set of technologies -- programming language, hardware architecture, what have you -- are not applicable to the next job, so there is little reason to value such experience.
The result is that the more skilled, more specialized, and more expensive workers tend to get laid off first during a precipitating event, like when a company downsizes or shifts to a new set of products and technologies. Absent a layoff, the workers themselves often leave of their own accord as they are forced to accommodate new responsibilities or change their work habits. Schmeiser concludes:
Thus, the harsh reality may be that IT jobs -- at least as they're defined now -- may be perpetually entry-level.
The entire article is worth a read, especially if you're a young programmer or engineer wondering what your career has in store for you. Of course, a follow-up piece on how to manage such a career would surely be appreciated. But that's likely to require a much longer article.
Posted by Michael Feldman - February 19, 2010 @ 8:38 AM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.