November 27, 2008
In our supposedly tech-driven economy, it's common to hear about computer professionals who have lost their jobs and are unable to find new work in their field. And this was occurring even before the recession. Is the IT industry really that much at odds with its own labor market? Surprisingly, yes.
In a recent InfoWorld advice column hosted by Bob Lewis, a reader talks about an increasingly hostile tech labor marketplace -- not only for workers with "legacy" skill sets, but even for those with more recent experience:
[I]t's not just the COBOL and Fortran programmers, the OS/360 and SCOPE dinosaurs. It's also the software architects; data-base architects; system and network administrators; PHP, Python, Ruby on Rails, and Objective-C software engineers; and heavy metal engineers who were presenting papers at national and international conferences one day, and pariah[s] the next.
The reader follows up with a familiar observation about the industry's indifference to providing employment continuity for the workforce:
The industry [executives have] made it clear. [They are] not interested in re-training the current workforce, which is likely adequate for its needs. No, it wants fresh bodies, preferably young or beholden ones willing to accept entry-level wages for long hours and who are either burdened with few family obligations or willing to pass them over... for the most part, companies are unwilling to re-train experienced programmers to fill available slots...
I've written about this on a few occasions, myself, in the context of the H-1B visa program for non-U.S. workers. But something else struck me when I read Lewis' response:
Since I try to avoid recommending solutions that require legislation, and also try to avoid moralizing in my writing, I recommend courses of action based on this being how the world works right now. People are products in the employment marketplace. If someone can't find a job, that means for one reason or another that person isn't a competitive product. The problem might be marketing, packaging, pricing, or a perceived lack of quality. Whatever it is, this is no different from any other marketplace -- it's up to the seller to package, price and market a product people want to buy.
Lewis says he's not unsympathetic to the techie's plight; he's just trying to be honest. And he makes a good point.
But casting people as products is not only demoralizing, it's wrong-headed, and it reflects some unfortunate attitudes in the IT community. Specifically, the conventional wisdom is that maximizing ROI takes precedence over maximizing innovation. That philosophy may work in a more mature industry that isn't subject to a lot of technological turnover, like, say, bubble gum manufacturing, but in the computing business it's just short-sighted.
Since tech workers are the ones who design the hardware, write the software, and provide the services, under-investing in them has some regrettable effects. The most visible example of this is the perennial "software crisis," which is currently playing out in the industry's attempt to apply parallel programming to the new raft of multicore and multiprocessor platforms. Moore's Law continues to double raw processing power every 18 months or so, but only a fraction of that gain is realized at the application level. Apparently, though, wasting cheap CPU cycles makes more sense than applying more human ingenuity to the problem.
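To put a rough number on that gap, Amdahl's law is the usual starting point. The sketch below is illustrative only; the 5 percent serial fraction is a hypothetical figure, not a measurement from any real application, but it shows how quickly extra cores stop paying off when nobody invests the programming effort to shrink the sequential part of the code.

    # Illustrative sketch only: Amdahl's law, the standard way to quantify why
    # extra cores do not translate directly into application-level speedup.
    # The 5 percent serial fraction is a hypothetical figure, not a measurement.
    def amdahl_speedup(serial_fraction: float, cores: int) -> float:
        """Maximum speedup when serial_fraction of the work cannot be parallelized."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    if __name__ == "__main__":
        serial_fraction = 0.05  # assume 5 percent of the code stays serial
        for cores in (2, 4, 8, 16, 64, 1024):
            print(f"{cores:5d} cores -> {amdahl_speedup(serial_fraction, cores):5.1f}x speedup")
        # Even with 1024 cores, a 5 percent serial fraction caps the application
        # at roughly a 20x gain; the extra silicon is mostly wasted unless
        # programmers invest the effort to shrink that serial fraction.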
To be fair, firms like Intel and Microsoft, with help from the government, are investing a ton of money in parallel programming R&D, but most companies are willing to let this be somebody else's problem. The answer for the industry is going to require the adoption of new software platforms and the training (or retraining) of workers. And that's going to filter down to everyone.
The relocation of computing into the cloud is another challenge that's going to require a lot of new software development, infrastructure buildout, and a whole new industry to service it. Hardware is the easy part. It's the extra labor that's going to be the bottleneck. If the IT community convinces itself and its customers that computing will be essentially free once it moves into the cloud, there will be little incentive to invest in human resources to make it happen.
I'm not suggesting that simply retraining old techies is going to be a magic bullet. But there has to be some realization that the industry cannot rely solely on cheap processors, "free" software, and disposable IT workers to create innovation. Ultimately, IT is a labor-intensive industry. The purpose of computer systems is not to eliminate jobs, it's to create value and increase productivity.
At the Supercomputing Conference and Expo last week, there was a panel discussion on disruptive technologies for exascale systems. It was revealing that the four technologies highlighted were all hardware-focused: flash storage, photonic communications, 3D chip stacking, and quantum computing. It's easy to become seduced by these inventions. Once they're designed and implemented, they can be mass-produced, with little human intervention. As expensive as semiconductor fabs are, they can work 24/7 and don't require health insurance and retirement benefits.
But clever software can make even great hardware humble. D-Wave CTO Geordie Rose, the panel's quantum computing advocate, argued that new algorithms can have a much bigger payoff than more powerful silicon. He noted that using Pollard's rho algorithm from 1977, it would take 12 years to factor a 90-digit number on a modern-day 400 teraflop Blue Gene supercomputer. But using the newer quadratic sieve algorithm, it would take just 3 years to perform the same operation on a 1977 Apple II computer. When you consider the multi-million dollar investment that went into the Blue Gene supercomputer compared to the modest investment that went into developing the new algorithm, you can get some sense of the industry's misplaced priorities.
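For the curious, the back-of-envelope complexity estimates behind that comparison are easy to reproduce. The sketch below uses textbook asymptotic operation counts, not figures from Rose's talk, and the constants are rough assumptions; still, it suggests the algorithm alone is worth roughly eight orders of magnitude, about the same factor that separates a 1977 microcomputer from a modern teraflop-class machine.

    # Back-of-envelope sketch only; all constants are rough textbook estimates,
    # not figures from the talk. Pollard's rho needs on the order of N**(1/4)
    # operations to factor N, while the quadratic sieve needs roughly
    # exp(sqrt(ln N * ln ln N)) operations.
    import math

    digits = 90
    ln_N = digits * math.log(10)  # natural log of a 90-digit number

    rho_ops = math.exp(ln_N / 4)                            # ~N**(1/4)
    sieve_ops = math.exp(math.sqrt(ln_N * math.log(ln_N)))  # L-notation estimate

    print(f"Pollard's rho:   ~{rho_ops:.1e} operations")
    print(f"Quadratic sieve: ~{sieve_ops:.1e} operations")
    print(f"Algorithmic gain: ~{rho_ops / sieve_ops:.0e}x fewer operations")
    # The ratio works out to roughly 10**8, comparable to the raw speed gap
    # between a late-1970s microcomputer and a teraflop-class machine, which is
    # why the better algorithm on ancient hardware can keep pace.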
Posted by Michael Feldman - November 26, 2008 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.