August 04, 2006
In this issue of HPCwire, I'd like to highlight two interviews with individuals who are promoting productivity-enhancing skills for technical professionals -- which, I'm guessing, describes everyone reading this article. Greg Wilson, an adjunct professor in Computer Science at the University of Toronto, discusses the importance of basic software engineering for scientists and engineers. John West, the director of the Major Shared Resource Center at the U.S. Army Engineer Research and Development Center, argues for the development of leadership skills in all technologists.
The lack of these "extracurricular" skills in scientists and engineers points to a common problem in the way we currently teach technologists. While our colleges and universities generally provide adequate, or even great, science and engineering training, other practical skills are traditionally left unaddressed. This is exacerbated by the increasing specialization of technology disciplines, which squeezes general educational requirements down to the most basic courses. This forces scientists and engineers to learn some of the basic skills they need for their careers "on the job." West and Wilson are offering some guidance to help fill in the gaps.
Software engineering for the rest of us
As computers become more capable of modeling complex phenomena, software is insinuating itself into all science and engineering endeavors. Today, almost all technology advancements depend upon software to one degree or another. With that in mind, Greg Wilson believes scientists and engineers should learn at least the basics of software engineering -- what he calls "software carpentry."
At this point, some of you might be thinking: "I already know how to program. What else is there?" Software engineering is about more than just how to write an application in a particular programming language. In many ways, that's the simplest part of the process. Software of any useful size or complexity has to be managed. You need to know how to track bugs and software versions, automate builds, develop unit tests, write reusable code and just generally manage the development process.
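To make the idea concrete, here is a minimal example of one of those practices -- the unit test -- written in Python. The conversion function and its tests are illustrative inventions, not material from Wilson's course; the point is that each test documents and automatically checks one assumption about the code, so a future change that breaks the assumption fails loudly instead of silently corrupting results.

```python
import unittest

def celsius_to_kelvin(temp_c):
    """Convert a Celsius temperature to Kelvin."""
    if temp_c < -273.15:
        raise ValueError("temperature below absolute zero")
    return temp_c + 273.15

class TestCelsiusToKelvin(unittest.TestCase):
    def test_freezing_point(self):
        # Water freezes at 0 C, which is 273.15 K.
        self.assertAlmostEqual(celsius_to_kelvin(0.0), 273.15)

    def test_absolute_zero(self):
        # The lowest physically meaningful input maps to 0 K.
        self.assertAlmostEqual(celsius_to_kelvin(-273.15), 0.0)

    def test_rejects_impossible_temperature(self):
        # Inputs below absolute zero should raise, not return garbage.
        with self.assertRaises(ValueError):
            celsius_to_kelvin(-300.0)

if __name__ == "__main__":
    unittest.main()
```

Run with "python test_convert.py" and every assumption is re-verified in a fraction of a second -- exactly the kind of cheap, repeatable safety net that most scientific code lacks.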
Wilson has created an online course for (non-computer-science) professionals that describes how to do this. According to Wilson's own estimates, it would probably take two to four weeks to go through all of the online material, depending on your pace. For a taste of what's in the course, read the interview (Software Carpentry for Scientists and Engineers) in this week's issue.
Leadership is not just for managers
It's a cruel irony that technical skills only take you so far in our technology-focused world. To really succeed, you have to be able to manage your career. This truism is least obvious to the newest technology professionals right out of college. It is these individuals who are the main target of John West's book, "The Only Trait of a Leader." Much more than just "Management for Dummies," West's guide speaks to all levels of the organization.
In the book he attempts to dispel the stereotype of the scientist or engineer who has a lot of technical depth but lacks "people" skills. According to West, not only does everyone have the capacity for leadership, everyone should develop this ability in themselves, whether they aspire to management or not. The book has an inspirational tone, but talks about the nuts and bolts of how to think like a leader and how to develop the specific skills that go along with that.
Why the focus on scientists and engineers? West believes technologists are a part of the "creative class" (my quotes, not his), who are the driving force behind the future of society. He also points out that creative people tend to be resistant to traditional management, but thrive under enlightened leadership.
So what is the only trait of a leader? Read the interview (Technology Leadership Begins With the Individual) in this week's issue to find out.
In other news...
IBM has apparently decided it likes AMD's Opteron processors a lot more than it originally thought. This week the company announced five new Opteron-based server products aimed at enterprise and high performance computing -- markets which are increasingly blurring into each other. The announcement came a couple of weeks before the expected release of the Rev F Opteron chips, the next generation of AMD processors that will be going into the new servers. By expanding its line of AMD-based offerings, IBM is following the success that its main competitors, HP and Sun Microsystems, have had with high performance Opteron blades and servers, as well as the success of its own LS20 blade.
Even though IBM was talking Opterons this week, it's still planning on offering Cell processor-based systems in the not-too-distant future. Presumably, these machines will be specifically targeted to the HPC market. In anticipation of this, other HPC folks have been busy investigating how to use the Cell processor most effectively for supercomputing workloads.
For example, Jack Dongarra and his team at the Innovative Computing Laboratory are continuing their work on exploiting single precision arithmetic on the IBM Cell processor. Dongarra reports that his team has successfully implemented the method we described in a recent HPCwire article (Less is More: Exploiting Single Precision Math in HPC) on the Cell chip. He says they are getting close to 100 GFlop/s for a double precision result on the 3.2 GHz Cell processor using their approach. According to him, this is 6.7 times faster than the Cell's double precision peak, half of the single precision performance, and over 8 times faster than the normal double precision performance. The results can be seen at http://icl.cs.utk.edu/iter-ref/.
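The general idea behind the technique is to do the bulk of the arithmetic in fast single precision and then recover full accuracy with a few cheap double precision correction steps. Below is a minimal NumPy sketch of that mixed-precision iterative refinement loop for a linear solve. It is an illustrative reconstruction of the approach, not Dongarra's Cell implementation; the function name and tolerance are invented for the example.

```python
import numpy as np

def solve_mixed_precision(A, b, tol=1e-12, max_iter=30):
    """Solve Ax = b using single precision solves plus
    double precision iterative refinement (illustrative sketch)."""
    # Do the expensive solve in fast single precision.
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        # Compute the residual in full double precision.
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Solve the correction equation in single precision again.
        # (A real implementation would factor A32 once and reuse
        # the LU factors for every correction solve.)
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
    b = rng.standard_normal(n)
    x = solve_mixed_precision(A, b)
    print(np.linalg.norm(A @ x - b))  # residual near double precision accuracy
```

The payoff comes on hardware like the Cell, where single precision runs many times faster than double: nearly all the flops happen at the fast rate, while the few double precision residual computations restore full accuracy.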
If that doesn't excite your inner geek, I guarantee this will: There is a thoroughly entertaining MIT Technology Review interview with Seth Lloyd, which everyone should take a look at. Lloyd, a prominent leader and innovator in the field of quantum computing, discusses the premise of his latest book, "Programming the Universe," which proposes the idea that the universe is itself a quantum computer.
Says Lloyd: "We couldn't build quantum computers unless the universe were quantum and computing. We can build such machines because the universe is storing and processing information in the quantum realm. When we build quantum computers, we're hijacking that underlying computation in order to make it do things we want: little and/or/not calculations. We're hacking into the universe."
You can read the entire Seth Lloyd interview at http://www.technologyreview.com/read_article.aspx?id=17091&ch=infotech.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - August 03, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.