October 07, 2005
Two months ago, HPCwire broke the news that Steve Scott, chief technology officer at Cray Inc., had decided to leave the company. As one of HPC's most respected computer architects -- underscored by his election to HPCwire's People To Watch 2005 list -- we viewed his exit as a great loss to the world of high performance computing. So HPCwire was elated to learn that Scott has rejoined the supercomputer maker. In this exclusive interview, Scott sheds some light on the reasons for his return. -- Peter Meade, HPCwire editor
HPCwire: When you left Cray you said you'd worked there since you were a grad student intern and wanted to explore other possibilities. Did you have a chance to do that?
Scott: I did. I talked with several companies inside the HPC industry and some exciting companies whose business is far removed from HPC. It was a great experience, something I never had a chance to do before.
HPCwire: Did you consider taking another job?
Scott: There were several that were interesting.
HPCwire: But you decided to return to the CTO role at Cray. Why?
Scott: I'd been working non-stop on one Cray machine after the other for more than a decade. Taking some time off and exploring other possibilities helped put things in perspective. I realized Cray was uniquely positioned to innovate and drive high-end computing in the right direction, plus Cray is making the right changes at a corporate level to become an industry leader again. I wanted to continue to be a part of that. When I left, the company said the door was open if I changed my mind, so I took advantage of that.
HPCwire: As Cray's CTO, what will you focus on?
Scott: I'll be reporting directly to Peter Ungaro, our CEO, so my role will be expanded across all of Cray's activities. First, the DARPA HPCS program is coming into the home stretch, and I'll be leading that effort. I'm excited by what Cray is proposing for phase three. I'll also have primary responsibility for driving Cray's product roadmap, working closely with various Cray teams to make sure that our products work well together and meet customers' needs. And, of course, I will continue to engage with customers, government partners and the larger research community.
HPCwire: In the short time you've been away from Cray, has anything changed?
Scott: The main difference is that the new products are farther along and the outlook for the future is better than ever. I read HPCwire's interview with Thomas Zacharia of ORNL a few weeks ago, and I think he summed things up well. On both the Cray X1E and the Cray XT3 systems, users are already doing breakthrough science that can't be done on any other machine today. These products are still fairly new, and in the case of the XT3, there was no direct predecessor like the Cray X1 for users to gain experience with. In my mind, that's what Cray's about -- building machines that stand out from the crowd of HPC products and that deliver higher performance on real-world problems. The Cray XD1 plays the same role in its category.
We've also ramped up development on some next-generation products that will be coming out between now and the HPCS system at the end of the decade. Cray is breaking new ground in the area of scalable, heterogeneous computing that will make supercomputing more effective than ever.
HPCwire: We spoke with [Cray board member and former CEO] Jim Rottsolk last week and he remains quite optimistic about the future for Cray. That picture has gotten considerably brighter with your return. In your view, what's next for Cray?
Scott: We're focusing hard on the HPCS program, as I mentioned, and we are preparing to build some very large systems. We have plans to break the 100 teraflop performance level in 2006 and go to a petaflop in 2008. Our plans include integrating scalar and vector technology to best fit the wide range of applications out there. For customers focused on high-end scientific and engineering breakthroughs, we have a very exciting roadmap.