February 01, 2002
This is the first of a series of executive level interviews HPCwire will be conducting throughout the year as part of our 10-year anniversary. In the coming months, HPCwire will select executives from throughout the HPC community and conduct in-depth interviews with them. Each of the interviews will begin with a look at the business side of their relationship to HPC and then go into a more personal (and hopefully colorful) profile to give HPCwire readers a more intimate look at those shaping our industry.
Since the impetus for these interviews is HPCwire's 10-year anniversary celebration, we thought the most appropriate starting point would be an interview series with Tom Tabor, publisher and founder of HPCwire.
Mike Bernhardt: It seems like you've been around HPC forever, Tom. How long have you been personally involved in HPC?
Tom Tabor: This question really dates me. I was publisher of Supercomputing Review Magazine, which started covering HPC from practically the very beginning of commercial supercomputing. The first edition of Supercomputing Review was early fall of 1988. Coincidentally, the very first SC conference was held that year in Orlando, and the conference program and show directories were printed in Supercomputing Review -- we were the conference directory for that first show. So to answer your question, I've been active in the HPC space since 1988 -- 14 years now.
Bernhardt: I remember Supercomputing Review with much fondness. As you recall, I was one of your steady advertisers back then when I worked for Multiflow Computer. So, you started out with a monthly print magazine, but now HPCwire is exclusively electronic. When did you start experimenting with e-publishing?
Tabor: In 1989, with some early funding from ANS, an IBM-Sprint partnership, we started experimenting with Internet publishing. It was very sloppy back then. We were running a Unix bulletin board service, using a program we had written, on a server located in NYC. We would access this on a DEC workstation from San Diego on a 56k-modem dial-up. At that time, it was considered blazing speed, since most PC modems were running at 1200 BPS. We called the news service we were producing SuperNET. We were posting daily HPC news stories that people read for free, but they had to log on. What we quickly recognized was the passive nature of online publishing. The response to the bulletin board service was good, but getting readers to log on regularly was difficult. So we looked to the newspaper model -- if you deliver it to the front door they'll read it...but if you make them go to a newsstand, you'll lose readers. We began using e-mail to deliver our publication. Little did we know we were way ahead of the push technology publishing curve.
Bernhardt: So at one point you had both Supercomputing Review and an electronic publication. Why didn't you keep both?
Tabor: The Persian Gulf war and the depressed economy of the early '90s seriously hurt Supercomputing Review, as it was dependent on advertising revenues, but our electronic publishing effort was flourishing. After nearly 9 months of falling revenues, we sold off Supercomputing Review and re-launched the electronic version as HPCwire. That was back in August of 1992. Electronic publishing, delivered via e-mail, is clearly the preferred medium in HPC, as our steady growth and widespread acceptance have proven.
Bernhardt: What a departure from hard copy publishing! What was the response to the new electronic model?
Tabor: As I mentioned earlier, we were operating as a Unix dial-up news service and losing regular readers. At the time, I was trying to do two things: launch Tabor Griffin Communications (parent of HPCwire) and understand e-publishing. Like our readers, I found that regularly having to log onto the service was terribly inconvenient. I would miss some of our better stories. Even more embarrassing was when someone would comment to me about something we did, and I hadn't a clue as to what they were talking about. So I asked Matt Burns, our managing editor at the time, to send me a weekly email bulletin with abstracts of the top 10 articles, noting where on the service I would find them. I immediately realized how helpful this was and was prompted to show it to several colleagues -- Sid Karin, Jack Dongarra, Larry Smarr, all people whose opinions I valued. Their response was what I expected and within a month we launched the e-mail version of HPCwire. We shut down the bulletin board service and went exclusively with the e-mail version. That was February 1993. The following month we started charging a subscription fee for the e-mail publication.
That was a scary time for us. The thought of charging for Internet-delivered info was unconscionable... everything on the Net was free! If our readers responded negatively to this new paid model, we'd lose them all and consequently lose our business. But, I'm happy to report, our fears were never realized. Actually, just the opposite happened: for nearly 2 solid months our credit card machine never stopped processing subscriptions.
Bernhardt: This was well before buckets of dollars were flying around for net start-ups; how did you fund this new venture?
Tabor: We ran very lean in the early days and constantly looked for cost-cutting strategies. Basically, we started with my own investment and then we bootstrapped it with sales revenue.
Bernhardt: When you first launched HPCwire what was your initial readership and what is it now?
Tabor: As I recall, we started with a few hundred subscribers that quickly grew to a few thousand. Today, we estimate that HPCwire reaches approximately 50,000 readers worldwide each week. We've recently added a significant number of new readers through partnerships with many of the major conferences and through our new program for research centers. We expect our readership to grow significantly this year. We're projecting 40-percent-plus growth in 2002.
Bernhardt: HPCwire has been covering HPC for such a long time, my guess is that your readership reads like a "who's-who". Can you tell us something about your readers?
Tabor: This is really the exciting part of the business: to see who in the world you influence. Our readership consists of many industry opinion makers both inside and outside of HPC. HPCwire's subscriber list does in fact read like a who's-who of the computing industry. We have site license subscriptions with all branches of the U.S. Government, including a number of White House offices, all the leading research and science centers, the leading academic institutions around the world, many of the Fortune 500 organizations, and of course, pretty much all of the leading computer hardware and software manufacturers.
Bernhardt: On the vendor side, your list of advertisers also looks like a "who's-who" in HPC. Can you elaborate on your sponsors?
Tabor: No question about it, our sponsors are the leaders of HPC and are committed to the industry as a whole. If a company is not an HPCwire sponsor you can count on one of two things: either they are experiencing financial difficulties, or HPC is just a side business for them. Take a look at the list -- Intel, sgi, Sun, IBM, HP, Compaq, NEC, Fujitsu, Linux NetworX...just to name a few. These are the companies who deliver a broad array of HPC solutions.
Bernhardt: While we're discussing your business, do you have any competitors?
Tabor: There are many rogue sites that are published by either academicians or hobbyists, but nothing as committed to the industry as we are. Our real advantages in this space are that we've been doing this for nearly 14 years, we've partnered with all of the major conferences worldwide, and we reach all of the leaders of HPC. It's easy to put up a site. The real challenge, as in any publishing venture, is reaching the global readership. This component is easily the most difficult part of launching any publishing venture and takes the most time and resources. In all modesty, I believe HPCwire is considered the definitive source of news for the HPC industry.
Bernhardt: Looking back over the past 10 years, what are some of the major changes you've seen?
Tabor: Two things. One is clock speeds. In 1990, Supercomputing Review, for the first and only time, featured a system-related cover photo. Normally, we featured a graphic image from a solution application. The system we featured was built around the Intel i860, a chip touted as a supercomputer on a chip...it was a very big deal. The i860's clock speed was a blazing 33 MHz!
Today, processor speed and capability continue to be the basic building blocks driving the HPC industry forward.
The other change is the growth and greater use of HPC in mainstream applications. In 1992, you knew who was doing HPC-type work and research. Today, with the availability and reliability of technical computing and business intelligence computing, the industry is quite large. I often see analyst figures on the size of the industry, and I shake my head in wonder.
Bernhardt: As things continue to change, what do you see as the most crucial issues facing the HPC community?
Tabor: One of the most crucial issues is the need for globalization. There is an ever increasing need for "true" collaboration and distributed development teams. This requires highly efficient, high-speed, highly secure remote access. Infrastructures such as the TeraGrid and the inevitable "sub-grids" that will evolve will undoubtedly change the face of high performance computing.
Another issue of growing concern is the narrowing focus of research funding and the limits this will place on the range of unique, cutting-edge architectures being pursued. There is a real need to engage industry in the funding of research and the development of new technologies.
Bernhardt: Mergers and shake-outs have significantly altered the HPC vendor landscape during the past few years. Can this trend continue? How do you see HPC vendor strategy evolving over the coming years?
Tabor: The HPC market is not unique. The trend of mergers and shakeouts has been all around us.
Will this trend continue? Certainly. Generally speaking, as long as there are more than two players in the game, there will be positioning to better compete. As to how vendor strategy will evolve...I'm afraid my crystal ball isn't quite powerful enough to answer that with certainty. However, aside from day-to-day competition for business, there are a few issues vendors should be concerned with.
For example, there's consolidation. HPC is a very dynamic segment, and consequently there will always be some interesting jockeying; vendors must keep an eye on the bigger fish they contend with and on the players who can be partners -- or who may go back and forth across the line as coopetition.
On the other hand, there's fallout. While there wasn't any of this in '99 and 2000, we will very likely lose companies over the next 18 months.
The Grid, though still in its early stages, is a reality and will significantly change the landscape. This goes hand-in-hand with globalization. The Net has eliminated a lot of the disparity in the information available to researchers in different locations. Researchers worldwide are equally informed, communicate more freely, and share resources. The continued challenges are compatibility, file sharing capabilities, and security of remote access.
Finally, there is the potential impact of COTS. Intel is back in HPC with a powerful 64-bit chip and renewed, strong alliances with the major platform providers. On the one hand, this has the potential to drive up price/performance and drive down profits, as more emphasis is placed on the high end. However, Intel is all about volume, and no one knows it better than Intel. This means we'll see higher performance platforms -- initially with a higher price tag, but ultimately driving down the price of computing. Just compare the cost of a teraflop system five years ago to the cost of a teraflop system today.
Bernhardt: How do you view the present balance between academic and industrial HPC?
Tabor: When you talk about academic HPC, you're talking about research, as it should be. It goes without saying, the technology boundaries must always be pushed and expanded. In research, reliability and criticality shouldn't be the primary concern. If it isn't breaking down, you're not pushing the limits. ROI shouldn't even be a factor.
On the other hand, in industry it's all about ROI and reliability. If a system goes down, profit dollars spill onto the floor. Having said that, I think the balance lies in each playing its role and sharing resources, more intimately than ever before.
Of course, this is much easier said than done, but much has already been done to effect this. There are quite a few academic/research centers out there that are hungry to partner with industry. And it makes great sense -- they are the perfect test bed for industry to experiment with and integrate leading edge technology. If there is a forum that must be arranged, it is one for this purpose: connecting industry and the academic/research community. Two efforts to note: the Industrial Advisory Committee within the SCxy conference is working very hard on how best to engage the two segments, as is IDC's HPC User Forum.
Bernhardt: As we move into 2002, whom or what should we be watching?
Tabor: This month, for the first time ever, I will publish my list of the top 20 people to keep your eye on in 2002. I intend to publish this list at the beginning of each year, so be certain to read the February 15th issue!
In addition to that, a huge segment developing for HPC is bioinformatics, or BIO IT. Bio IT will be the single largest user of HPC systems in the very near future. Applications such as proteomics, systems biology, and medical research will all require large computational resources.