September 22, 2008
NEW YORK CITY – Considering the ongoing crisis in the financial markets, you might expect the mood at a gathering of people who make their living in the financial services industry to be kind of glum. Or at least very, very anxious.
Not so at today’s HPC on Wall Street conference in New York. On the local misery index, where 1 equals “Yankees lose” and 10 is “Yankees lose to Red Sox,” the mood here felt like about a 3. Of course, had this been a confab of brokers and traders, that number would probably be higher.
That’s not to say recent headlines weren’t on people’s minds. In just about every panel discussion or one-on-one conversation, there were references made to “the market chaos” or “last week’s turmoil.”
But this crowd of technology users and technology providers appeared to be thinking more about speed: Faster transactions, faster data feeds, faster analysis, faster reporting, faster applications. You just can’t get enough. As Peter Lankford, director of the Securities Technology Analysis Center (STAC), put it: “We used to trade in hundreds of milliseconds. Now we’re at the point where tens of milliseconds really matters. Requirements keep intensifying.”
It seemed like every other vendor was offering a cure for latency. Hardware to accelerate floating-point operations, messaging, storage, and stock ticker feeds was in relative abundance. Speeding things up is always the theme here, but this year there seemed to be a proliferation of companies building accelerators out of ASICs and FPGAs. (More on that in a later report.)
Windows on Wall Street
Microsoft VP Bill Laing delivered the opening keynote to talk about how financial firms need to reduce risk and increase gains. Everyone seemed to be in agreement on this. Doing so will require something like a rebuilding of the datacenter, or at least some changes to the HPC infrastructure. Rumors that local comedian Jerry Seinfeld would show up to promote the big product of the day — Windows HPC Server 2008 — proved totally untrue, but Laing did a good job summarizing the new features and benefits, and a fellow from Lloyds TSB of London backed him up with real-world results. (More on that later too.) The upshot of Microsoft’s push is that HPC isn’t just right for Wall Street; it’s coming to Main Street too.
The software guys took some hits at several panel discussions. Apparently it’s the lack of parallel programming smarts that’s holding up progress. “We need more people who know parallelism,” said one panelist. The inability of developers to take advantage of all these cores being thrust upon us by Intel and other chip vendors was a concern voiced multiple times.
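To make that concern concrete, here is a minimal, hypothetical sketch (my own illustration, not code shown at the conference) of the kind of embarrassingly parallel financial workload that multicore chips are built for: a Monte Carlo payoff estimate fanned out across hardware threads with C++'s std::async. All parameter values and names are invented for illustration.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <future>
#include <random>
#include <thread>
#include <vector>

// Average discounted payoff of a European call over `paths` simulated prices.
double simulate(unsigned seed, long paths) {
    std::mt19937_64 gen(seed);
    std::normal_distribution<double> z(0.0, 1.0);
    const double s0 = 100.0, strike = 105.0, rate = 0.05, vol = 0.2, t = 1.0;
    double sum = 0.0;
    for (long i = 0; i < paths; ++i) {
        double st = s0 * std::exp((rate - 0.5 * vol * vol) * t + vol * std::sqrt(t) * z(gen));
        sum += std::exp(-rate * t) * std::max(st - strike, 0.0);
    }
    return sum / paths;
}

int main() {
    // One asynchronous task per available hardware thread.
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const long paths_per_core = 1000000;
    std::vector<std::future<double>> jobs;
    for (unsigned c = 0; c < cores; ++c)
        jobs.push_back(std::async(std::launch::async, simulate, c + 1, paths_per_core));

    // Combine the per-core partial averages into a single estimate.
    double price = 0.0;
    for (auto& j : jobs) price += j.get();
    std::printf("Estimated option price: %.4f (using %u cores)\n", price / cores, cores);
    return 0;
}

Trivial as it is, even this pattern of splitting work across cores and merging partial results is the sort of thing panelists said too few developers are comfortable writing.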
With the risky situation in the markets, risk analysis was a hot topic. One panelist said next year’s conference should be called Risk Management on Wall Street.
An ominous side effect of the market downturn and the upheaval in financial services was raised by a guy from IBM’s brainiac division down in Raleigh, North Carolina: “There could be a slowdown in innovation because Wall Street drives much of the research and advances in high performance, low latency, security, high-speed networking, and so on.”
Maybe the most obvious impact of the recent Wall Street news was on the roster of speakers. As one conference organizer said, “A few speakers are missing because their companies no longer exist.”
Posted by Michael Feldman - September 21, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.