January 26, 2011
When announced in 2006, the Cray XMT supercomputer attracted little attention. The machine was originally targeted at high-end data mining and analysis for a particular set of government clients in the intelligence community. While the feds have given the XMT support over the past five years, Cray is now looking to move these machines into the commercial sphere. And with the next-generation XMT-2 on the horizon, the company is gearing up to accelerate that strategy in 2011.
From a company-wide standpoint, the XMT is to big data-intensive applications what the Cray XT and XE product lines are to big science. The machine is made to deal with really huge datasets -- we're talking terabytes -- whether they be technical or non-technical in nature. But the XMT is actually designed for a specific flavor of data-intensive application: those that must deal with irregularly structured data at scale -- what are sometimes referred to as graph analytics problems.
These can be broken down further into two general categories. The first is the finding-the-needle-in-a-haystack problem, which involves locating a particular piece of information inside a huge dataset. The other is the connecting-the-dots problem, where you want to establish complex relationships in a cloud of seemingly unrelated data.
The most natural computational model for these types of applications is one in which thousands of computational threads inhabit a large global memory space. To further maximize performance, fine-grained thread synchronization is required. Broadly speaking, this model is not supported by more mundane cluster computing platforms, such as you might find with a traditional Oracle or Netezza database appliance. Unless the application can be partitioned naturally across cluster nodes and data access patterns are fairly regular, performance will suffer.
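To make that access pattern concrete, here is a minimal sketch of one parallel breadth-first-search step over a compressed adjacency list, written in generic C with OpenMP rather than Cray's XMT toolchain (the function and type names are illustrative, not from any Cray API). Each neighbor lookup is a data-dependent hop to an arbitrary part of the graph, and each thread must atomically claim the vertices it discovers -- precisely the irregular access and fine-grained synchronization described above.

```c
/* Sketch: expand one BFS frontier level over a CSR-format graph.
 * Plain C11 + OpenMP for illustration; a real XMT code would rely on
 * the machine's hardware word-level synchronization instead. */
#include <stdatomic.h>
#include <stddef.h>

typedef struct {
    const size_t *row;   /* row[v]..row[v+1] bounds v's slice of col[] */
    const size_t *col;   /* flattened neighbor lists */
} Graph;

/* Visit every neighbor of the current frontier; returns the size of
 * the next frontier. visited[] must start all-zero except the root. */
size_t bfs_level(const Graph *g, const size_t *frontier, size_t nfront,
                 size_t *next, atomic_bool *visited)
{
    atomic_size_t nnext = 0;
    #pragma omp parallel for schedule(dynamic, 64)
    for (size_t i = 0; i < nfront; i++) {
        size_t v = frontier[i];
        for (size_t e = g->row[v]; e < g->row[v + 1]; e++) {
            size_t w = g->col[e];        /* data-dependent, irregular read */
            _Bool seen = 0;
            /* Per-vertex atomic claim: the fine-grained synchronization */
            if (atomic_compare_exchange_strong(&visited[w], &seen, 1))
                next[atomic_fetch_add(&nnext, 1)] = w;
        }
    }
    return nnext;
}
```

On a partitioned cluster, every one of those neighbor reads can land in another node's memory, which is why such codes fall off a performance cliff there; in a large flat shared memory they are just loads.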
The encouraging news for XMT proponents is that over the last several years, large-scale analytics applications using unstructured data have become much more mainstream. Areas such as intelligence/surveillance, protein folding, genomics, credit fraud detection, semantic searching, social network analysis, computational geometry, scene recognition, and energy distribution all rely on large collections of unstructured data. As such, the XMT is suitable for many high-end analytics applications in business intelligence, scientific research, and Web search.
It's no coincidence that companies like Google, Facebook, and Amazon that use data mining are attracting the same scrutiny from civil libertarians that used to be reserved for the three-letter government agencies. Both are now running essentially the same kinds of applications. Businesses and governments alike want to sift through enormous databases in order to extract real-time intelligence, and that is nowhere more apparent than in the rise of the semantic Web.
In fact, social network analysis is one of the big application areas Cray is targeting for its XMT product -- that according to Shoaib Mufti, Cray's director of Knowledge Management. Mufti says search engines are moving toward more complex analysis, especially in the area of natural language processing. The goal here is to interpret the search input more precisely in order to deliver more accurate results. All of this processing has to be done interactively, which puts an enormous strain on conventional hardware.
For example, instead of delivering 1,000 pages of search results to sift through, a semantic search engine will only deliver a handful of the most relevant sites, or perhaps even just one. This is not mainstream technology today, but with the spread of mobile platforms (whose natural interface just happens to be spoken input), there will be an enormous demand for semantic searching. "We see a huge potential for XMT in providing value there," says Mufti.
There's also a big demand for graph-type problems in the financial industry, such as the aforementioned area of fraud detection. In this case banks need to search through thousands or even millions of credit transactions looking for evidence of bogus activity. The volume of transactions and the need for real-time response are pushing this application beyond the bounds of conventional computing systems.
Conventional the XMT is not. The supercomputer has some stand-out features not found in other highly parallel platforms. The most obvious is that it marries an extreme multithreading CPU, Cray's custom Threadstorm processor, with a high-capacity shared memory architecture. Many shared memory systems, such as SGI's Altix UV, are based on conventional x86 technology. Although a UV machine can offer up to 64 threads per node (with four 8-core CPUs), a single Threadstorm chip supports 128 threads. Better yet, each Threadstorm draws just 30 watts, about a third that of a high-end x86 CPU. In addition, the XMT supports fine-grained synchronization in hardware, which helps hide latencies across the threads.
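That hardware synchronization deserves a word of explanation. Every 64-bit word of XMT memory carries a full/empty tag bit, so a thread can perform a read-when-full or write-when-empty operation on any individual word, and the processor simply runs one of its other hardware threads while one waits. A rough portable emulation of that word-level handshake (using pthreads; the names here are illustrative, not Cray's actual intrinsics) looks like this:

```c
/* Rough emulation of an XMT-style full/empty synchronized word.
 * On the real hardware the tag is a bit on every memory word, and
 * waiting costs little because the processor just switches to one
 * of its other 127 hardware threads. Names are illustrative. */
#include <pthread.h>
#include <stdint.h>

typedef struct {
    uint64_t value;
    int full;                 /* the emulated full/empty tag bit */
    pthread_mutex_t lock;
    pthread_cond_t changed;
} sync_word;

/* Wait until the word is full, read it, and mark it empty. */
uint64_t read_full_set_empty(sync_word *w)
{
    pthread_mutex_lock(&w->lock);
    while (!w->full)
        pthread_cond_wait(&w->changed, &w->lock);
    uint64_t v = w->value;
    w->full = 0;
    pthread_cond_broadcast(&w->changed);
    pthread_mutex_unlock(&w->lock);
    return v;
}

/* Wait until the word is empty, write it, and mark it full. */
void write_empty_set_full(sync_word *w, uint64_t v)
{
    pthread_mutex_lock(&w->lock);
    while (w->full)
        pthread_cond_wait(&w->changed, &w->lock);
    w->value = v;
    w->full = 1;
    pthread_cond_broadcast(&w->changed);
    pthread_mutex_unlock(&w->lock);
}
```

Because the tag travels with every word, a producer-consumer handoff or a per-element lock costs a single memory operation rather than a heavyweight software lock, which is what lets tens of thousands of threads coordinate on one shared dataset without serializing.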
The underlying architecture is based on Cray's mainstream XT platform, right down to the SeaStar2 interconnect and the AMD socket that Cray uses for the Threadstorm processors. In this way the company was able to reuse existing componentry, while at the same time providing a highly scalable platform for the Threadstorm technology. Today the system tops out at 8,024 processors, which can aggregate more than a million threads (8,024 processors x 128 threads apiece is 1,027,072 threads), and 64 terabytes of shared memory, the highest capacity of any such machine, says Mufti.
According to him, an XMT supercomputer can deliver 10 to 100 times better performance than conventional architectures on problems that exhibit irregular data access patterns. Making comparisons is somewhat problematic, though, since there is as yet no widely accepted benchmark for graph problems. The new Graph 500 organization wants to fill that void, but its benchmark is still evolving. In the first Graph 500 results, announced at SC10 last November, a 128-node XMT machine came in third place, beaten out only by two much larger systems: an IBM Blue Gene/P (using 8,192 nodes) and a Cray XT4 (using 544 nodes).
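For context, Graph 500 scores systems on a breadth-first search across a large synthetic graph, reported in traversed edges per second (TEPS), so a submission's headline number reduces to a simple ratio. A sketch of the scoring arithmetic (illustrative only; the real harness validates the search and aggregates statistics over many BFS roots):

```c
/* Graph 500-style figure of merit: traversed edges per second (TEPS). */
double teps(double edges_traversed, double bfs_seconds)
{
    return edges_traversed / bfs_seconds;  /* 7.0e9 edges in 10 s = 0.7 GTEPS */
}
```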
Despite its computational muscle and its five-year history, the XMT business is still very much a work in progress. Mufti's Knowledge Management team, which oversees the XMT product, is run out of Cray's Custom Engineering division, a group focused on developing new business opportunities. Cray doesn't break out how much revenue is generated from XMT sales, and you'd be hard-pressed to find a dollar figure associated with any current deployment at government agencies or research labs. Nevertheless, the company must be generating enough sales to warrant ongoing development.
Sometime later this year, Cray intends to launch XMT-2, the first system upgrade in five years. As it targets the broader market, Cray is also looking to make the machine easier to use. A lot of this will come via partnerships with software firms like Cambridge Semantics and Clark & Parsia, LLC, which are developing semantics tools and middleware for large-scale analytics.
For the XMT-2 system itself, Cray is focusing on scalability and TCO. Although not ready to release details, according to Mufti the next generation has scaled "significantly." This was done to accommodate ever-growing problem sizes, especially in regard to database memory requirements. While this is yet to be confirmed, it's logical to assume the new system will move up to the latest Gemini interconnect used in the XE line in order to take advantage of the increased performance. The next-generation Threadstorm processors will also likely benefit from smaller transistor geometries, allowing for better performance per watt, more threads, or a little of both. Overall, says Mufti, XMT-2 will be denser as well as more energy efficient, and the underlying technology "will be taken to the next level."