October 26, 2011
A report in InfoWorld this week talks about the possibility of NVIDIA putting future Tegra chips into servers. Tegra, NVIDIA's first integrated CPU-GPU processor, is currently targeted at mobile devices like smartphones and tablets. The current generation integrates an ARM Cortex-A9 CPU and an ultra-low-power GeForce GPU on a single chip.
The article quotes NVIDIA Tesla CTO Steve Scott, who says future Tegra processors will incorporate the "Project Denver" architecture, which marries a custom ARM CPU implementation with a future GPU design. The idea behind Project Denver is to eventually use it across all of NVIDIA's product lines -- Tegra, Quadro, GeForce, and Tesla -- spanning the client, server, and supercomputer markets.
The news here is that even the lowly Tegra chips may end up doing server duty. The implication is that a Denver-powered Tegra processor will be powerful enough to handle datacenter applications such as rendering and other visual computing workloads. And since the Tegra line represents the low-power end of the portfolio, it may hold special appeal for setups delivering these services in the cloud or other scaled-out infrastructures.
Presumably the higher-end Denver-flavored Tesla products will be specifically targeted at more heavy-duty HPC work, and these processors will incorporate much more GPU silicon, and perhaps more ARM cores, than the lower-end products. None of this has been spec'ed out by NVIDIA at this point, though.
These heterogeneous Denver chips will almost certainly be competing with AMD's APU (Fusion) processors, which are also expected to make their way into the datacenter in the next couple of years. And even Intel may come up with a CPU-GPU chip for the datacenter (the Sandy Bridge client-side products integrate GPUs today). So the choice may come down to an ARM-based heterogeneous processor versus an x86-based one. And while x86 has a leg up in this domain, Scott points out that legacy binary compatibility doesn't matter as much on the server side as it does on the client side.
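To make that last point concrete, here is a minimal sketch (our illustration, not anything from the article or from NVIDIA): a trivial CUDA SAXPY program. Server-side HPC code like this is typically rebuilt from source for its target machine, so the same source compiles unchanged whether the host compiler targets an ARM or an x86 CPU -- the GPU kernel is identical either way, and the host side needs only a recompile rather than binary compatibility.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// SAXPY kernel: y = a*x + y. The device code is the same regardless of
// which host ISA (ARM or x86) the surrounding program is compiled for.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers live on the CPU (whatever its ISA); device buffers on the GPU.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // expect 4.0 (2.0 * 1.0 + 2.0)

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

Nothing in this program depends on the host processor's instruction set; a datacenter operator moving from an x86 node to an ARM-based one would simply recompile, which is the sense in which legacy binary compatibility carries less weight on the server side.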
NVIDIA is pinning a lot of its hopes on the success of this heterogeneous design, as traditional discrete GPUs at the mid- and low-end of the spectrum get sucked up onto CPU platforms. With formidable companies like Intel and AMD, and even other ARM-based chip makers, melding CPU and GPU silicon, there will be plenty of competition for NVIDIA to contend with.
Full story at InfoWorld