May 16, 2012
Bill Dally, NVIDIA’s chief scientist, spent some time speaking with EE Times at this year’s GPU Technology Conference (GTC) in San Jose, California. The two discussed 3D integrated circuits, the rise of China as a technology player, the state of US R&D spending and a possible alternative to Ethernet.
In the 3D chip realm, Dally sees a future where GPUs are integrated with several stacks of memory. The design would potentially deliver higher-bandwidth communication while consuming less power per memory stack. But there appears to be a disagreement between NVIDIA and memory maker Micron, which aims to supply its own integrated 3D IC in the form of its Hybrid Memory Cube.
“What we want from a memory vendor is just the memory, and for them to please leave the processor design to us,” said Dally. “But they are trying to capture a bigger part of the value chain, so they insist on building a shim [logic] chip [as part of their memory stack].”
The current outlook for other memory alternatives is dim: Elpida, the only other vendor looking to supply 3D memory, has declared bankruptcy. Since Micron is interested in purchasing Elpida, Dally is hoping a large manufacturer like Samsung will agree to produce the needed memory.
When asked if China might build a rival GPU, the chief scientist didn’t deliver a direct answer, but described the country’s fast rate of progress as “frightening.” “Five years ago Godson was laughable,” said Dally. “Now it’s competent but not state of the art. If they continue, I would expect them to be matching the West in three to five years and then pulling ahead.”
Speaking to domestic R&D funding, Dally observed that the US has reduced investment in computing to a trickle. He argued that innovation drives competitive products, and that this kind of innovation requires government investment beyond what private companies are willing to spend. A specific example he gave was parallel computing and the general lack of knowledge surrounding the subject: programmers, he noted, are still being educated to write serial code.
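Dally's point about serial-first education can be illustrated with a minimal sketch. The workload and function names below are invented for this example, not drawn from the interview; it simply contrasts the serial loop most students learn first with a parallel map-reduce version of the same computation.

```python
# Hypothetical illustration: the same sum-of-squares computation written
# serially (the style most programmers are taught) and in parallel.
from multiprocessing import Pool


def square(x):
    return x * x


def serial_sum_of_squares(values):
    # One core does all the work, one element at a time.
    total = 0
    for v in values:
        total += square(v)
    return total


def parallel_sum_of_squares(values, workers=4):
    # The map step is spread across worker processes; the reduction
    # (sum) gathers the partial results on the parent process.
    with Pool(processes=workers) as pool:
        return sum(pool.map(square, values))


if __name__ == "__main__":
    data = list(range(1_000))
    # Both versions compute the same answer; only the execution model differs.
    assert serial_sum_of_squares(data) == parallel_sum_of_squares(data)
```

The parallel version only pays off when the per-element work outweighs the process-coordination overhead, which is exactly the kind of trade-off reasoning that, per Dally, curricula rarely teach.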
The conversation turned to Stanford and what Dally views as the university’s most promising research. He mentioned a program in which researchers are looking to take supercomputing interconnect technology and deliver it to commercial datacenters. Stanford has worked with Cray on the Dragonfly interconnect for the Cascade system and has begun pitching the technology to Google and Facebook, who, according to Dally, loved it for its low latency. The Stanford team plans to test the design on a small FPGA cluster, and if everything goes as planned, they’ll start looking for a commercial adopter.
Full story at EE Times